
Did xAI lie about Grok 3’s benchmarks?


Debates over AI benchmarks — and how they’re reported by AI labs — are spilling out into public view.

This week, an OpenAI employee accused Elon Musk’s AI company, xAI, of publishing misleading benchmark results for its latest AI model, Grok 3. One of the co-founders of xAI, Igor Babushkin, insisted that the company was in the right.

The truth lies somewhere in between.

In a post on xAI’s blog, the company published a graph showing Grok 3’s performance on AIME 2025, a collection of challenging math questions from a recent invitational mathematics exam. Some experts have questioned AIME’s validity as an AI benchmark. Nevertheless, AIME 2025 and older versions of the test are commonly used to probe a model’s math ability.

xAI’s graph showed two variants of Grok 3, Grok 3 Reasoning Beta and Grok 3 mini Reasoning, beating OpenAI’s best-performing available model, o3-mini-high, on AIME 2025. But OpenAI employees on X were quick to point out that xAI’s graph didn’t include o3-mini-high’s AIME 2025 score at “cons@64.”

What is cons@64, you might ask? Well, it’s short for “consensus@64,” and it basically gives a model 64 tries to answer each problem in a benchmark and takes the most frequently generated answers as the final answers. As you can imagine, cons@64 tends to boost models’ benchmark scores quite a bit, and omitting it from a graph might make it appear as though one model surpasses another when in reality, that isn’t the case.
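For readers curious about the mechanics, here is a minimal sketch of how a consensus@64-style score could be computed. The `model` callable and the `benchmark` list of question–answer pairs are hypothetical stand-ins for illustration, not any lab’s actual evaluation harness.

```python
from collections import Counter

def cons_at_k(model, problem, k=64):
    """Majority-vote ("consensus@k") answer for one problem.

    `model` is assumed to be any callable that returns a candidate
    answer string for a problem; the name is illustrative only.
    """
    # Sample k independent answers from the model.
    answers = [model(problem) for _ in range(k)]
    # The consensus answer is whichever response appears most often.
    consensus, _count = Counter(answers).most_common(1)[0]
    return consensus

def score_cons_at_64(model, benchmark):
    """Fraction of problems where the consensus answer is correct.

    `benchmark` is assumed to be a list of (problem, answer) pairs.
    """
    correct = sum(
        cons_at_k(model, problem, k=64) == answer
        for problem, answer in benchmark
    )
    return correct / len(benchmark)
```

A single-attempt “@1” score, by contrast, simply checks the model’s first answer against the key, which is why the two numbers for the same model can diverge so sharply.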

Grok 3 Reasoning Beta’s and Grok 3 mini Reasoning’s scores for AIME 2025 at “@1” (the score from each model’s first attempt at the benchmark) fall below o3-mini-high’s score. Grok 3 Reasoning Beta also trails ever so slightly behind OpenAI’s o1 model at its “medium” compute setting. Yet xAI is advertising Grok 3 as the “world’s smartest AI.”

Babushkin argued on X that OpenAI has published similarly misleading benchmark charts in the past — albeit charts comparing the performance of its own models. A more neutral party in the debate put together a more “accurate” graph showing nearly every model’s performance at cons@64.

But as AI researcher Nathan Lambert pointed out in a post, perhaps the most important metric remains a mystery: the computational (and monetary) cost it took for each model to achieve its best score. That just goes to show how little most AI benchmarks communicate about models’ limitations — and their strengths.


