The final days of the AI Mathematical Olympiad’s latest competition were a transcontinental relay for team NVIDIA.
Every evening, two team members on opposite ends of the U.S. would submit an AI reasoning model to Kaggle — the online Olympics of data science and machine learning. They’d wait a tense five hours before learning how well the model tackled a sample set of 50 complex math problems.
After seeing the results, the U.S. team would pass the baton to teammates waking up in Armenia, Finland, Germany and Northern Ireland, who would spend their day testing, modifying and optimizing different model versions.
“Every night I’d be so disappointed in our score, but then I’d wake up and see the messages that came in overnight from teammates in Europe,” said Igor Gitman, senior applied scientist. “My hopes would go up and we’d try again.”
While the team was disheartened by their lack of improvement on the public dataset during the competition’s final days, the real test of an AI model is how well it can generalize to unseen data. That’s where their reasoning model leapt to the top of the leaderboard — correctly answering 34 out of 50 Olympiad questions within a five-hour time limit using a cluster of four NVIDIA L4 GPUs.
“We got the magic in the end,” said Northern Ireland-based team member Darragh Hanley, a Kaggle grandmaster and senior large language model (LLM) technologist.
Building a Winning Equation
The NVIDIA team competed under the name NemoSkills — a nod to their use of the NeMo-Skills collection of pipelines for accelerated LLM training, evaluation and inference. The seven members each contributed different areas of expertise, spanning LLM training, model distillation and inference optimization.
For the Kaggle challenge, over 2,200 participating teams submitted AI models tasked with solving 50 math questions — complex problems at the National Olympiad level, spanning algebra, geometry, combinatorics and number theory — within five hours.
The team’s winning model uses a combination of natural language reasoning and Python code execution.
To complete this inference challenge on the small cluster of NVIDIA L4 GPUs available via Kaggle, the NemoSkills team had to get creative.
Their winning model used Qwen2.5-14B-Base, a foundation model with chain-of-thought reasoning capabilities, which the team fine-tuned on millions of synthetically generated solutions to math problems.
These synthetic solutions were primarily generated by two larger reasoning models — DeepSeek-R1 and QwQ-32B — and used to teach the team’s foundation model via a form of knowledge distillation. The end result was a smaller, faster, long-thinking model capable of tackling complex problems using a combination of natural language reasoning and Python code execution.
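As a rough illustration of that distillation recipe, the sketch below turns teacher-model solutions into supervised fine-tuning pairs for a smaller student. The function name, the dictionary fields and the answer-matching filter are all illustrative assumptions here, not the team's actual NeMo-Skills pipeline:

```python
def build_sft_examples(problems, teacher_generate, num_samples=4):
    """Build (prompt, completion) fine-tuning pairs from a large
    teacher reasoning model -- a minimal distillation sketch.

    `teacher_generate(question)` stands in for a call to a large
    reasoning model such as DeepSeek-R1 or QwQ-32B; it is assumed
    to return a dict with "reasoning" and "answer" keys.
    """
    examples = []
    for problem in problems:
        for _ in range(num_samples):
            solution = teacher_generate(problem["question"])
            # Keep only solutions whose final answer matches the
            # reference answer -- a common filtering step, assumed
            # here rather than confirmed by the article.
            if solution["answer"] == problem["answer"]:
                examples.append({
                    "prompt": problem["question"],
                    "completion": solution["reasoning"],
                })
    return examples
```

The resulting pairs would then feed a standard supervised fine-tuning loop, teaching the 14B student to imitate the larger models' long-form reasoning.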
To further boost performance, the team’s solution reasons through multiple long-thinking responses in parallel before determining a final answer. To optimize this process and meet the competition’s time limit, the team also used an innovative early-stopping technique.
A reasoning model might, for example, be set to answer a math problem 12 different times before picking the most common response. Using the asynchronous processing capabilities of NeMo-Skills and NVIDIA TensorRT-LLM, the team was able to monitor and exit inference early if the model had already converged at the correct answer four or more times.
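That early-exit majority vote can be sketched in a few lines of plain Python. The function name and the synchronous loop are illustrative; the team's actual implementation monitored asynchronous inference streams via NeMo-Skills and TensorRT-LLM:

```python
from collections import Counter

def majority_vote_early_stop(answers, num_samples=12, threshold=4):
    """Consume up to `num_samples` candidate answers, stopping early
    once any single answer has appeared `threshold` times.

    `answers` is any iterator of final answers, e.g. fed by parallel
    inference workers. Returns (answer, samples_consumed).
    """
    counts = Counter()
    for i, answer in enumerate(answers, start=1):
        counts[answer] += 1
        if counts[answer] >= threshold:
            return answer, i  # converged: cancel remaining work
        if i >= num_samples:
            break
    # No early convergence: fall back to the most common answer.
    return counts.most_common(1)[0][0], min(i, num_samples)
```

In the best case the model converges after only a handful of samples, freeing the remaining inference budget for harder problems within the five-hour limit.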
TensorRT-LLM also enabled the team to harness FP8 quantization, a compression technique that delivered a 1.5x speedup over the more common FP16 format. ReDrafter, a speculative decoding technique developed by Apple, provided a further 1.8x speedup.
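Under the assumption that the two optimizations compound independently, which the article does not state outright, the reported figures imply roughly a 2.7x end-to-end inference gain:

```python
# Speedups reported in the article for the team's inference stack.
fp8_speedup = 1.5        # FP8 quantization vs. FP16
redrafter_speedup = 1.8  # ReDrafter speculative decoding

# Independent speedups multiply -- an estimate, not a reported figure.
combined_speedup = fp8_speedup * redrafter_speedup
print(f"~{combined_speedup:.1f}x overall")  # prints "~2.7x overall"
```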
The final model performed even better on the competition’s unseen final dataset than it did on the public dataset — a sign that the team successfully built a generalizable model and avoided overfitting their LLM to the sample data.
“Even without the Kaggle competition, we’d still be working to improve AI reasoning models for math,” said Gitman. “But Kaggle gives us the opportunity to benchmark and discover how well our models generalize to a third-party dataset.”
Sharing the Wealth
The team will soon release a technical report detailing the techniques used in their winning solution — and plans to share their dataset and a series of models on Hugging Face. The advancements and optimizations they made over the course of the competition have been integrated into NeMo-Skills pipelines available on GitHub.
Key data, technology, and insights from this pipeline were also used to train the just-released NVIDIA Llama Nemotron Ultra model.
“Throughout this collaboration, we used tools across the NVIDIA software stack,” said Christof Henkel, a member of the Kaggle Grandmasters of NVIDIA, known as KGMON. “By working closely with our LLM research and development teams, we’re able to take what we learn from the competition on a day-to-day basis and push those optimizations into NVIDIA’s open-source libraries.”
After the competition win, Henkel regained the title of Kaggle World Champion — ranking No. 1 among the platform’s over 23 million users. Another teammate, Finland-based Ivan Sorokin, earned the Kaggle Grandmaster title, held by just over 350 people around the world.
For their first-place win, the group also won a $262,144 prize that they’re directing to the NVIDIA Foundation to support charitable organizations.
Meet the full team — Igor Gitman, Darragh Hanley, Christof Henkel, Ivan Moshkov, Benedikt Schifferer, Ivan Sorokin and Shubham Toshniwal — in the video below:
Sample math questions in the featured visual above are from the 2025 American Invitational Mathematics Examination. Find the full set of questions and solutions on the Art of Problem Solving wiki.