In a groundbreaking development, Samsung AI researcher Alexia Jolicoeur-Martineau has unveiled a tiny AI model that outperforms far larger Large Language Models (LLMs) on hard reasoning benchmarks. The Tiny Recursive Model (TRM) has just 7 million parameters, under 0.01% of the count found in leading LLMs, yet achieves state-of-the-art performance on challenging reasoning benchmarks like the ARC-AGI intelligence test. This achievement directly challenges the prevailing notion that sheer scale is the sole path to advancing AI.
While LLMs excel at text generation, they often falter when confronted with complex, multi-step reasoning tasks. Samsung’s TRM employs a single, compact network that iteratively refines both its internal reasoning and its proposed answer. This recursive process allows the model to progressively rectify its own errors with remarkable parameter efficiency.
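To make the idea concrete, here is a minimal, hypothetical sketch of this kind of recursive refinement loop in PyTorch. The class name, dimensions, loop counts, and layer choices are illustrative assumptions, not Samsung's actual code: a single small network repeatedly updates a latent reasoning state z, then uses it to revise the current answer y.

```python
# Minimal, illustrative sketch of TRM-style recursive refinement.
# All names and dimensions are assumptions for exposition, not the
# paper's implementation.
import torch
import torch.nn as nn

class TinyRecursiveModel(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        # One tiny shared network (a 2-layer MLP stands in for the
        # paper's 2-layer core) reads question x, answer y, latent z.
        self.reason = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        self.answer = nn.Linear(2 * dim, dim)

    def forward(self, x, y, z, n_inner: int = 6):
        # Inner loop: refine the latent reasoning state several times.
        for _ in range(n_inner):
            z = z + self.reason(torch.cat([x, y, z], dim=-1))
        # Then propose an improved answer from the refined state.
        y = y + self.answer(torch.cat([y, z], dim=-1))
        return y, z

model = TinyRecursiveModel()
x = torch.randn(4, 128)   # embedded question (batch of 4)
y = torch.zeros(4, 128)   # initial answer guess
z = torch.zeros(4, 128)   # initial latent reasoning state
for _ in range(3):        # outer loop: progressively correct the answer
    y, z = model(x, y, z)
```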
Intriguingly, a two-layer network generalized better than a four-layer variant, evidently because the smaller network was less prone to overfitting. TRM also forgoes the fixed-point mathematical justifications that prior recursive-reasoning models used to approximate gradients in a single step; instead it back-propagates through the full recursion process, which yields a significant performance boost.
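Continuing the hypothetical sketch above, training in this style simply keeps every unrolled step on the autograd graph rather than approximating the gradient from the final step alone. The target and loss below are placeholders, not the paper's training objective.

```python
# Continuing the sketch above: back-propagation through the full,
# unrolled recursion (no one-step gradient approximation). The target
# and loss are placeholders for exposition.
import torch
import torch.nn.functional as F

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
target = torch.randn(4, 128)      # placeholder supervision signal

y = torch.zeros(4, 128)
z = torch.zeros(4, 128)
for _ in range(3):
    y, z = model(x, y, z)         # every step stays on the autograd graph
loss = F.mse_loss(y, target)
loss.backward()                   # gradients flow through the whole recursion
optimizer.step()
```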
TRM posts impressive results, achieving 87.4% accuracy on the Sudoku-Extreme dataset and 85.3% on Maze-Hard. Most remarkably, it attains 44.6% accuracy on ARC-AGI-1 and 7.8% on ARC-AGI-2, eclipsing larger models, including Gemini 2.5 Pro. Training has also been streamlined with a simplified adaptive mechanism that learns when to stop refining an answer, avoiding wasted computation on inputs the model has already solved.
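Such an adaptive mechanism could look something like the following, again as an assumed illustration extending the earlier sketch: a small learned head scores the latent state, and recursion stops early once the model is confident in its answer.

```python
# Assumed illustration of a simple adaptive-halting rule extending the
# sketch above: a hypothetical learned head scores the latent state and
# recursion stops once confidence is high, capping compute per example.
import torch
import torch.nn as nn

halt_head = nn.Linear(128, 1)     # hypothetical halting head

y = torch.zeros(4, 128)
z = torch.zeros(4, 128)
for step in range(16):            # hard cap on improvement steps
    y, z = model(x, y, z)
    p_halt = torch.sigmoid(halt_head(z)).mean()
    if p_halt > 0.5:              # halt early when confident
        break
```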
This research indicates that architectures engineered for iterative reasoning and self-correction can tackle challenging problems with significantly reduced computational demands, opening new possibilities for AI development.
