A compact new AI system is generating buzz with claims of surpassing DeepSeek R1 and Llama 3.2 1B on LogicBench tests while running dramatically faster. Remarkably small at just 102.3 KB, the system reportedly achieved superior accuracy across a range of LogicBench tasks.

According to its creator, the AI uses core math functions as a kernel, augmented by logic, math, and memory modules coordinated through an expert module hub. This architecture reportedly allowed it to outperform the DeepSeek and Llama models. The source code is publicly available on GitHub (github.com/midatlanticAI/atlandemo), inviting community scrutiny and validation.

The creator humorously cited a Claude Opus 4 output characterizing DeepSeek's performance as a "mathematical homicide" compared with the new system's results. The initial report originated from a Reddit post (https://old.reddit.com/r/artificial/comments/1luahmz/i_built_a_novel_ai_system_that_outperformed/).
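To make the described design more concrete, here is a minimal sketch of what a "math kernel plus expert module hub" architecture could look like. This is purely illustrative: every class, function, and module name below is an assumption for the sake of the example, not code from the actual atlandemo repository.

```python
# Hypothetical sketch: a small kernel of core math functions, with
# pluggable logic, math, and memory modules routed through an
# expert-module hub. All names are illustrative assumptions, not the
# actual atlandemo implementation.
from typing import Any, Callable, Dict


class ExpertHub:
    """Routes queries to registered expert modules by name."""

    def __init__(self) -> None:
        self._modules: Dict[str, Callable[[Any], Any]] = {}

    def register(self, name: str, module: Callable[[Any], Any]) -> None:
        self._modules[name] = module

    def dispatch(self, name: str, query: Any) -> Any:
        if name not in self._modules:
            raise KeyError(f"no expert module named {name!r}")
        return self._modules[name](query)


# Kernel: a core math primitive shared by the modules.
def kernel_sum(xs):
    return sum(xs)


# Logic module: evaluates a simple implication query (p -> q).
def logic_module(query):
    p, q = query
    return (not p) or q


# Memory module: a tiny key-value store closed over local state.
def make_memory_module():
    store = {}

    def memory(query):
        op, key, *rest = query
        if op == "put":
            store[key] = rest[0]
            return True
        return store.get(key)

    return memory


hub = ExpertHub()
hub.register("logic", logic_module)
hub.register("math", kernel_sum)
hub.register("memory", make_memory_module())

print(hub.dispatch("logic", (True, False)))       # True -> False is False
print(hub.dispatch("math", [1, 2, 3]))            # 6
hub.dispatch("memory", ("put", "answer", 42))
print(hub.dispatch("memory", ("get", "answer")))  # 42
```

The appeal of such a design is that the kernel stays tiny (which would be consistent with the claimed 102.3 KB footprint) while specialized reasoning lives in small, swappable modules behind a single dispatch interface.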