A new open-source GPT model is drawing attention for its price-to-performance ratio. While benchmarks suggest it does not match models like o4-mini in raw capability, its affordability is a major draw: input and output token costs are reportedly far lower, making it an attractive option for budget-conscious developers and researchers. Intriguingly, preliminary data suggests that running the larger 120B-parameter version could even be cheaper per request than the 20B variant, possibly because the larger model consumes fewer reasoning tokens. The initial discussion of the model originated in a Reddit thread.
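To see how that counterintuitive result could happen, here is a minimal sketch of the cost arithmetic. All prices and token counts below are made-up placeholders, not measured figures for these models: the point is only that a model with pricier tokens can still cost less per request if it emits far fewer reasoning tokens.

```python
# Hypothetical illustration: a larger model with a higher per-token price can
# still be cheaper per request if it produces fewer reasoning/output tokens.
# Prices and token counts are invented assumptions, not real benchmark data.

def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float, out_price: float) -> float:
    """Cost of a single request; prices are in USD per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Assumed scenario: the 20B model "thinks" longer (more output tokens),
# while the 120B model answers with a much shorter reasoning trace.
cost_20b = request_cost(1_000, 8_000, in_price=0.05, out_price=0.20)
cost_120b = request_cost(1_000, 2_000, in_price=0.15, out_price=0.60)

print(f"20B:  ${cost_20b:.6f} per request")   # dominated by reasoning tokens
print(f"120B: ${cost_120b:.6f} per request")  # fewer tokens, lower total
```

Under these invented numbers the 120B request comes out cheaper, matching the pattern the preliminary data hints at; with different token counts the ordering would flip, which is why the comparison depends on reasoning-token consumption rather than per-token price alone.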