OpenAI’s GPT-5 is now available, but initial reactions suggest it might be more of a refinement than a revolution. Despite CEO Sam Altman’s hints of a major breakthrough, early users are reporting persistent errors and flawed reasoning within the model. The touted automatic model selection feature is also reportedly causing issues. On the upside, GPT-5 appears to be better at avoiding overly flattering or obsequious responses.
This incremental progress leads some to believe that GPT-5 is primarily a product update focused on user experience improvements, rather than a fundamental advancement in artificial intelligence. This shift may signal a broader trend within the AI industry, where companies are prioritizing specific application development over chasing radical breakthroughs in core AI capabilities.
Adding to the complexity, OpenAI is actively encouraging users to seek health advice from its models, a move that raises serious ethical concerns. By promoting AI for medical guidance, the company exposes users to the risks of inaccurate or biased information, with little to no recourse for those harmed by relying on the technology's health-related recommendations. Accountability in this area remains a significant challenge.