Chinese AI firm DeepSeek has unveiled its latest flagship model, V4, which promises significant improvements over its predecessor, including the ability to process longer prompts and handle large volumes of text more efficiently.
V4 represents DeepSeek’s most substantial release since the groundbreaking R1 reasoning model, which captivated the global AI industry in January 2025 with its impressive performance and efficiency. Like its predecessors, V4 is open-source, allowing users to download, utilize, and modify it freely.
The V4 model is available in two versions: V4-Pro, a larger model designed for complex coding tasks and agent operations, and V4-Flash, a smaller, faster, and more cost-effective version. Both models feature reasoning modes, enabling them to meticulously parse user prompts and display each step of the problem-solving process.
DeepSeek asserts that V4 matches the performance of the industry's best models at a significantly lower cost, making it an attractive option for developers and companies building on the technology. V4-Pro is priced at $1.74 per million input tokens and $3.48 per million output tokens, while V4-Flash is cheaper still, at approximately $0.14 per million input tokens and $0.28 per million output tokens.
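To put those per-token prices in concrete terms, the sketch below estimates the cost of a single request at the rates quoted above. The price figures come from the article itself; the function and variable names are illustrative and not part of any DeepSeek SDK.

```python
# Prices in USD per 1 million tokens, as quoted in the article.
# (Illustrative only -- not an official DeepSeek API structure.)
PRICING = {
    "V4-Pro": {"input": 1.74, "output": 3.48},
    "V4-Flash": {"input": 0.14, "output": 0.28},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    rates = PRICING[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Example: a long-context request with 120k input and 8k output tokens.
pro_cost = estimate_cost("V4-Pro", 120_000, 8_000)      # roughly $0.24
flash_cost = estimate_cost("V4-Flash", 120_000, 8_000)  # roughly $0.02
```

At these rates, even a large 120k-token prompt costs well under a dollar on either tier, which is the cost advantage the company is emphasizing.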
According to the company's released results, DeepSeek V4-Pro is competitive with leading closed-source models, achieving comparable scores on major benchmarks.
Photo by Okiki Onipede on Pexels
