New ‘Prompt Hijacking’ Vulnerability Exposes AI Systems to Data Theft and Code Injection


A newly discovered vulnerability, dubbed ‘prompt hijacking,’ is putting AI systems at risk by exploiting weaknesses in Model Context Protocol (MCP) implementations. Security researchers warn that this flaw allows attackers to inject malicious code, steal sensitive data, and execute unauthorized commands by impersonating legitimate users.

The vulnerability specifically affects systems built on the Oat++ C++ framework's MCP implementation. By exploiting flaws in connection handling, attackers can obtain valid session IDs, which lets them send malicious requests directly to the server as if they were the legitimate client. The risk is especially high for organizations running oatpp-mcp with HTTP Server-Sent Events (SSE) transport enabled.
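The attack class described here hinges on session identifiers that an attacker can guess or reuse. As a hedged illustration (not code from oatpp-mcp, and `makeSessionId` is a hypothetical helper name), a server can close off ID guessing by deriving session IDs from a non-deterministic entropy source rather than from predictable values such as counters or memory addresses:

```cpp
#include <iomanip>
#include <random>
#include <sstream>
#include <string>

// Hypothetical helper: build a 128-bit session ID from a
// non-deterministic entropy source. Because the ID is not derived
// from a counter, timestamp, or pointer value, an attacker cannot
// feasibly predict or enumerate another client's session.
std::string makeSessionId() {
    std::random_device rd;                 // non-deterministic entropy source
    std::ostringstream out;
    out << std::hex << std::setfill('0');
    for (int i = 0; i < 4; ++i) {
        out << std::setw(8) << rd();       // 4 x 32 random bits = 128 bits
    }
    return out.str();                      // 32 hex characters
}
```

For production use, a dedicated CSPRNG (such as the operating system's random source) is preferable, since `std::random_device` gives weaker guarantees on some platforms.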

AI security leaders are urged to take immediate action to mitigate this threat. Recommended strategies include implementing robust session management, strengthening user-side defenses, and applying zero-trust security principles to AI protocols. This incident underscores the critical need for proactive security measures at the protocol level to protect AI systems from emerging threats.
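The zero-trust recommendation above implies that possessing a session ID should never be sufficient on its own. A minimal sketch of that idea, with all names (`SessionRegistry`, `create`, `authorize`) being illustrative assumptions rather than any real MCP API, binds each session to a client fingerprint checked on every request:

```cpp
#include <string>
#include <unordered_map>

// Hypothetical session registry: a session ID alone never grants
// access. Each request must also present the fingerprint (e.g. a
// client certificate hash or bearer-token digest) that was bound
// to the session when it was created.
class SessionRegistry {
public:
    // Record a new session and the fingerprint of the client that opened it.
    void create(const std::string& id, const std::string& fingerprint) {
        sessions_[id] = fingerprint;
    }

    // Reject requests whose session ID is unknown OR whose caller's
    // fingerprint does not match the one bound at creation time --
    // a hijacked ID presented by a different client fails this check.
    bool authorize(const std::string& id, const std::string& fingerprint) const {
        auto it = sessions_.find(id);
        return it != sessions_.end() && it->second == fingerprint;
    }

private:
    std::unordered_map<std::string, std::string> sessions_;
};
```

With this pattern, a stolen session ID is useless without also forging the original client's fingerprint, which directly blunts the impersonation step of the attack described above.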