The US Department of Defense is planning to create secure environments in which generative AI companies can train military-specific versions of their models on classified data, with the goal of improving the models' accuracy and effectiveness on military tasks.
This move would allow AI models to learn from sensitive intelligence, such as surveillance reports or battlefield assessments, which could potentially give the US military an edge in future conflicts.
However, training AI models on classified data also raises unique security concerns, including the risk of sensitive information being inadvertently exposed to unauthorized users.
To mitigate these risks, the Pentagon plans to first evaluate the performance of AI models trained on non-classified data, before allowing training on classified data in a secure, accredited data center.
Access to the classified data would be restricted to personnel with the appropriate security clearances, and even then would be granted only in rare cases.
This development is part of the Pentagon's broader push to become an 'AI-first' warfighting force, driven by growing demand for more powerful and sophisticated AI models, particularly in the context of the escalating conflict with Iran.
Photo by Eva Bronzini on Pexels
