After OpenAI struck a controversial deal with the United States Department of Defense, CEO Sam Altman admitted the rollout felt “opportunistic and sloppy.”
So, what changed? New safeguards. Most notably, OpenAI says its systems will not be used to spy on Americans — a line it now wants written explicitly into the contract.
The tension didn’t appear out of nowhere. Rival Anthropic had already clashed with the Pentagon over fears its AI, Claude, could be used for mass surveillance or autonomous weapons.
Ironically, despite that resistance, Anthropic’s technology reportedly still surfaced in real-world conflict scenarios.
AI in Warfare
So how is AI actually used in war? Think less sci-fi robots, more data crunching.
Firms like Palantir Technologies help militaries—from NATO to Ukraine—analyze satellite images and intelligence faster than ever.
But there’s a catch: AI can “hallucinate” — make confident but false claims — a serious risk when the output informs targeting or intelligence decisions.
That’s why, as one NATO official put it, there’s always a “human in the loop.”
Still, critics such as Oxford’s Mariarosaria Taddeo warn that with safety-focused players stepping back, it is unclear who is keeping military AI in check.
Because in modern warfare, the real battlefield may not be just physical but ethical.