The U.S. War Department has signed a new package of classified-network AI agreements with some of America’s biggest model and infrastructure companies, pushing frontier systems deeper into military workflows.
The official May 1 release lists eight companies: SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, Amazon Web Services, and Oracle. The agreements allow their AI capabilities to operate inside Impact Level 6 and Impact Level 7 environments for lawful operational use.
Those environments cover highly sensitive workloads, which makes the rollout more than another government software contract. It places advanced AI close to classified data, intelligence workflows, planning systems, and operational decision support.
The War Department framed the move as part of its AI-first military strategy. Officials said the models will support data synthesis, situational understanding, warfighter decision-making, intelligence work, and enterprise operations.
The scale is already large. GenAI.mil, the department’s official AI platform, has crossed 1.3 million users in five months, with personnel generating tens of millions of prompts and deploying hundreds of thousands of agents. The department said the system is already cutting some tasks from months to days.
That adoption curve explains why the vendor list matters. By spreading access across model developers, cloud providers, chip firms, and infrastructure companies, the department is trying to avoid dependence on one AI supplier. The mix also gives it room to use both closed and open-weight systems across different classified workflows.
OpenAI’s role will attract close attention because the company has already published its position on military AI use. In its statement on the agreement, OpenAI said its red lines include no mass domestic surveillance, no direction of autonomous weapons systems, and no high-stakes automated decisions without human responsibility.
That separates OpenAI’s public posture from broader “lawful use” language around classified AI deployment. The company said its models will run through cloud-only deployment with its safety stack in place, rather than being handed over without controls.
Anthropic is not part of the latest list. Its absence keeps the ethics fight around defense AI alive, especially as model providers face pressure to balance national-security contracts with limits around surveillance, autonomy, and weapons systems.
The deal package shows how quickly AI is becoming part of national-security infrastructure. Cloud capacity, chips, model access, open-weight systems, secure deployment, and classified data controls are now moving together.
That same pressure is visible outside government. As AI agents gain access to tools, code, data, and financial systems, stronger controls around permissions and sandboxing are becoming critical. Early tests of AI agents have already shown how tool access can open unexpected paths around sandbox controls and other intended limits.
The War Department’s AI push now puts that issue inside the most sensitive operational environments. The next test is execution: which systems clear classified deployment first, how guardrails survive real workloads, and whether a multi-vendor approach can give the military speed without creating new security blind spots.
The post Pentagon Pulls AI Giants Into Classified Networks In Major War AI Push appeared first on Crypto Adventure.