Use with caution
Designing the use of AI, from the debate between Anthropic and the Pentagon
This week, Artifacts wasn't supposed to go out. But what has happened between Anthropic, OpenAI and the US Pentagon is too interesting a story to miss.
If you already know the news, feel free to skim to the “So, what?” section. If not, I tried to keep the story concise.
In July 2025, the Pentagon awarded two-year contracts to Anthropic, OpenAI, Google, and xAI to prototype frontier AI capabilities for national security. The goal was to develop agentic workflows, intelligence analysis tools, and battlefield simulations.
Among them, Anthropic had something the others did not. Its model Claude was the only frontier AI system cleared to run on fully classified Pentagon networks, deployed through Palantir Technologies' AI Platform and Amazon's Top Secret Cloud. Reportedly, it had been used during the operation to capture Nicolás Maduro and, most recently, in the strikes on Iran.
All four contracts, though, included some usage safety restrictions. This changed on January 12, 2026, when Defense Secretary Pete Hegseth issued an AI strategy memo declaring that the Pentagon would become an 'AI-first' warfighting force, with AI agents integrated 'from campaign planning to kill chain execution'.
To this end, the Pentagon asked Anthropic and the others to change the contracts so that AI could be used at its full potential, with no 'policy constraints' decided by the AI companies. "Any lawful use" should be allowed.
On February 27, Hegseth reportedly gave Anthropic an ultimatum: accept the revised terms or lose the contract. Anthropic rejected the offer, citing concerns about undermining democratic values and refusing to allow its AI to be used for mass surveillance of American citizens or for autonomous weapons that can kill with no human in the decision loop.
That same evening, Sam Altman announced that OpenAI had reached a new agreement with the Pentagon. According to Altman, the deal preserved the same red lines on autonomous weapons and mass surveillance that Anthropic had defended.
However, the details of the contract have not been made public, and some observers believe the new arrangement may allow broader uses than Anthropic considered acceptable.
OpenAI claims it will be able to enforce its constraints more effectively, since the models used by the Pentagon will run on OpenAI's own cloud.
Finally, the Pentagon has responded by designating Anthropic a "Supply-Chain Risk to National Security" - a label usually reserved for foreign companies (like Chinese ones) - and ordering federal agencies to stop using its products within the next six months. The New York Times said this is 'likely the harshest punitive action the U.S. government has taken against a major American company this century, possibly ever.'
Anthropic will challenge the designation, which could greatly impact its business, while also saying it has 'much more in common with the Department of War than we have differences.'
So, what?
This story is ultimately about constraints.
Not only the constraints AI companies want to place on the US government's use of their technologies, or those that technology regulations place on the use of AI, but also, and maybe more crucially, those baked into the design of AI technologies themselves.
Let’s look at the two red lines set by Anthropic:
‘Mass domestic surveillance is incompatible with democratic values and presents risks to fundamental liberties’. Therefore, even though it is technically feasible to use their AI models to analyse data on citizens, there have to be constraints, because doing so is too dangerous and threatens fundamental rights.
Fully autonomous weapons, instead, ’need to be deployed with proper guardrails, which don’t exist today.’ So they could in principle be used, but the current safeguards are not sufficient, and until adequate ones exist the constraint must stand.
These constraints bear not just on the AI models but on the blurry wording of ‘any lawful use’ adopted by the Pentagon.
In theory, this sounds straightforward. But in the United States - where there is still no comprehensive federal regulation of AI - the meaning of “lawful use” is far from settled. The boundaries are still evolving.
In practice, this means that the law alone does not fully define the acceptable uses of AI.
And that is precisely where these technological constraints come in.
Interestingly, the Pentagon itself had tried to push back against such constraints in the January memo, writing that it ‘must utilize AI models free from usage policy constraints that may limit lawful military applications.’
This is a rather controversial statement: it frames the constraints built into technologies as something that could counter or impede uses allowed under the law.
Provocatively, this is exactly the opposite of the usual criticism levelled at technology regulations: that they impede the full use of technologies.
Usually, you see Big Tech or other organisations complaining about regulation. Here, the situation is almost inverted: the Pentagon argues that constraints embedded in the technology itself could limit uses that the law might otherwise allow.
Or, as Hegseth put it more bluntly: ‘America’s warfighters will never be held hostage by ideological whims of Big Tech.’
At its core, this raises a fundamental question: who ultimately controls powerful technologies? Those who build them, or those who regulate them and who, this time, seem more interested in laissez-faire rules allowing unbridled use?
By setting these constraints, AI companies are doing something subtle but important. They are not only designing the technology itself, but also shaping the space of possible uses.
Interestingly, they are designing a downstream layer, governing use, that sits on top of the design of the technology itself.
If we have long argued that technologies are not neutral - that their design shapes their political and social effects - this story reveals something further.
With AI, the design of use becomes almost as important as the design of the system itself.
This is particularly true because artificial intelligence is a general-purpose technology. Its capabilities can be applied across a vast number of domains, and thus the layer of how it can be used is critical. Not all possible options are good options.
Concretely, AI makes things like large-scale data analysis easy; the issue lies in what that capability is used for: assessing, say, decarbonisation metrics, or spying on citizens by combining datasets?
The real question is therefore not just what AI can technically do but also what it will be allowed to do.
And in the absence of clearly defined ‘lawful uses’, those decisions fall, for now, to the organisations building these technologies and to how they design what is allowed and what is not.
Save for Later
Why one should write, from Orwell - quite crucial nowadays
So what will happen to jobs, by Anthropic. Btw, Anthropic gave Claude a Substack.
Maybe just put a stop to the AI slop, please? In the meantime, a super funny video on a professional enshittificator (maybe a new word?)
If you need to vibe with your Google documents, go for this.
AI doesn’t really understand the physical world.
How to curate what information we eat.
Where next?
In 10 days I’ll be at the Govtech forum in Milan 🇮🇹 - come say hi if you’re around!
The Bookshelf
Speaking of edge uses of technology, ‘The Palestine Laboratory’ is a good book that sheds light on what the use of technology really means and why its design matters.
📚 All the books I’ve read and recommended in Artifacts are here.
Nerding
If you’re into cool interactions, design, animations, you may want to check out Mobbin, which collects them all together and is a great source of inspiration for anyone building products!
☕?
If you want to know more about Artifacts, where it all started, or just want to connect...