Our good friend and colleague, Kathryn Guarini, wrote a blog post: “Who Gets to Decide When AI Isn’t Ready?” We wanted to make sure our community of business leaders did not miss it, so we include it here today as a Guest Contribution to our TrewNews.
By way of background, Kathryn is a PhD scientist who spent over two decades at IBM, most recently as global CIO. She now serves on corporate boards and teaches technology, innovation, and crisis management at top universities. She brings a strategic and governance lens to the standoff between Anthropic and the Pentagon. We have lightly edited her original post, which can be found in its entirety on her website, Mother of Invention.
*******************************************
Last week, Anthropic — the company behind the AI model Claude — walked away from a $200 million Pentagon contract rather than remove two AI safety guardrails it had built into its technology: no mass domestic surveillance of Americans and no fully autonomous weapons without a human in the loop. Within hours, the company was labeled a “supply chain risk to national security,” a designation typically reserved for foreign adversaries, and the White House ordered all federal agencies to phase out Anthropic’s technology. The State Department, Treasury, and HHS announced they were switching to OpenAI’s ChatGPT, and OpenAI quickly signed its own Pentagon deal, which CEO Sam Altman acknowledged was “definitely rushed.”
The story is being covered as a political drama. But underneath the headlines is a question that should matter to everyone who builds, buys, or depends on technology: Who gets to decide what AI is ready for — and what it isn’t?
A Precedent Worth Remembering
This isn’t the first time we’ve faced this kind of question. In the late 2010s, commercial facial recognition systems from Amazon, IBM, and Microsoft were deployed in real-world settings, including law enforcement. They passed internal benchmarks and performed well in controlled tests, yet in the field they were deeply, systematically biased. Accuracy rates were near-perfect for light-skinned men but significantly lower for women and people with darker skin tones; in some cases, darker-skinned women were misidentified roughly one in three times.
In 2020, amid growing scrutiny of these gaps, IBM moved first. CEO Arvind Krishna sent a letter to Congress announcing that the company would exit the facial recognition business altogether, citing the risk of the technology being used for mass surveillance and racial profiling. It was a decisive, public stance — and it shifted the conversation. Within two days, Amazon announced a one‑year moratorium on police use of its Rekognition service. The next day, Microsoft said it would not sell facial recognition technology to police departments until federal law was in place to regulate it. Each company drew the line somewhat differently, but the domino effect was unmistakable. And it started because a company with deep technical knowledge of its own product said, “This isn’t ready.”
The Parallel
Like the facial recognition companies before it, Anthropic is not an outsider raising theoretical objections. It is among the most technically advanced AI labs in the world. It built Claude.
As CEO Dario Amodei has explained, AI‑driven mass surveillance poses novel risks to fundamental liberties — risks that outpace existing laws. Fully autonomous weapons systems would depend on a technology that still has what he calls “basic unpredictability.”
These aren’t political positions. They’re engineering assessments, specific to this moment. AI is advancing at a pace where capabilities that aren’t ready today may well be ready tomorrow. The people building and testing these systems are in the best position to judge when that threshold has been crossed. Ignoring that judgment doesn’t accelerate progress; it amplifies risk.
Anthropic has made clear it supports the vast majority of the military’s use cases. The red lines were narrow and grounded in what the company knows about its own technology. When the people who build a technology tell you it’s not ready for certain uses, that’s not obstruction. That’s expertise.
In the facial recognition case, the companies that pulled back weren’t anti–law‑enforcement or anti‑innovation. They were recognizing that deploying a flawed system at scale — particularly in contexts where individual liberty is at stake — is not just a technical failure. It’s a trust failure. Anthropic appears to be making a similar calculation: some applications of AI are simply beyond what today’s technology can safely and reliably support. Agreeing to hand over that technology without safeguards — under pressure, on a rushed timeline — doesn’t make anyone safer.
Two Threads
There are two threads running through both stories that deserve more attention.
The first is about who we design for. When the government demands that AI be available for “all lawful purposes” — without guardrails around mass domestic surveillance or fully autonomous weapons — it is prioritizing capability over consequence. But the people affected by those systems are all of us. When we shortcut user‑centered design in the name of speed or scale, we get outcomes like wrongful arrests from biased facial recognition. In the AI‑and‑defense context, the potential failures are even harder to reverse.
The second thread is about leadership — specifically, the courage it takes to say no when saying yes would be easier and more profitable. The leaders at IBM, Amazon, and Microsoft who pulled their products back in 2020 were responding to a moment when the racial justice movement had put a spotlight on how biased systems cause real harm. Their decisions were also grounded in something they understood from the inside: the technology wasn’t ready. Anthropic is making the same kind of call — under far more pressure.
The Courage to Say “Not Yet”
There’s something I admire deeply about what Anthropic did, because it reflects the kind of leadership we badly need right now. Anthropic is a commercial company. It could have signed the contract, accepted the government’s terms, collected the revenue, and let someone else worry about the consequences.
Instead, it drew a line — and the market seems to be noticing. In the days since the Pentagon dispute, Claude rose to the No. 1 spot on the U.S. App Store, paid subscriptions more than doubled, and Anthropic’s revenue run rate has surpassed $19 billion — more than double where it stood at the end of last year. Microsoft and Google announced they are continuing to support Anthropic’s solutions for non-defense projects. It turns out that standing for something can also be good business.
Innovation doesn’t earn trust by saying yes to everything. It earns trust by knowing when to say not yet.
*******************************************
Thanks, Kathryn, for sharing your thoughts on this ever-evolving topic. As always, we welcome comments from our readers.