Cognitive warmup. If you hear about Anthropic more than once in our conversation this week, it is purely because there are a few things that need to be cleared up about the AI company. First, a federal appeals court in Washington declined to pause the “supply-chain risk” designation imposed by the US Department of War, which means Anthropic’s defence contracts remain restricted. (The second item comes later in our conversation this week.)

This came about after the company, in February, refused to allow government use of Claude AI for what it considers fully autonomous weapons and domestic surveillance applications. Anthropic maintains that current frontier AI systems are not reliable enough to make life-or-death decisions without human oversight.
“It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider,” Anthropic CEO Dario Amodei said at the time.
Gemini’s important notebooks
I have to say: Google’s old-school approach to organisation is very much appreciated. Gemini is now getting an update that deepens its integration with Google’s other important AI app, NotebookLM, making notebooks core to the Gemini experience, whether that is managing notes for exams, tracking workflows for complex projects, or picking up a new hobby. These are rolling out first to subscribers, with free-tier users to follow in the weeks ahead.
“Think of notebooks as personal knowledge bases shared across Google products, starting in Gemini. They give you a dedicated space to organise your chats and files, and because they sync with NotebookLM, you can unlock even more efficient workflows directly from Gemini,” says Rebecca Zapfel, senior product manager at Google DeepMind.
Core to the Gemini-NotebookLM integration is keeping all data, research and conversations neatly organised, with seamless sync across both AI apps.
Learn a new term: Cognitive Surrender
If your brain hasn’t turned to total mush after handing all your thinking over to an AI chatbot, you might want to take a research paper by Wharton School researchers Steven Shaw and Gideon Nave a little seriously.
Shaw and Nave gave 1,372 people a test adapted from the Cognitive Reflection Test, with access to a chatbot for help. Critical to this is the Tri-System Theory, which extends Daniel Kahneman’s dual-process model of cognition (“Thinking, Fast and Slow”) by adding a third system: artificial intelligence.
“We show that people not only use System 3 to assist with reasoning, but often surrender to its outputs—whether correct or flawed. This cognitive surrender illustrates the value and integration of System 3, but also highlights the vulnerability of System 3 usage. Similar to how System 1-driven heuristics lead to systematic bias, System 3 has differential cognitive shortcomings that will challenge decision-makers and society at large,” the researchers conclude.
Certain questions must now be answered by humans. What happens to judgements no longer shaped by our own minds? What happens to effort and intuition when the thinking is done by a machine or an algorithm? And last but not least, how do we preserve agency and reflection if we surrender thinking to a chatbot? I would add another one here: how lazy are we going to become?
The legend of Claude Mythos
In my analysis a few days ago, I noted an irony underlying the entire situation as it unfolds: for an apparently super-powerful AI model that leaked via Anthropic’s content management system, there are claims it will fix vulnerabilities, both as yet unknown and soon to be discovered, in all of the world’s software.
It might, or it might not (the latter being far more likely), but AI companies continue to weave extraordinarily effective narratives, hoping no one notices.
Anyway, Anthropic claims that Claude Mythos is so powerful that the model will not be made generally available. Instead, it provides the foundation for something called Project Glasswing, an effort to secure everyone else’s software. The partners include Apple, Google, Microsoft, Amazon Web Services, Nvidia and Cisco.
Anthropic says they will commit up to $100 million in usage credits for the Mythos Preview across these efforts. After that, Claude Mythos Preview will be available to participants at $25/$125 per million input/output tokens. It’s important to note the commercial architecture that’s being created here: Claude Mythos is still very much a product in Anthropic’s scheme of things—scarcity, capabilities, and therefore premium pricing strategies will be at work.
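To put that pricing in perspective, here is a minimal back-of-the-envelope cost sketch using the $25/$125 per million input/output token figures quoted above. The function name, token counts and scenario are my own illustrations, not anything Anthropic has published.

```python
# Rough cost estimator for the quoted Claude Mythos Preview pricing:
# $25 per million input tokens, $125 per million output tokens.
# Names and the example workload below are hypothetical.

INPUT_PRICE_PER_M = 25.0    # USD per million input tokens (from the article)
OUTPUT_PRICE_PER_M = 125.0  # USD per million output tokens (from the article)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return ((input_tokens / 1_000_000) * INPUT_PRICE_PER_M
            + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M)

# Example: a 200k-token code-audit prompt producing a 50k-token report
# costs 0.2 * 25 + 0.05 * 125 = 11.25 dollars per run.
print(f"${estimate_cost(200_000, 50_000):.2f}")  # $11.25
```

At these rates, even a modest security-scanning pipeline burns through credits quickly, which is presumably why the $100 million commitment is framed in usage credits rather than cash.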
Neural Dispatch is your weekly guide to the rapidly evolving landscape of artificial intelligence. Want this newsletter delivered to your inbox? Subscribe here.