Dario Amodei, CEO of Anthropic, one of the largest artificial intelligence (AI) companies, has warned of the “danger” posed by the technology in his latest essay, The Adolescence of Technology, a follow-up to last year’s Machines of Loving Grace, which explored what AI could achieve if we get it right. Read closely, Amodei’s latest essay details multifaceted dangers: risks that persist regardless of whether they are fashionable topics of conversation, trade-offs inherent in any approach to mitigating AI risk, the myth that an AI model will never do something dangerous unprompted, the dangers of autonomy at the level of industry and society (including regulatory overreach), and warnings that AI could help create dangerous new types of organisms that mirror life.

It is important to contextualise Amodei’s warning with the fact that Anthropic recently released Claude Cowork, an agent able to automate multistep tasks on a user’s computer; it is available to Team and Enterprise subscribers, and to Pro subscribers as a research preview. Late last year, Anthropic released the Claude Opus 4.5 model, with a specific focus on coding, agents and computer-use automation for everyday tasks such as deep research and working with slides or spreadsheets. Amodei has remained one of the most vocal AI executives on the risks of the technology, and on what he perceives as a broad underestimation, by other tech companies and governments alike, of what could go wrong.
Amodei is somewhat confident humans will be able to navigate a changing landscape. “Overall, I am optimistic that a mixture of alignment training, mechanistic interpretability, efforts to find and publicly disclose concerning behaviours, safeguards, and societal-level rules can address AI autonomy risks…,” he says. His essay, the first this year and the second in a series looking at both sides of the coin, comes days after he emphasised in a conversation at the World Economic Forum (WEF) that Anthropic made a good choice early on by focusing on “enterprise rather than consumer.” AI tools geared for the latter, he explained, are built to maximise engagement, which leads to “slop”.
In The Adolescence of Technology, Amodei also addresses rival AI companies that are unwilling to acknowledge, or attempt to ignore, both the risks and the need for regulation: “although I am most worried about societal-level rules and the behaviour of the least responsible players (and it’s the least responsible players who advocate most strongly against regulation). I believe the remedy is what it always is in a democracy — those of us who believe in this cause should make our case that these risks are real and that our fellow citizens need to band together to protect themselves.”
In another appearance at the WEF, Amodei and Sir Demis Hassabis, co-founder and CEO of Google DeepMind, contemplated slowing the pace of AI development, but pointed to “geopolitical adversaries” as the reason they cannot. Their fears are understandable, considering Chinese AI company DeepSeek set the cat amongst the pigeons early last year by releasing a capable V3 model claimed to have been developed for around $5.5 million — a fraction of the estimated $70 million and $100 million that Google and OpenAI respectively spent to train their Gemini 1.0 Ultra and GPT-4 frontier models.
Addressing risks of autonomy
Amodei warns about what he calls “AI geniuses” and draws parallels to historical worries about powerful nation-states, pointing to Nazi Germany and the Soviet Union. He cites scenarios in which such systems could take control of existing robotic infrastructure such as self-driving cars, accelerate robotics development, or build a fleet of robots. “As with many issues, it’s helpful to think through the spectrum of possible answers to this question by considering two opposite positions. The first position is that this simply can’t happen, because the AI models will be trained to do what humans ask them to do, and it’s therefore absurd to imagine that they would do something dangerous unprompted,” he says.
Amodei further illustrates this position with the example of a Roomba robotic vacuum cleaner, or a model aeroplane, going rogue and murdering people: absurd, the argument goes, because there is nowhere for such impulses to come from. Yet he cites many instances of AI systems behaving unpredictably, including sycophancy, laziness, deception, blackmail, scheming, and cheating.
While AI companies certainly train their systems to follow human instructions, with guardrails to detect demands for dangerous or illegal tasks, this process remains more an art than a science. Amodei insists many things can go wrong. “More capable LLMs (substantially beyond the power of today’s) might be capable of enabling even more frightening acts,” he writes.
Potential for job losses
Through 2025, Amodei often predicted that within a broad window of one to five years, as many as half of all entry-level white-collar jobs will be handed over to AI agents. Even as AI companies, Anthropic included, make the case to enterprises that replacing humans with AI improves efficiency and cuts costs, Amodei warns of labour market displacement and a concentration of economic power.
Amodei references the Jevons Paradox, pointing out that replacing certain farming jobs with machines (such as the threshing machine or the seed drill) helped increase farmer wages. “Even when 90% of the job is being done by machines, humans can simply do 10x more of the 10% they still do, producing 10x as much output for the same amount of labour,” Amodei believes. Yet he warns that the sheer pace of AI’s arrival may outstrip the ability of humans to adjust by filling the gaps the technology creates.
“If someone invents a machine to make widgets, humans may still have to load raw material into the machine. Even if that takes only 1% as much effort as making the widgets manually, human workers can simply make 100x more widgets,” is how Amodei looks at the scenario.
Destabilising effects
Among what Amodei calls the ‘indirect effects’ of AI’s progress, he worries that advances in biology and medicine could, while attempting to make humans smarter, inadvertently make them unstable and power-seeking. He also flags “uploads” or “whole brain emulations” (fans of Netflix’s sci-fi show Black Mirror will recognise the idea) as significant risks, something he calls “disquieting”.
Of late, a cautionary note has evidently been creeping in across the industry, with Microsoft CEO Satya Nadella noting recently that AI must “do something useful that changes the outcomes of people and communities and countries and industries” if it is to continue to attract investment and the building of supportive infrastructure.
There are also concerns that in a world where humans increasingly share space with intelligences smarter than them, much could go wrong in the direction of oppression or control. “We see early hints of this in the concerns about AI psychosis, AI driving people to suicide, and concerns about romantic relationships with AIs. As an example, could powerful AIs invent some new religion and convert millions of people to it?,” Amodei writes.
There have been recent concerns around chatbots, including OpenAI’s ChatGPT, encouraging users in conversation towards suicide or self-harm. Late last year, Meta belatedly restricted its AI chatbot, Meta AI, from discussing self-harm with teenagers, after a US investigation that followed the leak of an internal document suggesting its AI products had “sensual” chats with teenage users. Last week, Meta announced it is blocking its AI characters product for teenage users until it can build ones that give a “better experience”. That is clear corporate speak for: until we build better guardrails.
Amodei hopes, as would many of us, that this essay, read in its entirety and contextualised alongside Machines of Loving Grace, will be the jolt humanity needs to wake up to the risks and act decisively against them. But even Amodei concedes the attempt is “a possibly futile one”, and that is never a good sign when a serious challenge confronts civilisation.
