AGI timelines, AI chips, and a race no one can stop


If there were any lingering doubts about whether the world’s leading artificial intelligence companies agree on where AI is headed — or how fast it should get there — those doubts were dispelled at the World Economic Forum (WEF) 2026. During The Day After AGI session in Davos, Demis Hassabis, co-founder and CEO of Google DeepMind, and Dario Amodei, CEO and co-founder of Anthropic, laid out starkly different views on AGI timelines, geopolitical and societal risks, and whether the AI race can — or should — be slowed. This matters because Google DeepMind and Anthropic currently develop two of the most capable AI models, Gemini and Claude, respectively.

CEO of DeepMind Technologies Demis Hassabis and CEO and co-founder of Anthropic Dario Amodei talk during the 56th annual World Economic Forum meeting in Davos, Switzerland, on January 20. (Reuters)

Unlike WEF 2025, where much of the AI chatter revolved around China’s DeepSeek and its strikingly cost-efficient large language models, Davos this year zoomed out. The conversation was no longer about who built the cheapest or fastest model; instead, discussions took a broader view of how the technology is expected to be deployed, the risks it carries, and its impact on society.

A cautionary note is evidently creeping in, with Microsoft CEO Satya Nadella noting that AI must “do something useful that changes the outcomes of people and communities and countries and industries” if it is to keep attracting investment and the build-out of supporting infrastructure.

AGI and a global backlash

Google DeepMind and Anthropic remain confident about their approaches. Hassabis agreed that the next couple of years will be complicated as AI companies and society navigate various challenges, including geopolitical questions, but pointed to DeepMind’s AlphaFold and wider science work as examples of trying to solve real-world problems such as curing diseases and finding new energy sources.

“I think the balance of what the industry is doing is not enough balance towards those types of activities. We should have a lot more examples of AlphaFold-like things that help with unequivocal good in the world,” Hassabis noted. “It is incumbent on the industry and on us leading players to not just talk about it but demonstrate that,” he added.

Hassabis also touched on the geopolitical landscape that must be navigated and the “cross-border nature of the technology”, and called for a worldwide consensus that he believes would give a fillip to minimum safety and deployment standards. “It’s vitally needed, because it’s going to affect all of humanity,” he emphasised.

Last year, Amodei had predicted that by 2026-27 Anthropic would have a model able to do everything a human could do, at the level of a Nobel laureate, across many fields. When reminded of that, Amodei responded that “it’s hard to know exactly when something will happen”, but insisted, “I don’t think it’ll turn out to be that far off”.

Amodei then explained Anthropic’s approach, and therefore the basis for his prediction, which sees the AI company develop models that are proficient in coding and AI research. “We would use that to produce the next generation of models and to speed up the loop,” said Amodei. He illustrated the progress in this direction by saying that engineers within Anthropic no longer write code — they let the Claude models write it, and the humans step in to edit.

“We might be 6 to 12 months away from when a model may be doing all of what software engineers do end to end. And then it’s a question of how fast that loop closes,” Amodei predicts.

Slowing pace of evolution

With these challenges to navigate, Hassabis believes there may be a need to slow the pace of AI’s evolution, to create a wider window for solutions. “It would be good to have a slightly slower pace than what we are currently predicting. Even my timelines say we can get this right societally,” he suggested, adding that this would require some coordination between AI companies.

“I prefer your timelines, that I’ll concede,” Amodei said, likely referencing Hassabis’ prediction that artificial general intelligence, or AGI, has a 50% chance of arriving by 2030, with a more realistic window of five to 10 years. Amodei has predicted AGI will be achieved around two years from now.

“We’re just one company, we are trying to do the best we can and operate in an environment that exists, no matter how crazy it is,” Amodei remarked. “But at least my policy recommendations haven’t changed. That not selling chips is one of the biggest things we can do to make sure we have the time to handle this.”

Amodei circled back to Hassabis’ predicted timelines and said he wished AI companies in the US had as much time. “But assuming I’m right and it can be done in one or two years, why not slow down to Demis’ timeline?” he asked. When moderator Zanny Minton Beddoes, Editor-in-Chief of The Economist, said “well, you could just slow down”, Amodei said it wasn’t possible.

“The reason we can’t do that is because we have geopolitical adversaries building the same technology, at a similar pace. It’s very hard to have an enforceable agreement where they slow down and we slow down,” Amodei responded, widening the scope of any change in AI development pace to include countries, and not just US-based AI companies.

Interestingly, Amodei didn’t mention the supposed cost advantage of DeepSeek’s models. The Chinese AI company claimed at the time to have spent around $5.5 million training its V3 model, a considerably frugal approach to delivering results similar to its US counterparts — something that took the likes of Google, OpenAI, Meta and others hundreds of millions of dollars in investment to achieve.

According to research by Epoch.AI, Google and OpenAI spent roughly between $70 million and $100 million in 2023 to train the Gemini 1.0 Ultra and GPT-4 frontier models respectively. Those costs have only increased in the years since.

Nvidia selling chips to China

That was the perfect cue for Amodei to say, “if we can just not sell the chips, then this isn’t a question of competition between the US and China, this is a question of competition between me and Demis, which I’m very confident we can work out.” The Anthropic CEO made his views clear on Nvidia’s intention to ship AI chips to Chinese companies, taking aim at the company’s CEO Jensen Huang.

Earlier this month, at the Consumer Electronics Show in Las Vegas, Huang had said Nvidia was seeing “very high” customer demand in China for its H200 AI chips. “We’ve fired up our supply chain, and H200s are flowing through the line,” Huang said at the time, implying the company was preparing stock for shipments to China.

Amodei’s comments come against the backdrop of an intriguing scenario unfolding around AI chip sales. The US government, earlier this month, allowed Nvidia to export its powerful AI chips to companies based in China. The Chinese government has reportedly, in response to the US policy, blocked all imports of Nvidia’s chips, including the latest-generation H200 silicon.

Amodei insisted that this isn’t a question of just the timescale, but of the significance of the technology. “It is more a decision like are we going to sell nuclear weapons to North Korea because that produces some profit for Boeing and we can say the cases were made by Boeing and the US is winning,” he explained. Amodei called it a trade-off that doesn’t “make sense”.

The Anthropic CEO then reminded everyone of a policy that mandated the removal of Huawei equipment from US telecom networks a few years ago, a move many other countries later followed, over fears of proliferation and backdoor access by the Chinese government.

