Owen Larter of Google DeepMind | Business News

At a time when artificial intelligence (AI) is no longer a laboratory breakthrough but a geopolitical and developmental force, Google DeepMind is recalibrating how it works with governments. This week, the company announced its National Partnerships for AI with the Indian government, which include working with the Anusandhan National Research Foundation (ANRF) to make its scientific AI models more accessible, as well as supporting IIT-Bombay with a $50,000 grant to use Gemma to process Indic-language health governance and policy documents and build a novel ‘India-Centric Trait Database’.

Owen Larter, senior director and head of frontier policy and public affairs at Google DeepMind. (Official image)

For Owen Larter, who is senior director and head of frontier policy and public affairs at Google DeepMind, this is not merely an expansion in a large market. It reflects a view that India’s position is uniquely catalytic: with strong ties to the developing world, India must play a role in shaping how AI’s benefits are distributed more equitably across geographies. Yet Larter argues that the responsibility to ensure AI works safely for everyone also lies with the companies building frontier systems, which must actively ensure governments understand what these technologies can do. Transparency, he suggests, is a prerequisite for effective regulation. Edited excerpts:

Q. Google DeepMind has been vocal about extreme risks from advanced AI. How do you prioritise near-term harms versus longer-term existential concerns in your policy work?

Owen Larter: This is a really important conversation, and obviously our mission is to develop advanced AI and put it into the world responsibly. We’re excited about how people are using this technology, such as leading Indian scientists using AlphaFold to develop new types of cancer therapies. If we’re going to continue to make progress here, we need to make sure this technology is trustworthy, and we need to continue to build out governance frameworks.

There is a little bit of a danger in segmenting the different risks we need to address. This is going to be a continuous journey of coming up with durable frameworks. There are a few principles we must work to. We need to keep building a really solid scientific understanding of the technology: what it can do, its capabilities, and its limitations. It’s then critical to work with partners to understand the impact this technology will have when it’s used in the real world, and to test the mitigations.

That’s really an approach we need to apply across whatever the set of risks is, whether it’s protecting child safety or making sure our systems are useful in different languages, through to the critical risk of advanced frontier systems developing capabilities that could be misused by bad actors to perpetrate a cyberattack or create a biological weapon. DeepMind has had its Frontier Safety Framework since 2024, and we iterate on it over time. This will not be a static issue. AI governance is never going to be a solved problem; it is an ongoing journey.

Q. We are witnessing differing regulatory philosophies emerging globally. Is convergence desirable, or should we expect regulatory pluralism?

Owen Larter: I think there’s certainly convergence in certain places. All of these different regulatory philosophies are trying to do the same thing; every country wants to use AI across its economy. But there are risks that need to be understood and addressed. We are seeing some different approaches, where the EU has moved first and gone a little bit further than other jurisdictions. The US is taking a slightly different approach, and there are now some state regulations addressing frontier governance in California and New York. That will continue to develop.

We want to lean in and be helpful to governments worldwide, helping them understand the technology. It’s a responsibility on our part to share information about what it can do today and where it’s going. One part of the conversation that has been really heartening this week is the attention to the importance of developing standardised best practices for testing systems for risks, and applying mitigations before a system goes into the world.

Q. The AI Safety Summit series represents international coordination. What mechanisms are proving most effective in translating high-level commitments into policy action?

Owen Larter: The India AI Impact Summit in particular has been really important in shining a light on some important issues that haven’t been addressed as much in previous summits, particularly the importance of spreading access and opportunity with AI and making sure that you’re putting it into the hands of people. The multilingual discussions that have happened are essential. It’s something that we’re leaning into and trying to do more of, with our grant for IIT-Bombay. Regular discussion at a global level is really important. I’m really pleased this will be carried forward in Switzerland and then the UAE.

Q. Given India’s strengths in digital public infrastructure and its scale of deployment, what unique contributions could it make to global AI governance discussions?

Owen Larter: It’s been absolutely fundamental that as the technology matures, discussions around how to use it mature alongside it. It’s great that this summit series has broadened out slightly. I think India is going to be absolutely foundational to how this technology is developed and used. It is clear that India is going to be an AI powerhouse in its own right. That’s why we are continuing to invest here.

Q. At what point does a model become ‘frontier’ for governance clarity?

Owen Larter: We need to think about different types of systems, to advance understanding of how they work and the risks they pose. From a legal perspective, definitions are important, but it is easy to get caught up in the semantics. We think of frontier systems as the most advanced systems that exist at any one point. Of course, the frontier continues to advance, with systems becoming more capable.

One of the reasons we’re interested in frontier systems is that they may develop capabilities that could pose risks. Our framework is a monitoring mechanism: as we continue to push the frontier, we test these systems to see whether they develop capabilities that may pose some of these risks around biological weapons or cyber security, or gain capabilities that need attention to make sure humans continue to manage these systems in a safe way. It’s interesting to see this increasingly becoming standard across the industry. We’re proud that we moved early with our Frontier Safety Framework. Conversations across industry, government, and civil society about how to improve these frameworks are going to be critical to continuing progress.

