Conflict costs and tougher perspectives for AI| Business News



Cognitive warmup. I’ll be blunt. The really, really dark cloud of the geopolitical conflict unfolding in West Asia has the slimmest of silver linings (albeit a temporary one; though one would always pray war too is temporary, if it must happen at all): the AI bros have gone mostly silent (except Huang; I’ll get to that in a moment). There is probably a realisation that the world doesn’t revolve around AI, irrespective of the bubble-inflating assumptions that have been loudest over the past couple of years. Replacing humans with AI en masse being one of them.

For representational purposes only.

More to the point, there’s a realisation with every day this war goes on: we can clearly see the fragility of the very foundations of AI. The conflict has already pushed up energy risk (and therefore prices) through the Strait of Hormuz, which the International Energy Agency says handled nearly 15 million barrels a day of crude in 2025. There is a direct hardware angle too, with tech supply chains beginning to slow down. Helium is not a minor issue either; it is used in semiconductor manufacturing processes, and any disruption there will ripple through servers and the broader electronics supply chain.

There is a financing angle too. Over the past year, the Gulf had become one of the most important pools of capital for global AI infrastructure, compute, and “Sovereign AI” ambitions. You might want to revisit Elon Musk confusing millions with billions of dollars, and conjuring a “bazillion trillion”. The World Economic Forum had noted that GCC states were increasingly investing in AI as a strategic capability, while Nvidia and Musk had also spoken of massive infrastructure buildouts (Musk’s xAI had locked in a $3 billion investment from Saudi firm Humain earlier this year). If the conflict persists, that money does not necessarily disappear, but the stream will slow and investments will turn more defensive. There will, of course, be a conversation about prioritising domestic reconstruction over AI bets.

The reality of costs and slop

A few days ago (chances are you missed this, because of course the AI bros didn’t talk about it loudly on social media), OpenAI shut down the Sora video generation app. A rather pragmatic-sounding post on their website points to a reality where the video-generating AI consumed massive amounts of compute without the financial returns to justify those costs.

“The creativity that emerged exceeded what we could have expected. Our focus now is on making this transition as clear and thoughtful as possible. We’ll share timelines and access details so you can plan accordingly, and we’re exploring ways to support export and preservation of your work,” the statement says.

OpenAI’s reality is simple: it has to lose less money now. So much so that the much talked about “adult mode” erotic conversations in ChatGPT seem to have been shelved for now. The AI company hopes a more focused approach to productivity features will allow it to compete better with Google and Anthropic. I’d still say Sora leaves behind a legacy, one that played its part in giving us slop-filled social media feeds and an absolute collapse of trust in distinguishing the generated from the real.

Vague mission, and victory

Getting back to the point of AI bros mostly not talking about things bordering on nonsense. Jensen Huang showed up on the Lex Fridman podcast (my views about podcasts, in general, are better kept to myself), and confidently proclaimed, “I think we’ve achieved AGI”. That’s artificial general intelligence (if you didn’t know this already, trust me, you’ve not missed anything in life).

These calls of AGI keep cropping up from time to time, which I’d say is a desperate cry for attention. It is even more impressive for Huang to say this, since no one has so far agreed on what AGI actually means, or whether it is even the correct terminology. But well, Huang now has the chance to plant the flag in the terrain, claim victory in the AGI race, and define this obscure pursuit in whatever way is convenient for Nvidia’s business. To be fair, vagueness can often be a benchmark, as AGI has been all this while.

Switching chatbots

To be fair, for all the perceived intelligence of artificial intelligence, these chatbots have made it nigh impossible to shift from one to another while carrying over your complete conversation history, and with it the accumulated memory and context. Google has finally found a way around this, with new switching tools to bring that historical conversational context from other AI apps into Gemini.

If you wish to, here’s largely the path to follow:

Open current AI chatbot > Go to ‘Settings’ and look for options such as ‘Data Controls’ or ‘Privacy’ > Select the option to export your data.

For now, Gemini supports imports up to 5 GB in size, in the .zip format. Google has added “import chats” and “import memory into Gemini” options: point the zip file to the former, and copy-paste a conversation response from another AI chatbot into the latter. Google says the ability to switch chats is free for all Gemini users.
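If you want to sanity-check what an export .zip actually contains before feeding it into an import tool, a minimal sketch is below. It assumes the archive carries a conversations.json file listing chats with a title field (ChatGPT-style exports use this layout; other chatbots may package things differently, so the function falls back to listing the archive contents):

```python
import json
import zipfile


def list_conversations(export_zip_path):
    """Peek inside a chatbot data-export .zip before importing it elsewhere.

    Returns a list of conversation titles if the archive follows the
    assumed conversations.json layout; otherwise returns the raw list
    of files in the archive so you can inspect it yourself.
    """
    with zipfile.ZipFile(export_zip_path) as zf:
        names = zf.namelist()
        if "conversations.json" not in names:
            # Unfamiliar layout: report what is in the archive instead.
            return names
        with zf.open("conversations.json") as f:
            conversations = json.load(f)
        # Untitled chats get a placeholder rather than raising KeyError.
        return [c.get("title", "(untitled)") for c in conversations]
```

This is only a pre-flight check on your side; the actual import still happens through Gemini’s own tools.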

Neural Dispatch is your weekly guide to the rapidly evolving landscape of AI. Want this newsletter delivered in your inbox? Subscribe here.

