Cognitive warmup. Where is Aravind Srinivas these days? From making Google an offer for the Chrome web browser, to being the golden boy of the Indian online media in particular, Srinivas and Perplexity were missing from the AI summit in Delhi.

Also, you may not have realised that Perplexity doesn’t offer any Indian LLM on its platform (at least as I write this, early this week). I wonder why. Badly advised, or run out of steam? I hope it is neither, because that’s one company I hope does well. The Perplexity proposition is genuinely valuable for subscribers.
ALGORITHM
In this week’s conversation, decoding big developments of the past few days:
- Nvidia’s big cheque to OpenAI is suddenly not as big anymore.
- The slop problem for companies that replaced humans with AI.
- Agentic AI used its own assessment, and created a mess.
$30 billion in store credit!
Nvidia has finally realised it doesn’t have $100 billion to give to OpenAI to keep that boat afloat. Instead, the commitment has for now been recalculated to $30 billion, much of which Nvidia will immediately take back as revenue: OpenAI will reinvest much of this new-found capital into, guess what, buying Nvidia’s hardware. Since the outlay will likely be noted as an investment in Nvidia’s books while the hardware purchases flow back as revenue, it’s a win-win. And then Jensen Huang gets very annoyed at any mention of circular investments or creative accounting.
Slop cleaners
AI bros keep telling us that AI will take away some jobs and create new jobs in their place. Here’s one: slop cleaners. AI that replaced humans at enterprises with little foresight is creating so much slop (oh, Satya Nadella gets sentimental when we say “slop”) that those companies are now re-hiring humans to clean up the mess. Companies tried to save money with AI, because the AI companies promised them that future. Real intelligence will always be greater than artificial intelligence.
AI’s brilliant little mind
This week, I read a report that Amazon.com Inc.’s Kiro AI coding tool decided the code it was working with was inadequate, and autonomously deleted production code for an AWS service. That ended up triggering a 13-hour outage in December 2025. Mind you, this was the second such AI-linked disruption in months. The code built by human AWS engineers worked fine before this AI intervention. AWS can try to put on a brave face by calling all this a “misconfiguration”, but the reality doesn’t change.
THINKING
Now that the AI summit in New Delhi is done and dusted, normal service can resume on this newsletter. Call me optimistically pessimistic, but I will continue to address concerns about this technology, poke a pin into the hyperbole bubble and remain unflinchingly unimpressed by this trillion-dollar obsession, till it delivers something worthy of that obsession.
- Saying “AI” 150 times in a keynote isn’t it, I’ll be blunt. And that neatly takes me to the point about Sam Altman.
Yes, the same Sam Altman who is 40 years of age, and has eaten food every day of those 40 years of existence. Old enough, at some point, for two to three proper meals a day, perhaps. He said something really absurd while in Delhi for the shindig:
I’d like to point out a few things here (including my surprise that the host kept smiling and didn’t bother to counter-question; not everyone decodes AI beyond the press-release hype, admittedly).
First, a human consumes about 2,000 kilocalories (kcal) per day on average. Let’s do some calculations; I’m sure they’ll boggle your mind while driving home the point about the ridiculousness of the comments.
To convert this to kilowatt-hours (kWh), we use the conversion factor 1 kcal ≈ 0.00116 kWh. That means 2,000 kcal × 0.00116 ≈ 2.32 kWh daily, 2.32 kWh × 365 ≈ 847 kWh yearly, and 847 kWh × 20 ≈ 16,940 kWh over 20 years.
While OpenAI never released official figures, industry estimates put GPT-4’s training consumption at approximately 50 gigawatt-hours (GWh), which is 50,000,000 kWh. Set that against a human’s roughly 17,000 kWh of total food energy over 20 years, and training one model consumed about 3,000 humans’ worth of 20-year “training energy” in a single run.
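The arithmetic above is easy to check yourself. A minimal sketch, using only the figures already quoted in this piece (the 2,000 kcal/day average and the unofficial 50 GWh industry estimate for GPT-4):

```python
# Back-of-envelope check of the figures quoted above.
# All inputs are the article's estimates, not official data.
KCAL_TO_KWH = 0.00116            # conversion factor: 1 kcal ≈ 0.00116 kWh

daily_kcal = 2_000               # average human food intake per day
daily_kwh = daily_kcal * KCAL_TO_KWH      # ≈ 2.32 kWh per day
yearly_kwh = daily_kwh * 365              # ≈ 847 kWh per year
twenty_year_kwh = yearly_kwh * 20         # ≈ 16,900 kWh over 20 years

gpt4_training_kwh = 50 * 1_000_000        # 50 GWh expressed in kWh

# How many humans' 20-year food energy equals one training run?
humans_equivalent = gpt4_training_kwh / twenty_year_kwh   # ≈ 2,950, i.e. ~3,000

print(f"Daily: {daily_kwh:.2f} kWh; 20 years: {twenty_year_kwh:,.0f} kWh")
print(f"GPT-4 training ≈ {humans_equivalent:,.0f} humans' 20-year food energy")
```

The small gap between this output and the rounded figures in the text (16,940 vs roughly 16,936 kWh) comes purely from rounding 846.8 up to 847 before multiplying; the ~3,000-humans conclusion is unaffected.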
By the way, GPT-4 is already outdated. Humans aren’t, the last time I checked.
Secondly, this is a beautiful way to distract and draw a false equivalence when questions are raised about AI’s energy consumption. A rather strange attempt at escaping environmental scrutiny. It’s worth noting here that New Brunswick, New Jersey, has become the latest US city to tell the AI bros a few facts: its City Council has voted to cancel plans for an AI data centre and will instead build a public park.
Third, the very resources Altman is counting also go towards producing more humans, whose memories and skills remain useful for decades, till the body gives up at a certain age. Unlike AI models that become obsolete within a year, and AI chips that become absolutely redundant when the next generation is made at great cost. Both are never-ending cycles.
Human efficiency > AI. Period.
Fourth, this is a very coder-friendly moral equivalence between a baby and a spreadsheet. It reveals the thinking of someone desperate to sell you an idea that isn’t working as well as the salesmen expected, a clear disdain for humans (calling them meat computers if they happen to exist), and someone who has seemingly not had much experience of human love, friendships or emotions (call those human outputs, with differing weights). It is clearly a difficult economy, even for selling snake oil.
There are ongoing lawsuits against OpenAI, including wrongful death claims linking ChatGPT to at least three suicides and a murder-suicide since 2025. Altman’s sister Ann Altman hasn’t presented the AI bro as the best of personalities either (read up on that; it’s in the public domain). The point is simple: Altman should talk less about humans as spreadsheet numbers.
Think about it: if a random person said what Altman said so confidently, you’d have laughed them out of the room. But then, it is also a human trait to be unable to admit that many of you fell for this grift; course correction takes time, if at all.
Neural Dispatch is your weekly guide to the rapidly evolving landscape of artificial intelligence. Each edition delivers curated insights on breakthrough technologies, practical applications, and strategic implications shaping our digital future.
Want this newsletter delivered in your inbox? Subscribe here.




