Airtel’s spam-fighting weapon, and OpenAI’s Sora Turbo marks a milestone in AI development



Now is a good time to talk about humanity’s achievements (or are they the achievements of machines? That line is getting hazier), before we get to the inevitable existential questions we now face with increasing regularity. This may be a little unexpected, but let me start our conversation this week with the fight to curb spam calls and messages. More than being an annoyance, these are an outright threat designed to deceive unsuspecting users. You may have heard about OTP scams, the most common method of stealing money from users who are not well-versed with such tactics. Bharti Airtel, India’s largest mobile service provider, launched a first-of-its-kind network-level spam warning for incoming calls and SMS a few months ago.


Enough time has passed for Airtel to share some data, and the trends are clearly visible.

  • Airtel says it has flagged 8 billion spam calls and 0.8 billion spam SMS in the first 2.5 months of its AI-powered spam detection solution, which covers all users on the Airtel network.
  • This spam labeling has alerted approximately 252 million unique customers to suspicious calls, and the number of customers answering calls carrying these labels dropped by 12%. Turns out, 6% of all calls on the Airtel network have been identified as spam, as have 2% of all SMS.
  • Users in the Delhi, Andhra Pradesh and Western UP telecom circles receive the highest number of spam calls (what’s this with profiling spammers by geography?).
  • Interesting data points here: 76% of all spam calls target male customers; users in the 36-60 age group received 48% of all spam calls, while only 8% of spam calls landed on senior citizens’ handsets (this is reassuring, to an extent).
  • I find this quite intriguing – Airtel’s data indicates that about 22% of all spam calls are received on smartphones priced between ₹15,000 and ₹20,000. Does this have anything to do with leaky apps selling user data?
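The headline numbers invite a quick back-of-the-envelope check. Assuming (my assumption, not Airtel’s) that the flagged counts and the spam percentages refer to the same 2.5-month window, the figures imply a network handling well over a billion calls a day:

```python
# Back-of-the-envelope check on Airtel's reported figures.
# Assumption (mine, not Airtel's): the flagged counts and the
# spam percentages cover the same ~2.5-month (75-day) window.

SPAM_CALLS = 8_000_000_000      # flagged spam calls
SPAM_SMS = 800_000_000          # flagged spam SMS
SPAM_CALL_SHARE = 0.06          # 6% of all calls identified as spam
SPAM_SMS_SHARE = 0.02           # 2% of all SMS identified as spam
DAYS = 75                       # roughly 2.5 months

total_calls = SPAM_CALLS / SPAM_CALL_SHARE
total_sms = SPAM_SMS / SPAM_SMS_SHARE

print(f"Implied total calls: {total_calls / 1e9:.1f} billion")
print(f"Implied calls per day: {total_calls / DAYS / 1e9:.2f} billion")
print(f"Implied total SMS: {total_sms / 1e9:.1f} billion")
```

That works out to roughly 133 billion calls in the period, which gives a sense of the scale at which any network-level AI filter has to operate.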

It is nice to see a mobile service provider taking the initiative to integrate something at the network level. But, as I noted when this first launched, a user has no input or manual intervention in marking (or correcting a mislabeled) call or message. This is entirely Airtel’s implementation. Airtel retains many of the factors that define the final spam label for any given number, and the methodology remains largely opaque. Understandably, they would not want competitors learning the tricks of the trade. But for a user, spam labeling by the network could mean missing a call they were otherwise waiting for. This is unlike Truecaller, which provides identifying details about a call from an unknown number, along with whether it is flagged as spam.

This leads me to another important front in the fight against spam. Truecaller, by far the most popular spam detection app worldwide, is finally getting the access on iPhones it has always deserved (but was never allowed). I’ve been testing the beta version of Truecaller on iOS 18.2 RC for the past week (that’s the release candidate; roughly the final version before wide consumer release), and though I won’t draw any performance conclusions yet, Live Caller ID on the iPhone seems to be identifying incoming calls from unknown numbers 9 times out of 10 (trust me, I get a lot). The caller ID lookup integrated into the call screen in iOS 18.2 feels more intuitive than anything I’ve seen on Android phones so far. It is good that Apple is finally giving Truecaller the kind of reach it deserves, so it can give users this layer of warning against spam and scam calls. More details on this in the coming weeks, once we have the final releases of iOS 18.2 and Truecaller.

Intelligence

The second achievement (can we call it that?) is artificial intelligence. Generative video is the chapter we are now beginning to write in this rapidly growing book about artificial intelligence (AI). We heard the first hints earlier this year, but they were accompanied by a word of caution. OpenAI announced Sora in February, but avoided releasing it to anyone other than ‘red teamers’ assessing risk and accuracy. In October, Meta talked about Movie Gen, its AI video generator, but that too is not open to the public. Adobe made a similar promise: the models would be released once they were safe for the public (that is, you and I) to use. It seems that time has truly come upon us.

As part of OpenAI’s 12-day shipping spree (I won’t divulge my opinion on this elaborate shindig of a method), the AI company says the Sora text-to-video tool is now available to ChatGPT Plus and Pro subscribers. In fact, instead of the Sora model that was demo-ed earlier this year as a first glimpse of its potential (I have to admit, it was pretty impressive even then), you’ll now be using Sora Turbo.

The basic premise is that generative AI will create videos in much the same way it has been generating images for us over the past few years: either from a text prompt, or guided by a shared media file. For Sora Turbo, there are some important details to keep in mind.

  • Users can generate videos at up to 1080p resolution, with a maximum duration of 20 seconds, in widescreen, vertical or square aspect ratios. That last bit makes these clips ready for social media too.
  • As a baseline, Sora is part of the Plus subscription (that’s around ₹1,999 per month), which gives a user enough credits to create 50 videos per month at 480p resolution, or fewer at 720p. If you want more, the Pro plan includes 10 times the usage, higher resolutions and longer durations. Keep in mind, it currently costs $200 per month. “We are working on optimized pricing for different types of users, which we plan to make available early next year,” OpenAI says.
  • Still, a word of caution from OpenAI: the version of Sora now being released often struggles with unrealistic physics and with complex actions over long durations. In terms of generation speed, though, Sora Turbo is much faster than the Sora model previewed earlier this year.
  • OpenAI says all Sora-generated videos carry C2PA metadata, which helps identify a video as having come from Sora rather than straight from a camera. This is important at a time when it is becoming difficult to establish transparency separating generated content from actual media. Adobe played a key role in putting together C2PA, and OpenAI is a member, alongside Google, Meta, Microsoft, Intel, and TruePic.
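The pricing tiers above translate into a rough per-clip cost. A small sketch, under my own assumption that the full monthly allowance is used at the lowest resolution (and taking “10 times more usage” literally for Pro):

```python
# Rough per-clip cost under each Sora plan. My assumptions:
# the full monthly allowance is used at the lowest resolution,
# and Pro's "10 times more usage" means 10x the video count.

PLUS_PRICE_INR = 1999           # ChatGPT Plus, per month (approx.)
PLUS_VIDEOS = 50                # 480p clips per month on Plus

PRO_PRICE_USD = 200             # ChatGPT Pro, per month
PRO_VIDEOS = PLUS_VIDEOS * 10   # "10 times more usage"

print(f"Plus: ~Rs {PLUS_PRICE_INR / PLUS_VIDEOS:.0f} per 480p clip")
print(f"Pro: ~${PRO_PRICE_USD / PRO_VIDEOS:.2f} per clip")
```

In other words, roughly ₹40 per short 480p clip on Plus, which is why OpenAI’s promised “optimized pricing” next year will matter for anyone generating video at volume.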

Our analysis of generative video, innovation and risks…

Some context about the generative video landscape is important. Adobe detailed the public release of the Firefly video model at its MAX keynote in October (mighty capable, that one), along with how Adobe has integrated it into its apps. Generative Extend in Premiere Pro, the video editing suite, uses the Firefly video model to create footage that can fill gaps in the B-roll being edited, smooth out transitions, or hold a frame for longer to aid more precise editing. There is a multiplicity of methods here: text prompts with more descriptive detail for better elements, sharing a reference image alongside the text prompt, and integration within Adobe’s existing apps (like the Premiere Pro example I detailed).

And now we should worry, because humans habitually overdo things in a wave of excitement. The losses are realized only later.

The problem of AI and deception is just as real. We thought things would get bad only once artificial general intelligence (AGI) finally came into the picture. Humanity did not have to wait that long. Apollo Research, which calls itself an “AI safety organization”, has conducted detailed testing on OpenAI’s o1, Anthropic’s Claude 3.5 Sonnet and Claude 3 Opus, Google’s Gemini 1.5 Pro, and Meta’s Llama 3.1 405B frontier models, and has come to a simple (yet very worrying, if you trust AI) conclusion – they all exhibit in-context scheming capabilities.

Here’s a quote from the research, which you must read: “They recognize scheming as a viable strategy and readily engage in such behavior. We find multiple different scheming behaviors: models strategically introduce subtle mistakes into their responses, attempt to disable their oversight mechanisms, and even exfiltrate what they believe to be their model weights. Additionally, this deceptive behavior proves persistent; for instance, when o1 has engaged in scheming, it maintains its deception in over 85% of follow-up questions.”

This raises concerns that current AI models, and even the smarter models coming in 2025 and beyond, may find ways of self-preservation if they sense a threat to their existence, bypassing human control. OpenAI, in its own report, takes cognisance of this: “Apollo Research recommends monitoring for in-chain-of-thought scheming during deployment in agentic high-stakes settings such as automated AI research for next-generation frontier models. This is partly to better understand the risk of current models as well as to prepare such a monitoring pipeline for more capable future models. Carrying out monitoring for such purposes is an ongoing area of research and has various open challenges,” the report explains.

This situation is evolving, and it will not end well for anyone unless greater attention is paid to AI safety and transparency mechanisms. Perhaps even more attention than is being paid at present.

The AI landscape, as we decode it…

Generation

There’s more AI to talk about this week. Turns out, Meta’s aggressive generative AI counter to OpenAI’s ChatGPT, Microsoft Copilot, and Google Gemini has performed better than expected. At least, that’s what it looks like – Mark Zuckerberg says Meta AI now has about 600 million monthly users worldwide. Additionally, Meta’s latest Llama 3.3 70B model has also been released. Going back to the user base statistics for a moment – what else did Meta expect when it integrated Meta AI so neatly into every popular app in its portfolio, including WhatsApp and Instagram? You will end up using Meta AI, even if you don’t really want to.

That said, Zuckerberg has confirmed that Llama 4 will arrive at some point next year, with the Llama 3.3 iteration slated to be the last of 2024’s major releases. I remember talking about this a few weeks ago: Llama 4 is being trained on a cluster of GPUs (graphics processing units; computing hardware) that is “bigger than anything” used for any model so far. Apparently, the cluster exceeds 100,000 Nvidia H100 Tensor Core GPUs, each of which costs about $25,000. That is much larger than the 25,000 H100 GPUs used to develop Llama 3.
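Those cluster sizes are worth translating into money. My own arithmetic below, using the ~$25,000 per-H100 figure cited above; it covers GPU cost only, excluding networking, power and data centre overheads:

```python
# What the reported cluster sizes imply in GPU cost alone
# (my arithmetic, using the ~$25,000 per-H100 figure cited
# in the text; excludes networking, power, and facilities).

H100_PRICE_USD = 25_000

llama4_gpus = 100_000           # reported lower bound for Llama 4
llama3_gpus = 25_000            # reported for Llama 3

llama4_cost = llama4_gpus * H100_PRICE_USD
llama3_cost = llama3_gpus * H100_PRICE_USD

print(f"Llama 4 cluster GPUs: ${llama4_cost / 1e9:.1f} billion")
print(f"Llama 3 cluster GPUs: ${llama3_cost / 1e9:.3f} billion")
print(f"Scale-up factor: {llama4_cost / llama3_cost:.0f}x")
```

That is at least $2.5 billion in accelerators alone for Llama 4’s training run, a 4x jump over the previous generation, which puts the “bigger than anything” claim in perspective.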

