Death by cyberchondria – The Hindu

Trigger warning: This article contains references to sexual abuse and suicide. Please use your discretion in deciding what, when and where to read.

The day 30-year-old Sanju Devi allegedly murdered her two children – a girl and a boy aged 10 and 7 – in Rajasthan’s Bhilwara district, she called her father-in-law Prabhu Lal. Sanju’s husband Rajkumar Teli, 32, says, “She told my father that she had cancer for which there was no cure. She said she had killed our children because no one would be able to take care of them after her death.”

After this, Sanju allegedly attempted to die by suicide. Teli’s father called her back, and since he was away from the house, he alerted the neighbours, who managed to enter the home, which was locked from inside. They took Sanju to the Community Health Centre in Mandalgarh, 16 km away. She was later referred to Mahatma Gandhi Government Hospital in Bhilwara, where she remained under medical supervision till January 16. After being discharged, she was arrested and a case of murder was registered under Section 103(1) of the Bharatiya Nyaya Sanhita based on the complaint of 50-year-old Lal.

Teli, the owner of a tent house in Manpura village, says that his wife had deep love for the children. “I still can’t believe she could do that,” he says.

In the weeks leading up to January 11, Sanju was worried. She had ulcers in her mouth and pain in her stomach. Teli says he was preparing to take her to a specialist in Ahmedabad for a consultation after treatment in Bhilwara failed.

He recalls that when Sanju had a minute to herself, she would be on her phone and fall asleep while watching content on the device.

Later, Sanju told the police that she had seen an online video which claimed that long-term ulcers could lead to cancer. Her mind took her down a rabbit hole of medical misinformation. Police say that due to her health problems, she had developed an intense fear of death.

Mandalgarh Deputy Superintendent of Police BL Vishnoi says, “Investigation revealed that Sanju Devi was regularly watching reels on Instagram about cancer and the association of mouth ulcers with the deadly disease.”

Now, she is in “severe mental distress”, he says. Vishnoi says, “Her medical examination showed no signs of cancer. Our investigation so far has not found any indication of a family feud.” He says he has not seen or heard of any other case where a person took such a drastic step because of wrong health information.

Manpura sarpanch Chanda Devi says the village, which has a population of about 5,000, had no complaints against Lal’s family. Neighbours in the Balaji Ka Chowk area were shocked by the crime. Kamla Devi, a neighbour, says Sanju spent a lot of time with her children – feeding them, playing with them and getting them ready for school.

Another neighbour, Sita Devi, wishes Sanju had talked to her about her fears. “I met and talked to her almost every day, but I didn’t get a hint of her mental distress.”

As India nears one billion internet subscriptions and access to health information through social media grows, algorithms are amplifying health anxieties. If the 2020s are the age of fake news, medical misinformation is a big part of it. Influencers on social media often make health claims that are not based on current scientific consensus, and these claims are amplified by algorithms designed to cater to concerns and fears.

What hypochondria, or illness-anxiety disorder, was to pre-digital times, cyberchondria is to the information age of this millennium.

A peer-reviewed research analysis in the International Journal of Indian Psychology describes cyberchondria as “an excessive, anxiety-induced online health search” that has emerged as “a significant mental condition in the digital age”.

Doctor-patient relationship breakdown

Googling symptoms has been a problem since the inception of search engines in the late 1990s. However, twenty years ago, people went looking for information. What has changed with social media and its recommendation engines is that information now finds its way to users. People also now risk prompting large language models that mirror their fears and confirm their concerns by offering concrete diagnoses.

Dr Siddharth Sahay, a Delhi-based oncologist who has been practising medicine for nearly two decades, says that since many common symptoms are associated with cancer, search results may routinely point users to it as an explanation. “It causes a lot of concern,” he adds.

“People don’t understand that it is difficult to say whether the Internet is completely wrong or completely right. Doctors make a detailed assessment based on the patient’s examination and history.” Searches and algorithms cannot do that.

Dr Thara Rangaswami, a Chennai-based psychiatrist, says it is “nothing new” for people without medical training to draw their own symptom-disease associations. This, she says, happened even before widespread Internet access. “Even 15 years ago, when there were newspaper articles on particular diseases like impetigo or haemangiomatosis, a few people reading them imagined they had that particular disease,” says Dr. Rangaswami. “They’d pick up symptoms from those articles and say, ‘Oh, maybe I have it.’”

Now, cyberchondriacs not only worry about the worst possible outcome but also question prescribed drugs because of the side effects listed online. Dr. Rangaswami says, “There is no drug that doesn’t have side effects, and Google will list about 20 of them. If it has something to do with sexual performance, for example, people get very, very upset. It’s a very distressing factor that many of us doctors experience.”

Cyberchondriacs are a small portion of patients overall, she says. “A vast majority want reassurance. In fact, they’ll tell you, ‘It’s been great talking to you, I feel a lot better.'”

However, many people are not aware of their condition or do not have access to a mental health professional.

The algorithmic multiplier

Dr. Sahay also points to issues of distrust in the medical system. For this distrustful class of patients, social media algorithms can be a force multiplier. Sanju, notably, had tried to get medical help.

For social media companies, one measure of success is how long a “user” – yes, the industry borrows its terminology from the language of addiction – stays on the platform. A time-tested way to achieve this is to recommend content similar to what someone is already engaging with.

“People are often not searching for very specific things. They will search for and watch a video on, say, an oral disease. The recommendation engine, guided by the user’s viewing history and its recency, will then place more such videos on the home page and in the related-videos section,” explains Digvijay Singh, co-founder of Contrails AI, an online content-safety start-up. The more the user watches, the more the loop reinforces itself, he says.
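Singh’s description can be made concrete with a toy sketch. This is only an illustration of the general principle – the topics, titles and recency weighting here are made up, not any platform’s actual system: candidate videos are scored by how often their topic appears in the watch history, with recent views counting more.

```python
from collections import Counter

def recommend(history, candidates, top_k=3):
    """Toy recommender: score candidates by how often their topic
    appears in the watch history, weighting recent views more."""
    weights = Counter()
    for age, video in enumerate(reversed(history)):  # age 0 = most recent view
        weights[video["topic"]] += 1.0 / (1 + age)
    # Highest-weighted topics rise to the top of the feed.
    return sorted(candidates, key=lambda v: weights[v["topic"]], reverse=True)[:top_k]

history = [
    {"title": "Mouth ulcer remedies", "topic": "oral-health"},
    {"title": "Signs of oral cancer", "topic": "oral-health"},
    {"title": "Cricket highlights", "topic": "sport"},
    {"title": "Ulcers that never heal", "topic": "oral-health"},
]
candidates = [
    {"title": "10 cancer warning signs", "topic": "oral-health"},
    {"title": "Street food tour", "topic": "food"},
    {"title": "Is your ulcer serious?", "topic": "oral-health"},
]

for v in recommend(history, candidates):
    print(v["title"])  # oral-health videos rank first
```

Because three of the four watched videos concern oral health, both oral-health candidates outrank the unrelated one – a feed built this way keeps serving the viewer more of what is already worrying them.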

There are some safety measures in place to help users avoid falling into these rabbit holes, Singh says. “If users are watching a lot of videos on suicide and depression, YouTube will specifically point them to mental health helplines.”

Sprinklr, a company that provides enterprise solutions, describes social media algorithms as “complex rule sets powered by machine learning to decide what content appears in your feed”.

It talks about how they work. “The goal of every social platform is to deliver the most relevant content at the right time and place. To do this, they use algorithms driven by user actions: likes, follows, comments and more. The more relevant the content, the higher the engagement, which creates a new tranche of data to fuel the next round of recommendations. And the cycle continues.”

From the “chronological feed” before 2015, social media was driven by “engagement-based sorting” between 2016 and 2020. Then came the “AI-powered feed”, with 2025 seeing “real-time personalisation” that “adjusts as you scroll”.

This means that even a momentary pause on a video is recorded as a signal, and models trained on such signals from millions of users “predict what content you will engage with”.
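The cycle described above can be simulated in a few lines. This is a minimal, deterministic sketch with invented numbers – the weights, bump size and topics are assumptions for illustration, not any platform’s real ranking: each round, a topic’s share of the feed is proportional to its interest weight, and every pause on that topic bumps the weight for the next round.

```python
def simulate(rounds=5, feed_size=10, topic="health"):
    """Toy engagement loop: the share of 'health' items in a feed
    grows because every pause on them raises the topic's weight."""
    weights = {"health": 1.0, "sport": 1.0, "food": 1.0, "news": 1.0}
    shares = []
    for _ in range(rounds):
        # Feed slots are allotted in proportion to current interest weights.
        shown = round(weights[topic] / sum(weights.values()) * feed_size)
        # Engagement signal: each pause on the topic bumps its weight.
        weights[topic] += shown * 0.5
        shares.append(shown / feed_size)
    return shares

print(simulate())  # [0.2, 0.4, 0.6, 0.7, 0.8]
```

Starting from an even 25% share, a single worried topic fills 80% of the feed within five rounds – the “cycle continues” dynamic in miniature.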

With the algorithmic push, misleading social media content is far more successful than its truthful counterparts. Researchers from Sathyabama Dental College and Hospital, Chennai, wrote in the Journal of Pharmacy and Bioallied Sciences in 2024 that “misleading information had more positive engagement metrics than useful information”, and that oral health-related misinformation on YouTube was “in large quantities” compared to what comes up with a simple search.

Credentials also seem to matter little. “About 75% of the videos containing misleading information were created by non-professionals and only 15% of the videos containing misleading information were created by medical professionals,” the research paper said.

Black box of information

Cyberchondriacs feed on both poorly contextualised information and outright medical misinformation. Hansika Kapoor, a psychologist and researcher at the research organisation Monk Prayogshala, says that at its core, believing medical misinformation is an act of trusting authority, and that takes on its own contours in India. “We live in a country that is extremely susceptible to the influence of authority, and authority is whatever you perceive as authority,” Kapoor said in a phone interview from Mumbai.

Conspiratorial thinking, says Kapoor, “provides people with a way to find meaning, it provides them with some form of comfort, and the ability to find meaning for an absurd thing that happened to them, which is extremely unlikely, but possible.”

Medicine is one of those areas that can feel like a “black box” to a large portion of the population – hence, the slide down the rabbit hole is primed. A cyberchondriac trying to make sense of an absurdity has only to show up, and the rabbit hole sucks them in.

Kapoor calls structures such as governments and science “black box institutions”. “You don’t really understand how and why they work. It promotes more conspiratorial thinking.”

This makes people more susceptible to overly simplified information online. Medical misinformation research calls this “bullshit receptivity”, she says.

The Big Tech problem

Social media platforms have policies in place against health misinformation. For example, Meta says it prohibits “promoting or advocating harmful miracle cures for health problems”, and posts may be removed if they are “likely to contribute directly to the risk of imminent physical harm”. Cyberchondria itself, however, is not addressed.

YouTube bans content that “contradicts health authority guidance on the treatment of specific health conditions” and often shows pop-ups alongside videos containing medical misinformation. Neither Google, which owns YouTube, nor Meta, which owns Instagram and Facebook, responded to questions from The Hindu.

It’s not that big tech companies aren’t aware of the need to provide accurate medical information. In fact, Google signed a partnership with Apollo Hospitals as far back as 2018 to provide reliable, doctor-vetted information when users search for symptoms in India. But cyberchondriacs often scroll past the first result, skipping the reliable sources.

Aparna Sridhar, a clinical professor at UCLA Health, wrote on her website in 2023, “At a time when, according to a recent survey, 33% of Gen Z turned to TikTok for health information before their doctors, one must question where this will take us.”

“Cyberchondria is very real. As professional healthcare providers, we must understand its implications for both our patients and our practices, and be prepared to address cyberchondria as a part of our educational toolkit for the future.”

mohammed.iqbal@thehindu.co.in

aroon.dep@thehindu.co.in

(If you are in distress, contact these helplines: Aasra 022-27546669 and Tele-MANAS 1-800-891-4416.)

