Death by cyberchondria – The Hindu


Trigger warning: This article contains references to sexual abuse and suicide. Please use your discretion in deciding if, when, and where to read.

On the day Sanju Devi, 30, allegedly murdered her two children – a girl and a boy aged 10 and 7 – in Rajasthan’s Bhilwara district, she called her father-in-law, Prabhu Lal. “She told my father that she had cancer, for which there was no cure. She said she had killed our children because no one would be able to take care of them after her death,” says Rajkumar Teli, 32, Sanju’s husband.

Sanju then allegedly attempted to kill herself. Teli’s father phoned him, but as Teli was away from home, he alerted the neighbours, who managed to get into the house, which was locked from within. They rushed Sanju to the Community Health Centre in Mandalgarh, 16 km away. She was later referred to Mahatma Gandhi Government Hospital in Bhilwara, where she remained under medical supervision until January 16. She was arrested on being discharged and booked for murder under Section 103(1) of the Bharatiya Nyaya Sanhita based on a complaint lodged by Lal, in his 50s.

Teli, a tent-house owner in Manpura village, says his wife was deeply attached to the children. “I still cannot believe she could do this,” he says.

In the weeks leading up to January 11, Sanju was worried. She had mouth ulcers and abdominal pain. Teli says they were preparing to visit a specialist in Ahmedabad for consultation after treatment in Bhilwara failed.

He recalls that Sanju would be on her phone when she had a minute to herself and would go to sleep watching content on the device.

Later, Sanju told the police that she had watched online videos claiming that long-time ulcers could cause cancer. Her mind took her down a medical misinformation rabbit hole. The police say she developed an intense fear of death, triggered by her health issues.

“The investigation revealed that Sanju Devi was regularly watching reels on Instagram about the correlation of mouth ulcers with cancer and malignancy,” says Mandalgarh Deputy Superintendent of Police B.L. Vishnoi.

Now, she is in “severe mental distress”, he says. “Her medical examination did not show any signs of cancer. Our probe so far has not found any indication of a family feud,” Vishnoi adds. He says he has not seen or heard of a case where a person takes such an extreme step due to health misinformation.

Manpura sarpanch Chanda Devi says Lal’s family did not have any complaints against them in the village with a population of about 5,000. Neighbours at Balaji Ka Chowk locality were left stunned by the crime. One neighbour, Kamla Devi, says Sanju spent a great deal of time with her children – feeding them, playing with them, and getting them ready for school.

Another neighbour, Sita Devi, wishes Sanju had spoken to them about her fears. “I used to meet her and talk to her almost every day, but I didn’t get a hint about her mental distress.”

As India hits one billion Internet subscriptions and health information proliferates through social media, algorithms feed health anxieties. If the 2020s are the age of fake news, medical misinformation forms a large part of it. Influencers on social media often make health claims that are not based on current scientific consensus. These claims are amplified by algorithms designed to cater to anxieties and fears.

What hypochondria, or illness-anxiety disorder, was to the pre-digital era, cyberchondria is to the information age of this millennium.

A peer-reviewed research analysis in the International Journal of Indian Psychology describes cyberchondria as “an excessive, anxiety-driven online health search” that has emerged as a “significant mental condition in the digital age”.

Doctor-patient disconnect

Googling symptoms has been a problem almost since the inception of search engines in the late 1990s. Twenty years ago, though, people went in search of information. What has changed with social media and its recommendation engines is that information now finds its way to users. People also run the risk of large language models mirroring their fears and confirming their anxieties by throwing up convincing diagnoses.

Dr. Siddharth Sahai, a Delhi-based oncologist who has been practising medicine for nearly two decades, says that since many symptoms are associated with cancer, search results can regularly point users to it as an explanation. “This causes a lot of anxiety,” he adds.

“What people don’t understand is that it is difficult to say that the Internet is completely wrong or right. Doctors make detailed assessments based on an examination of the patient and their history.” Searches and algorithms cannot do this.

People spiralling out based on symptom-disease links without medical training is “nothing new”, says Chennai-based psychiatrist Dr. Thara Rangaswamy. It predates widespread Internet access, she says. “Even 15 years ago, when there were newspaper articles on particular illnesses like impetigo or hemangiomatosis, some people who read them imagined that they had that particular illness,” Dr. Rangaswamy says. “They would pick up the symptoms from those articles and say, ‘Oh, maybe I have this.’”

Now, cyberchondriacs are not only worrying about the worst possible outcome but also questioning medication due to the listed side effects. “There is no medicine that does not have side effects and Google will list some 20 side effects. If it has something to do with sexual performance, for example, people get very, very disturbed. That is the very disturbing factor that many of us as doctors experience,” says Dr. Rangaswamy.

Cyberchondriacs are a small slice of patients overall, she says. “A large majority wants reassurance. In fact, they’ll tell you, ‘It’s been so good talking to you, I feel much better.’”

However, not many have awareness or access to a mental health professional.

Algorithm multipliers

Dr. Sahai also points to issues of distrust of the medical system. For this untrusting slice of patients, social media algorithms can be a force multiplier. Sanju, for instance, had tried to access medical help.

For social media companies, one measure of success is how long a user stays on their platform – yes, companies borrow their vocabulary from the language of addiction. A time-tested way of achieving this is to recommend content similar to what a person is already engaging with.

Digvijay Singh, co-founder of the online content safety start-up, Contrails AI, explains, “People are often not really searching for very specific things. They’ll search for and watch, let’s say, a video on a mouth-related ailment. Now the recommendation engine, which is driven by the user’s viewing history and its recency, will place more such videos on the home page and the related videos section.” As they watch more, the process compounds, he says.

There are some safeguards to help users avoid falling into these rabbit holes, Singh says. “YouTube in particular will prompt users with mental health helpline information if they’re watching a lot of videos on suicide and depression.”

Sprinklr, a company that provides enterprise solutions, describes the social media algorithm as “complex rule sets powered by machine learning to decide what content appears in your feed”.

It talks of how these work. “Every social platform aims to deliver the most relevant content at the right time and place. To do this, they use algorithms powered by user actions: likes, follows, comments, and more. The more relevant the content, the higher the engagement, which creates a fresh tranche of data to fuel the next round of recommendations. And this cycle goes on and on.”
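The feedback loop described above – user actions build a profile, the profile drives recommendations, recommendations generate more actions – can be sketched in a deliberately simplified way. Everything below, including the function and parameter names, is illustrative; no real platform’s system is this simple:

```python
# A hypothetical sketch of recency-weighted, engagement-based
# recommendation. All names are illustrative assumptions.

def recommend(history, catalog, decay=0.5, top_k=3):
    """Rank catalog videos by topic overlap with recent watch history.

    history: list of topic strings, most recent last.
    catalog: dict mapping video id -> set of topic strings.
    decay:   per-step weight multiplier; older views count less.
    """
    # Build a recency-weighted topic profile: the last thing watched
    # gets weight 1.0, the one before it 0.5, and so on.
    weights = {}
    w = 1.0
    for topic in reversed(history):
        weights[topic] = weights.get(topic, 0.0) + w
        w *= decay

    # Score each candidate by the summed weight of its matching
    # topics, then return the top-scoring video ids.
    scores = {
        vid: sum(weights.get(t, 0.0) for t in topics)
        for vid, topics in catalog.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Feed this a viewing history dominated by mouth-ulcer videos and it keeps surfacing mouth-ulcer content; each recommended view then strengthens the profile further, which is the compounding effect Singh describes.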

From the “chronological feeds” of the pre-2015 era, social media moved to “engagement-based sorting” between 2016 and 2020. Then came “AI-powered feeds”, with 2025 seeing “real-time personalisation” that “adjusts as you scroll”.

This means that even if a person pauses for a moment over a video, that pause is recorded, and millions of such signals are used to “predict what content you’ll engage with”.

With an algorithmic push, misleading social media content is far more successful than its truthful competition. Researchers at Chennai’s Sathyabama Dental College and Hospital wrote in the Journal of Pharmacy and Bioallied Sciences in 2024 that “misleading information had more positive engagement metric than the useful information”, and that “there was a large amount” of oral health misinformation on YouTube that came with a simple search.

Credentials also seem to matter very little. “About 75% of the videos containing misleading information were created by non-professionals and only about 15% of the videos containing misleading information were created by medical professionals,” the research paper stated.

Black box information

Cyberchondriacs feed on both badly contextualised information and medical misinformation. Hansika Kapoor, a psychologist and researcher at the research firm Monk Prayogshala, says that at its root, medical misinformation is a function of trusting authority, but that this comes with distortions in India. “We live in a nation that is highly susceptible to influence by authority, and authority is whatever you perceive as being authority,” Kapoor says in a phone interview from Mumbai.

Conspiratorial thinking, Kapoor says, “offers people a way to make meaning, it offers them some kind of comfort, and a sense-making ability for an absurd thing that has happened to them, which is highly improbable, but is possible.”

Medicine is one of those fields that can feel like a “black box” to a large part of the population – the slide into rabbit holes is therefore primed. All a cyberchondriac trying to make sense of an absurdity has to do is show up, and the rabbit hole sucks them in.

Kapoor calls structures like governments and science “black box institutions”. “You don’t really understand how or why they operate. This fuels greater conspiratorial thinking.”

This makes people more susceptible to oversimplified information online. Medical misinformation research calls this “bullshit susceptibility”, she says.

Big tech trouble

Social media platforms have policies against health misinformation in force. Meta, for instance, says it prohibits “[p]romoting or advocating for harmful miracle cures for health issues”, and posts can be taken down if they are “likely to directly contribute to the risk of imminent physical harm”. Cyberchondria is not addressed.

YouTube prohibits content “that contradicts health authority guidance on treatments for specific health conditions” and frequently shows pop-ups for medical misinformation videos. Neither Google, which owns YouTube, nor Meta, which owns Instagram and Facebook, responded to questions from The Hindu.

Big Tech firms are not unaware that accurate medical information needs to be provided. As far back as 2018, Google signed a partnership with Apollo Hospitals to surface reliable, doctor-written information when users search for symptoms in India. But cyberchondriacs go beyond the first result, into territory where misinformation can crowd out credible sources.

“At a time when, according to a recent survey, 33% of Gen Z turned to TikTok before their doctors for information about health, one must question where this is going to take us,” Aparna Sridhar, a clinical professor at UCLA Health, wrote on its website in 2023.

“Cyberchondria is very real. As professional healthcare providers, we must understand its implications, both for our patients and our practices, and be prepared to address cyberchondria as a part of our educational toolkit for the future.”

mohammed.iqbal@thehindu.co.in

aroon.deep@thehindu.co.in

(If you’re in distress, do reach out to these helplines: Aasra 022-27546669 and TeleMANAS 1-8008914416.)
