Trigger warning: This article contains references to sexual abuse and suicide. Please use your discretion in deciding if, when, and where to read.
On the day Sanju Devi, 30, allegedly murdered her two children, a girl and a boy aged 10 and 7, in Rajasthan's Bhilwara district, she called her father-in-law, Prabhu Lal. "She told my father that she had cancer, for which there was no cure. She said she had killed our children because no one would be able to take care of them after her death," says Rajkumar Teli, 32, Sanju's husband.
Sanju then allegedly attempted to kill herself. Teli's father phoned him; as Teli was out, the father called the neighbours, who managed to get into the house, which was locked from within. They rushed Sanju to the Community Health Centre in Mandalgarh, 16 km away. She was later referred to Mahatma Gandhi Government Hospital in Bhilwara, where she remained under medical supervision until January 16. She was arrested on being discharged and booked for murder under Section 103(1) of the Bharatiya Nyaya Sanhita, based on a complaint lodged by Lal, who is in his 50s.
Teli, a tent-house owner in Manpura village, says his wife was deeply attached to the children. "I still cannot believe she could do this," he says.
In the weeks leading up to January 11, Sanju was worried. She had mouth ulcers and abdominal pain. Teli says they were preparing to visit a specialist in Ahmedabad for consultation after treatment in Bhilwara failed.
He recalls that Sanju would be on her phone when she had a minute to herself and would go to sleep watching content on the device.
Later, Sanju told the police that she had watched online videos claiming that long-standing ulcers could cause cancer. Her mind took her down a medical misinformation rabbit hole. The police say she developed an intense fear of death, triggered by her health issues.
"The investigation revealed that Sanju Devi was regularly watching reels on Instagram about the correlation of mouth ulcers with cancer and malignancy," says Mandalgarh Deputy Superintendent of Police B.L. Vishnoi.
Now, she is in "severe mental distress", he says. "Her medical examination did not show any signs of cancer. Our probe so far has not found any indication of a family feud," Vishnoi adds. He says he has not seen or heard of a case where a person takes such an extreme step due to health misinformation.
Manpura sarpanch Chanda Devi says there were no complaints against Lal's family in the village, which has a population of about 5,000. Neighbours in the Balaji Ka Chowk locality were left stunned by the crime. One neighbour, Kamla Devi, says Sanju spent a great deal of time with her children, feeding them, playing with them, and getting them ready for school.
Another neighbour, Sita Devi, wishes Sanju had spoken to them about her fears. "I used to meet her and talk to her almost every day, but I didn't get a hint about her mental distress."
As India hits one billion Internet subscriptions and access to health information proliferates through social media, algorithms feed health anxieties. If the 2020s are the age of fake news, medical misinformation is a large part of it. Influencers on social media often make health claims not grounded in current scientific consensus. These claims are amplified by algorithms designed to cater to anxieties and fears.
What hypochondria, or illness-anxiety disorder, was to the pre-digital era, cyberchondria is to the information age of this millennium.
A peer-reviewed research analysis in the International Journal of Indian Psychology describes cyberchondria as "an excessive, anxiety-driven online health search" that has emerged as a "significant mental condition in the digital age".
Doctor-patient disconnect
Googling symptoms has been a problem almost since the inception of search engines in the late 1990s. Twenty years ago, though, people went in search of information. What has changed with social media and its recommendation engines is that information now finds its way to users. Now, people run the risk of large language models mirroring their fears and confirming their anxieties by throwing up convincing diagnoses.
Dr. Siddharth Sahai, a Delhi-based oncologist who has been practising medicine for nearly two decades, says that because many symptoms are associated with cancer, search results regularly point users to it as an explanation. "This causes a lot of anxiety," he adds.
"What people don't understand is that it is difficult to say that the Internet is completely wrong or right. Doctors make detailed assessments based on an examination of the patient and their history." Searches and algorithms cannot do this.
People without medical training spiralling over symptom-disease links is "nothing new", says Chennai-based psychiatrist Dr. Thara Rangaswamy. It predates widespread Internet access, she says. "Even 15 years ago, when there were newspaper articles on particular illnesses like impetigo or hemangiomatosis, some people who read them imagined that they had that particular illness," Dr. Rangaswamy says. "They would pick up the symptoms from those articles and say, 'Oh, maybe I have this.'"
Now, cyberchondriacs are not only worrying about the worst possible outcome but also questioning medication due to the listed side effects. "There is no medicine that does not have side effects, and Google will list some 20 side effects. If it has something to do with sexual performance, for example, people get very, very disturbed. That is the very disturbing factor that many of us as doctors experience," says Dr. Rangaswamy.
Cyberchondriacs are a small slice of patients overall, she says. "A large majority wants reassurance. In fact, they'll tell you, 'It's been so good talking to you, I feel much better.'"
However, not many people are aware of, or have access to, a mental health professional.
Algorithm multipliers
Dr. Sahai also points to distrust of the medical system. For this untrusting slice of patients, social media algorithms can be a force multiplier. Sanju, for instance, had tried to access medical help.
For social media companies, one measure of success is how long a user stays on their platform (yes, the vocabulary companies use is borrowed from the language of addiction). A time-tested way of achieving that is to recommend content similar to what a person is already engaging with.
Digvijay Singh, co-founder of the online content safety start-up Contrails AI, explains, "People are often not really searching for very specific things. They'll search for and watch, let's say, a video on a mouth-related ailment. Now the recommendation engine, which is driven by the user's viewing history and its recency, will place more such videos on the home page and in the related videos section." As they watch more, the process compounds, he says.
There are some safeguards to help users avoid falling into these rabbit holes, Singh says. "YouTube in particular will prompt users with mental health helpline information if they're watching a lot of videos on suicide and depression."
Sprinklr, a company that provides enterprise solutions, describes the social media algorithm as "complex rule sets powered by machine learning to decide what content appears in your feed".
It explains how these work: "Every social platform aims to deliver the most relevant content at the right time and place. To do this, they use algorithms powered by user actions: likes, follows, comments, and more. The more relevant the content, the higher the engagement, which creates a fresh tranche of data to fuel the next round of recommendations. And this cycle goes on and on."
From the "chronological feeds" of the pre-2015 era, social media moved to "engagement-based sorting" between 2016 and 2020. Then came "AI-powered feeds", with 2025 seeing "real-time personalisation" that "adjusts as you scroll".
This means that even if a person pauses for a moment over a video, that pause will be recorded and used, along with millions of other signals, to "predict what content you'll engage with".
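The feedback loop Singh and Sprinklr describe can be sketched in a few lines of code. This is a toy illustration only, not any platform's actual ranker: the catalogue, the topics, and the single "topic match" signal are all invented for the example, while real systems weigh thousands of proprietary signals.

```python
# Toy sketch of an engagement-driven recommendation loop. All data here is
# invented for illustration; real rankers combine thousands of signals.
from collections import Counter

def rank_feed(catalog, history):
    """Rank unwatched videos by how often their topic appears in watch history."""
    topic_weight = Counter(v["topic"] for v in history)
    watched = {v["id"] for v in history}
    candidates = [v for v in catalog if v["id"] not in watched]
    # The more a user has engaged with a topic, the higher it ranks.
    return sorted(candidates, key=lambda v: topic_weight[v["topic"]], reverse=True)

catalog = [
    {"id": 1, "topic": "cooking"},
    {"id": 2, "topic": "mouth-ulcer"},
    {"id": 3, "topic": "mouth-ulcer"},
    {"id": 4, "topic": "travel"},
    {"id": 5, "topic": "mouth-ulcer"},
]

# The user has watched a single video on mouth ulcers...
history = [{"id": 2, "topic": "mouth-ulcer"}]

# ...and then simply watches whatever sits at the top of the feed.
for _ in range(2):
    history.append(rank_feed(catalog, history)[0])

print([v["topic"] for v in history])
# prints ['mouth-ulcer', 'mouth-ulcer', 'mouth-ulcer']
```

Each watch raises the topic's weight, so the feed keeps serving more of the same: the compounding "rabbit hole" effect, in miniature.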
With an algorithmic push, misinformation on social media is far more successful than its truthful competition. Researchers at Chennai's Sathyabama Dental College and Hospital wrote in the Journal of Pharmacy and Bioallied Sciences in 2024 that "misleading information had more positive engagement metric than the useful information", and that "there was a large amount" of oral health misinformation on YouTube surfaced by a simple search.
Credentials also seem to matter very little. "About 75% of the videos containing misleading information were created by non-professionals and only about 15% of the videos containing misleading information were created by medical professionals," the research paper stated.
Black box information
Cyberchondriacs feed on both badly contextualised information and medical misinformation. Hansika Kapoor, a psychologist and researcher at the research firm Monk Prayogshala, says that at its root, medical misinformation is a function of trusting authority, but that this comes with distortions in India. "We live in a nation that is highly susceptible to influence by authority, and authority is whatever you perceive as being authority," Kapoor says in a phone interview from Mumbai.
Conspiratorial thinking, Kapoor says, "offers people a way to make meaning, it offers them some kind of comfort, and a sense-making ability for an absurd thing that has happened to them, which is highly improbable, but is possible."
Medicine is one of those fields that can feel like a "black box" to a large part of the population; the slide into rabbit holes is therefore primed. All a cyberchondriac trying to make sense of an absurdity has to do is look in, and the rabbit hole sucks them in.
Kapoor describes structures like governments and science as "black box institutions". "You don't really understand how or why they operate. This fuels greater conspiratorial thinking."
This makes people more susceptible to oversimplified information online. Medical misinformation research calls this "bullshit susceptibility", she says.
Big tech trouble
Social media platforms have policies against health misinformation in force. Meta, for instance, says it prohibits "[p]romoting or advocating for harmful miracle cures for health issues", and posts can be taken down if they are "likely to directly contribute to the risk of imminent physical harm". Cyberchondria is not addressed.
YouTube prohibits content "that contradicts health authority guidance on treatments for specific health conditions" and frequently shows pop-ups for medical misinformation videos. Neither Google, which owns YouTube, nor Meta, which owns Instagram and Facebook, responded to questions from The Hindu.
It's not as if Big Tech firms are unaware that accurate medical information needs to be provided. Indeed, as far back as 2018, Google signed a partnership with Apollo Hospitals to surface reliable, doctor-written information when users search for symptoms in India. But cyberchondriacs go beyond the first result, where credible sources can get crowded out.
"At a time when, according to a recent survey, 33% of Gen Z turned to TikTok before their doctors for information about health, one must question where this is going to take us," Aparna Sridhar, a clinical professor at UCLA Health, wrote on its website in 2023.
"Cyberchondria is very real. As professional healthcare providers, we must understand its implications, both for our patients and our practices, and be prepared to address cyberchondria as a part of our educational toolkit for the future."
mohammed.iqbal@thehindu.co.in
aroon.deep@thehindu.co.in
(If you're in distress, do reach out to these helplines: Aasra 022-27546669 and TeleMANAS 1-8008914416.)