With personal healthcare data so freely used, stored, and shared across digital systems and AI chatbots, the safety of such data is an ever-present concern. A recent study by a cybersecurity firm has revealed that regulated data, including patient records and medical information, is especially at risk, accounting for 89% of all data policy violations involving generative AI usage, significantly higher than the cross-industry average of 31%.
Researchers at Netskope Threat Labs, who monitor key cyber threats facing healthcare organisations and their employees, released their annual healthcare report on Tuesday, covering the last 13 months, and it raised some uncomfortable findings. The report, based on data collected between December 1, 2024, and December 31, 2025, with prior authorisation, indicated that the deployment and usage of internal AI tools, which require stringent security guardrails, is already accelerating, and flagged the attendant risks.
With healthcare staff adopting and using GenAI more frequently than ever before, the risk of leaking sensitive patient data through prompts and documents shared online is very high. What makes the scenario worse is the use of personal GenAI accounts to look up information.
Why is this important to curb? Nearly 43% of healthcare workers still use personal accounts at work, making it impossible for security systems to detect leaks, the report claims, adding that healthcare institutions are trying to modify behaviour by getting employees to use approved proprietary software. As a result, the proportion of employees using GenAI applications managed by the organisation has also increased during the same period, outpacing the equivalent trend across industries.
Protective steps
In healthcare, the report claimed, almost two in three organisations are detecting API (application programming interface) traffic to OpenAI and AssemblyAI (63% and 62% respectively), and more than a third (36%) to Anthropic. Over the past year, more than half of healthcare organisations (56%) that deployed such policies blocked users from uploading files to personal Google Drive accounts, illustrating the frequency of potential data exposure through popular personal cloud applications. Google Drive was followed by Gmail (39%) and OneDrive (30%). This is significant because attackers also continue to exploit the inherent trust employees place in cloud applications, and in the files they might find in them. In healthcare, researchers identified several platforms that are frequently exploited by attackers for malware distribution.
Ray Canzanese, director, Netskope Threat Labs, said: “While building defences against external threats is essential for healthcare organisations that have historically been prime targets for cybercriminals, addressing internal risk is equally important, especially in such a highly regulated industry and in the context of fast-paced cloud and AI adoption.” He added that the deployment of company-approved applications, along with security tools that offer full visibility and control over usage and data movement, should be a priority for healthcare organisations.
Published – March 03, 2026 09:46 pm IST