Threat Intelligence

The 2026 Healthcare Threat Landscape: AI Poisoning and the Rise of State-Backed RaaS

Healthcare organizations face a convergent risk environment defined by state-backed ransomware adoption and novel AI safety vulnerabilities. Lazarus Group and affiliates are weaponizing Medusa RaaS, while adversaries exploit memory poisoning in clinical AI systems.

TLT
Threat Landscape Team
2026-02-278 min read

As we move through the first quarter of 2026, the healthcare industry faces a "convergent risk" environment. The era of episodic ransomware has evolved into a landscape defined by persistent, state-backed extortion and the emergence of safety-critical vulnerabilities within clinical AI systems. Our analysis of recent events from early 2026 reveals a tactical shift among North Korean (DPRK) actors and a novel class of threats targeting the very intelligence hospitals now rely on for patient care.

The DPRK Pivot: Medusa RaaS and the Industrialization of Extortion

A significant trend in early 2026 is the adoption of the Medusa Ransomware-as-a-Service (RaaS) by state-linked North Korean clusters, specifically Lazarus Group, Stonefly (Andariel), and subgroups like Spearwing and Pompilus (Diamond Sleet).

Historically known for espionage and high-value financial heists, these actors are now leveraging the "deniability" of the RaaS model to target U.S. healthcare organizations, mental health facilities, and specialized educational centers. By combining custom nation-state backdoors like Comebacker and Blindingcan with commodity ransomware, they have created a modular supply chain for cybercrime.

  • Key Insight: Ransom demands have reached as high as $15 million, though the average sits closer to $260,000, indicating a high-volume, "middle-market" extortion strategy.

The Silent Sabotage: Memory Poisoning in Clinical AI

Perhaps the most concerning development in 2026 is the identification of novel failure modes in healthcare AI agents. As hospitals integrate autonomous agents for patient interaction and clinical decision support, adversaries—and even well-intentioned but ambiguous inputs—are leading to Persistent Memory Poisoning.

Unlike traditional malware that crashes a system, these attacks use authorized-tool chaining and context manipulation to create persistent clinical errors. By poisoning the "long-term memory" of an AI agent, an attacker can induce delayed harm or facilitate data exfiltration without ever triggering traditional endpoint detection rules.

Technical Threat Correlation

The following table highlights the primary actors and the specialized toolkits they are deploying against healthcare targets in 2026:

Threat Actor & Malware Mapping

Threat ActorAssociated Malware/ToolsTargeted SectorPrimary Motivation
Lazarus GroupMedusa, Comebacker, Blindingcan, MimikatzU.S. Healthcare, NGOsFinancial / Espionage
Stonefly (Andariel)Medusa, Custom BackdoorsDefense, Tech, HealthData Theft / Extortion
Spearwing / PompilusMedusa, ChromeStealer, RP_ProxyMental Health, Non-ProfitsFinancial Gain
UAT-8837AI Prompt Injectors, Zero-DaysCritical InfrastructurePersistence / Disruption

MITRE ATT&CK Behavioral Mapping

Technique IDTechnique NameContext in Healthcare (2026)
T1566PhishingPrimary entry for credential theft and RaaS deployment.
T1059.001PowerShellUsed for executing custom loaders and backdoors like Comebacker.
T1647Adversarial AIMemory poisoning and tool-chaining in clinical AI agents.
T1486Data Encrypted for ImpactMedusa RaaS deployments targeting patient record systems.
T1555Credentials from Password StoresUse of ChromeStealer to harvest clinical portal credentials.

Critical Vulnerabilities (2026 Focus)

CVE IDDescriptionThreat Association
CVE-2025-10035Critical vulnerability exploited by Medusa for initial access.Medusa RaaS Affiliates
CVE-2025-61882Remote Code Execution (RCE) vulnerability used in early 2026 campaigns.APT Clusters

Analyst Perspective: The Path to Resilience

The 2026 threat landscape proves that traditional perimeter defense is no longer sufficient for healthcare. The integration of AI has opened a "safety-privacy" gap that requires runtime guardrails and PHI detection specifically for agentic workflows.

Operational Recommendations:

  1. AI Governance: Implement memory validation and "crisis escalation" protocols for all patient-facing AI agents to prevent poisoning.
  2. Ransomware Mitigation: Monitor for indicators of RP_Proxy and Blindingcan, which often precede a full-scale Medusa deployment.
  3. Patch Management: Prioritize remediation of CVE-2025-10035, as it has become a staple for RaaS affiliates targeting medical infrastructure.

The information in this report is grounded in events published through February 2026. For real-time updates on these indicators, consult the THREATLANDSCAPE.AI platform.

Ready to Transform Your Threat Intelligence?

See how Threat Landscape can reduce alert fatigue and improve your security operations