Friday, February 27, 2026

Taming Healthcare’s Wild West: A Governance-First Approach to AI

The following is a guest article by Alex Tyrrell, Head of Advanced Technology and CTO, Health at Wolters Kluwer

Generative artificial intelligence (GenAI) has the potential to influence several aspects of care, from clinical assessment and diagnosis to patient communication and operational functions. By easing administrative load, advancing clinical accuracy, and helping fill talent and resource gaps, GenAI can meaningfully improve care quality, patient satisfaction, and clinician well-being, among other benefits.

Yet, today’s environment often feels like the healthcare industry is navigating the Wild West of large language models (LLMs). The pace of adoption is quickly outpacing the guardrails needed to manage it responsibly, a trend underscored by an alarming lack of oversight regarding GenAI use in healthcare organizations. According to a 2025 survey of healthcare professionals, only 18% of respondents were aware of formal organizational policies governing GenAI use, and only 20% were required to take structured training on authorized GenAI use.

Unmonitored and unauthorized GenAI use in healthcare not only thwarts true GenAI advancement but also compromises patient safety and organizational longevity, a direct contradiction of healthcare’s “do no harm” objective. To ensure a safer future, GenAI use must be governed by an intentional, patient-first approach.

Unmonitored GenAI and Its Risks to Patient Health Information (PHI)

Cybersecurity and IT teams at healthcare organizations can only effectively monitor the AI software that they are aware of. However, this is a task that has only become more challenging with the rise of shadow AI, the use of unauthorized AI tools by clinicians and healthcare staff. 

Shadow AI often emerges in response to operational strains like chronic understaffing, complex clinical needs, and high patient volumes that make it challenging for healthcare professionals to meet the demands of their role through human efforts alone. These underlying gaps don’t just drive workaround behavior; they open the door to a series of risks that healthcare leaders can’t afford to ignore: 

  • Reidentification: AI models may be initially trained on deidentified patient data, but key patient information can still be inferred through carefully crafted prompts; patients in rare disease groups, whose unusual combinations of attributes can single them out, are a particularly at-risk population
  • Security Breaches: Data security challenges remain prevalent in healthcare, and the introduction of AI software may expand an organization’s attack surface
  • HIPAA Violations: General-purpose GenAI models are developed by commercial entities that are not specialized in healthcare and thus are not governed by the same privacy principles; healthcare organizations should know exactly how and for what purpose PHI is used when they adopt third-party AI platforms, but shadow AI bypasses this safeguard
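The reidentification risk above can be made concrete with a small, fabricated example. The records, field names, and values below are invented purely for illustration (a real analysis would use formal measures such as k-anonymity), but they show why seemingly deidentified data can still point to one person:

```python
# Illustrative sketch only: why "deidentified" records can still be
# re-identified. All records and field names here are fabricated.
records = [
    {"zip3": "021", "age_band": "60-69", "diagnosis": "type 2 diabetes"},
    {"zip3": "021", "age_band": "60-69", "diagnosis": "hypertension"},
    {"zip3": "021", "age_band": "30-39", "diagnosis": "rare disease X"},
]

def matching(records, **quasi_identifiers):
    """Return the records that match every supplied quasi-identifier."""
    return [r for r in records
            if all(r.get(k) == v for k, v in quasi_identifiers.items())]

# A common combination of quasi-identifiers still leaves ambiguity...
print(len(matching(records, zip3="021", age_band="60-69")))  # 2 candidates

# ...but a rare combination isolates a single individual, which is why
# rare-disease cohorts are an especially at-risk population.
print(len(matching(records, zip3="021", age_band="30-39")))  # 1 candidate
```

The same narrowing can be driven from the outside by carefully crafted prompts against a model trained on such data, which is what makes the risk hard to see from inside any one system.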

Shadow AI is a key indicator that an organization’s authorized technology stack is not meeting the real needs of professionals at the frontlines of care. Mitigating these risks requires stronger alignment between leadership, staff, and clinicians.

Internal Governance Gaps Inhibit AI Oversight and Deepen Patient Privacy Risks

GenAI is rapidly advancing, and current policies that govern patient data privacy may not effectively address new AI use cases. Federal AI regulatory guidance, such as the HTI-1 Final Rule, offers a starting point for more in-depth policies, but key AI applications fall outside formal regulatory oversight.

Several states are also beginning to introduce their own frameworks, such as California’s Transparency in Frontier AI Act, which emphasizes risk disclosure, transparency, and mitigation, and the Colorado Artificial Intelligence Act (CAIA), which is designed to prevent algorithmic discrimination.

Each healthcare organization also faces unique operational circumstances related to its patient population, services offered, and status as a public or private entity. When these realities meet the rapidly evolving pace of AI, they can expose several gaps:

  • Compliance vs. Innovation Tension: Healthcare organizations are facing increasing pressure to offer competitive, customer-centered services; this pressure may push organizations to pursue AI innovation without proper oversight or take shortcuts to bring solutions to market faster
  • Fragmented Accountability: Organizational leaders are at the forefront of AI policies, but they are often not the individuals leveraging these tools; governance responsibilities should be shared across clinical, operational, compliance, and IT leadership to ensure policies accurately reflect workplace challenges and considerations
  • Workforce Training and Development: As underscored by the finding that only 20% of healthcare professionals are required to take structured training on authorized GenAI use, limited training contributes to lower levels of AI literacy; this means that even well-intentioned clinicians and staff may make critical errors
  • Data Transparency: Many third-party AI solutions may lack transparency about how data is used, stored, and shared

Best Practices: Devising a Robust AI Governance Framework

Building a responsible foundation for AI in healthcare starts with a governance framework that protects patients, guides clinicians, and evolves along with the technology. When developing that framework, organizations should consider the following elements:

  • Data Standards: Ensure that training data is representative of patient populations, sourced via authorized methods, and deidentified
  • Ethical Use: State clear use cases for how AI should be used to influence patient care and prohibit uses that violate ethical standards, such as using AI to deny care or prioritize efficiency over quality
  • Vendor Transparency: Healthcare organizations must work together with preferred vendors to ensure AI solutions are aligned with ethical and data standards; AI decision-making processes should be transparent at both the developer and user level
  • Continuous Review and Feedback: Establish ongoing communication channels for both leadership and staff to provide feedback on current AI tools, including pain points and emerging risks
  • Establish “Trusted Zones”: Create designated environments where staff can safely experiment or interact with AI tools that are pre-vetted, compliant, and secure for specific workflows; this can help mitigate the exposure of PHI to untrusted platforms
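As a rough illustration of the “trusted zones” idea, the sketch below shows one way a pre-prompt guardrail might redact obvious identifier patterns before text leaves a vetted environment. The patterns, placeholder format, and function name are assumptions made for this example, not a complete PHI detector; production systems rely on vetted deidentification tooling and policy review:

```python
import re

# Illustrative sketch only: a minimal pre-prompt guardrail of the kind a
# "trusted zone" gateway might apply before text reaches an external LLM.
# These simplified patterns are examples, not a complete PHI detector.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact_phi(prompt):
    """Replace likely PHI with placeholders; return the redacted text and
    the list of pattern names that matched (useful for audit logging)."""
    findings = []
    for name, pattern in PHI_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[{name.upper()} REDACTED]", prompt)
    return prompt, findings

# Example: the prompt is sanitized before it leaves the trusted zone,
# and the audit trail records which identifier types were caught.
clean, flags = redact_phi("Summarize the chart for MRN: 84231907, DOB 03/14/1962.")
```

Pairing redaction with audit logging, as sketched here, also gives governance teams visibility into which workflows keep trying to send identifiers out, pointing back at the unmet needs that drive shadow AI in the first place.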

Embracing AI in Healthcare Responsibly

AI governance in healthcare must go beyond broad-scale regulation. These guardrails must include robust protocols across patient privacy, data security, and clinical ethics. Establishing governance frameworks is a critical imperative as AI solutions become more deeply integrated with electronic health records (EHRs) and more influential in patient care decisions.

A collaborative approach between IT, compliance, and clinician leadership teams offers a stronger foundation for AI governance compared to siloed decision-making. When strong governance is prioritized, healthcare organizations can experience transformed efficiency, cost savings, and care outcomes, without undermining patient safety. And while it may feel like uncharted territory, or the Wild West, a clearer path emerges as organizations put these foundations in place.

About Alex Tyrrell

Alex Tyrrell, PhD, serves as Head of Advanced Technology at Wolters Kluwer and Chief Technology Officer for Wolters Kluwer Health. He oversees the Wolters Kluwer AI Center of Excellence, which is focused on accelerating innovation across all Wolters Kluwer divisions in the areas of GenAI, agentic AI, machine learning, and data analytics.


