The following is a guest article by Matt Murren, Co-Founder and CEO at True North ITG, Inc.
The AI revolution is already happening inside healthcare organizations—whether they’ve planned for it or not. As of 2024, 29% of healthcare organizations were already using generative AI tools, and that number is expected to rise.
But for many of these organizations, that AI usage might be limited to transcribing Teams meetings or using Copilot to draft an email. For multispecialty and ambulatory healthcare groups looking to embrace AI in a more comprehensive way, where should they start? If they want to capture the efficiencies of AI with less risk than deploying it directly in clinical workflows, the answer lies in data retrieval. And for patient safety, cybersecurity, and sustainable growth, custom language models offer the most secure and effective way to do it.
The Danger of Public Models
In many organizations, employees across divisions are already using large language models like ChatGPT to speed up their workflow. A 2024 report from Microsoft and LinkedIn found that 78% of AI users reported using their own AI tools at work.
Employees acting in good faith may enter sensitive information into public AI models, unaware of the regulatory risk. The hazard is that these public models are neither secure nor HIPAA-compliant. And when a healthcare group suffers a data breach, the consequences are more dire than for other organizations because sensitive patient data is compromised. Along with paying hefty fines, organizations are required to notify everyone affected by the breach and face reputational damage.
The threat of an AI breach may seem like an abstract concern, but it is a surprisingly routine occurrence. In 2024, 77% of businesses experienced an AI-related security breach. Cybercriminals are constantly developing new methods to exploit security gaps, and public AI models are a breeding ground for those opportunities. One reason is their expanded attack surface. As the number of people using an AI model increases, so does the number of potential entry points for attackers. There is also the “black box” problem. Public AI models are massive and complex, making it difficult for IT departments to track data flow and identify vulnerabilities. Security breaches happen before anyone can detect a threat.
The Risks of Public AI in Healthcare
- Not HIPAA-compliant or secure
- Employees often input sensitive data unknowingly
- 77% of businesses experienced AI-related breaches in 2024
- Complex “black box” models make threat detection difficult
- Regulatory violations lead to costly fines and reputational damage
Islands of AI
Another problem with public models is that they are not interoperable across an organization. Healthcare groups often have “islands of AI”: a patchwork of disparate tools improvised by different departments. Employees may be entering data in different places, creating redundancies and feeding a system with no way to perform a unified search across the whole organization. This lack of a centralized structure hinders an organization’s ability to conduct analytics efficiently and safely.
Aside from the huge efficiency losses, using public AI models also compromises analytics. Generative AI is only as strong as the information fed into it, and it requires the right training and monitoring to work correctly. According to a 2024 Deloitte survey, 78% of organizations that relied on public AI systems reported making incorrect business decisions due to inaccurate or irrelevant outputs. In some cases, that inaccurate information led to security breaches.
To overcome these barriers, organizations must build their data foundation on a single, centralized model. The solution? A HIPAA-informed, security-first large language model custom-built for each organization. For patient safety, cybersecurity, and efficiency, going private is the only option.
Streamlining Workflows Through Data Aggregation
Using a custom data aggregation and retrieval model empowers healthcare organizations to securely centralize their information and make it easily searchable. Think of it as building your own private Google—a private, secure tool tailored to your systems, data, and workflows.
Many clinics still rely on static Word or Excel documents that accumulate over time and lack even basic search functionality. Even digitally accessible documents—such as PDFs buried in intranet pages, files stored in SharePoint folders, or files sitting on departmental file servers—are often siloed, fragmented, and difficult to access. These disconnected systems require manual navigation and searching, creating daily friction for staff.
Consider a common Human Resources example: an employee spends 10 minutes navigating a file server to find a policy in the employee handbook—like how to submit PTO or access a benefit. With aggregated data and a custom-trained private language model, that same employee could get a direct answer in under 30 seconds. Multiply that small gain across hundreds of staff interactions each week, and the value of every saved minute quickly adds up.
Now extend that to other common, repetitive information retrieval tasks—finding procedures, forms, application links, or training materials. These micro-efficiencies accumulate into significant reductions in administrative waste.
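As a concrete illustration of the retrieval step behind that kind of answer, here is a minimal sketch in Python that indexes a handful of hypothetical handbook excerpts and returns the most relevant document for a question. TF-IDF similarity stands in for whatever embedding model a production system would actually use, and in a real deployment the retrieved passage would be handed to the privately hosted language model to phrase the response.

```python
# Minimal sketch: similarity search over internal policy documents.
# Assumes the documents have already been extracted to plain text; in practice
# this index would live behind the organization's own access controls.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical handbook excerpts aggregated from SharePoint, file servers, etc.
documents = {
    "pto_policy.txt": "Submit PTO requests in the HR portal at least two weeks in advance...",
    "benefits_faq.txt": "Benefits enrollment opens each November; contact HR for mid-year changes...",
    "referral_sop.txt": "Referral checks require the payer ID and the ordering provider's NPI...",
}

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents.values())

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Return the filenames of the documents most relevant to the question."""
    query_vec = vectorizer.transform([question])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    names = list(documents.keys())
    return [names[i] for i in ranked]

print(retrieve("How do I submit PTO?"))  # -> ['pto_policy.txt']
```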
One particularly impactful example is an organization that combined data aggregation with a voice AI solution to automate responses to frequent contact center inquiries. This integrated approach has delivered an estimated savings of 5 to 20 full-time employees (FTEs) by offloading high-volume, repetitive tasks like referral checks, appointment confirmations, and insurance verification.
Back-office operations such as billing, referral processing, and authorization requests are also ideal candidates for data aggregation and automation. These tasks typically involve pulling information from multiple systems, verifying accuracy, and manually entering data into downstream platforms. With aggregated data and AI tools trained on the specific workflows of your organization, these time-consuming processes can be dramatically streamlined, freeing staff to focus on higher-value work.
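To illustrate the aggregation step, here is a minimal sketch that merges a referral record from two hypothetical source systems into one normalized shape before any automation acts on it. The system names, field names, and reconciliation rule are illustrative assumptions, not a description of any particular vendor's data model.

```python
# Minimal sketch: aggregating a patient's referral status from two hypothetical
# source systems into one normalized record before automation acts on it.
from dataclasses import dataclass

@dataclass
class ReferralRecord:
    patient_id: str
    payer: str
    authorization_number: str | None
    status: str  # "approved", "pending", or "mismatch"

def from_scheduling_system(row: dict) -> dict:
    # The scheduling export uses its own field names; map them to a common shape.
    return {"patient_id": row["pt_id"], "payer": row["ins_name"]}

def from_billing_system(row: dict) -> dict:
    return {"patient_id": row["patientId"], "auth": row.get("authNo")}

def reconcile(sched: dict, billing: dict) -> ReferralRecord:
    """Merge both sources and flag disagreements instead of guessing."""
    if sched["patient_id"] != billing["patient_id"]:
        return ReferralRecord(sched["patient_id"], sched["payer"], None, "mismatch")
    status = "approved" if billing.get("auth") else "pending"
    return ReferralRecord(sched["patient_id"], sched["payer"], billing.get("auth"), status)

record = reconcile(
    from_scheduling_system({"pt_id": "12345", "ins_name": "Acme Health"}),
    from_billing_system({"patientId": "12345", "authNo": "AUTH-889"}),
)
print(record)  # ReferralRecord(patient_id='12345', payer='Acme Health', ...)
```

The design choice worth noting is that disagreements between systems are flagged rather than silently resolved, which keeps a human in the loop for edge cases while the routine matches flow through automatically.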
The key is the centralization and interoperability of the system: one unified platform rather than islands of AI.
The Transformation Process
Google’s search power has always relied on machine learning to deliver exactly what you need, instantly. So, how can you bring that same power in-house, tuned specifically to your people, your documents, and your systems through a private, secure, always-on assistant?
It starts with an interview of key organizational stakeholders and an assessment of goals. Group IT and operational leaders can map out where the data currently lives and identify workflows and repetitive tasks. From there, a specialized healthcare IT managed service provider (MSP) leads the development of an AI solution road map to thoughtfully collect information, build custom AI solutions, and automate tasks.
The foundational first step is information retrieval. This includes an assessment of data sources and of the most common data inquiry tasks. The next step is identifying manual processes and developing a custom language model to streamline those workflows. Once the data is retrievable and a custom language model is built to access it, a test phase follows to ensure the AI-powered automated workflows are running smoothly.
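The test phase can be made concrete with a small acceptance check: before staff rely on the assistant, verify that the highest-volume questions resolve to the expected source documents. The questions and the lookup_source() stand-in below are hypothetical; in practice the stand-in would call the organization's private retrieval endpoint.

```python
# Minimal sketch of the test phase: confirm that known, high-frequency questions
# resolve to the expected source documents before go-live. The questions and the
# lookup_source() stand-in are hypothetical placeholders.
KNOWN_QUESTIONS = {
    "How do I submit PTO?": "pto_policy.txt",
    "When does benefits enrollment open?": "benefits_faq.txt",
}

def lookup_source(question: str) -> str:
    """Stand-in for the retrieval pipeline; returns the document it would cite."""
    # Replace with a call to the deployed private retrieval service.
    keyword_map = {"pto": "pto_policy.txt", "benefits": "benefits_faq.txt"}
    for keyword, doc in keyword_map.items():
        if keyword in question.lower():
            return doc
    return "unknown"

def run_acceptance_checks() -> None:
    failures = [
        (q, expected, lookup_source(q))
        for q, expected in KNOWN_QUESTIONS.items()
        if lookup_source(q) != expected
    ]
    if failures:
        raise AssertionError(f"{len(failures)} retrieval check(s) failed: {failures}")
    print(f"All {len(KNOWN_QUESTIONS)} retrieval checks passed.")

run_acceptance_checks()
```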
In the beginning, a private AI solution powered by a custom language model will be a new overlay on top of current workflows. But it quickly produces labor savings significant enough that group leaders and healthcare providers find the transition well worth the investment. Organizations may view adopting a custom AI model as a leap from having no AI system to having a comprehensive one. In truth, their teams may already be using AI, whether sanctioned by IT or not, and as that unsanctioned use grows, so do the risk and potential losses. Organizations that have adopted private, custom AI models have seen an 82% decrease in information security incidents and a 64% improvement in response accuracy.
The Benefits of a Private, Custom AI Model
- HIPAA-informed and security-first design
- Centralized, searchable data architecture
- Eliminates redundancies across departments
- 82% reduction in information security incidents
- 64% increase in response accuracy
Looking to the Future
Custom data retrieval models will soon become the norm. They already play an integral role at the nation’s leading healthcare organizations, and smaller groups and practices can compete if they are willing to make the investment. The earliest adopters will have the biggest advantage and will be the first to see the time and cost savings in their organizations.
The potential applications for AI in healthcare are endless. It may be overwhelming to keep pace with the rapid evolution of AI, but data retrieval with custom language models should be the foundational step for healthcare organizations beginning their AI journeys.