Friday, April 17, 2026

< + > Why BMJ Group is Embedding 200 Years of Evidence Directly into Clinical Workflows

Clinicians are drowning in data but starved for actual answers. What if we stopped making them hunt for evidence? The era of standalone clinical reference portals is fading. Bringing verified knowledge directly into the daily clinical workflow is the only path forward.

Healthcare IT Today sat down with Derrick Leung, Business Development Director at BMJ Group, to discuss the challenge of delivering relevant clinical knowledge without adding friction to a provider’s day. The conversation highlighted how the organization is rethinking the delivery of medical evidence.

What This Conversation Revealed

  • Integration beats standalone applications. Clinicians do not have time to switch screens to search for medical evidence. Delivering information directly into their existing workflow is now a necessity.
  • Human curation acts as an AI guardrail. Large language models are prone to hallucination. Grounding AI tools with deeply-vetted, expert-curated content provides a necessary safety net for clinical decision-making.
  • Relationships perform better than rigid rules. Patients present with complex symptoms rather than neat data points. Using a knowledge graph to map these relationships creates adaptable and accurate clinical decision support.

Stop Forcing Clinicians to Search for Clinical Information

The BMJ (formerly British Medical Journal), the renowened peer-reviewed medical journal by BMJ Group has a nearly 200-year history of publishing medical evidence. Now, they are moving that knowledge base directly into the clinical workflow via an API. The goal is to eliminate the friction of forcing doctors to open yet another application.

Leung pointed out the value of this approach, “Instead of a clinician going to a different application to search for evidence, which they can still do, they now have that evidence served up within the workflow. Now they aren’t disrupted from their work. That’s powerful.”

By embedding knowledge where physicians already are, organizations can reduce cognitive load.

Human Curation as an AI Guardrail

Technology companies are racing to deploy large language models in healthcare. Yet these models desperately need reliable anchors to ensure patient safety. BMJ is positioning its massive library of evidence as a foundational layer to keep AI-driven clinical tools accurate and relevant.

Leung sees this human curated content base as an important hedge against hallucinations which are inherent to AI tools. “We’re still doing human curated content so that BMJ Group can act as a guardrail for AI technologies.”

Relationships Over Rigid Rules

Traditional clinical decision support systems are often built on rigid rules. But patients present with messy symptoms, not neat, binary data points.

Recognizing this reality, BMJ Group is using a dynamic knowledge graph. This architecture allows applications to organically map a patient’s real-world presentation to the right clinical evidence.

Leung described the architectural this way: “Our knowledge graph is not like traditional clinical decision support which is rules-based. Our knowledge graph is based on relationships.”

Knowledge graphs are a structured, graph-based representation of entities and their relationships. This makes them stronger at storing complex, connected facts whereas rule-based systems are better at arriving at a defined answer based on if-then logic. By using knowledge graphs, BMJ Group allows for a more flexible and realistic approach to diagnostic support.

The Bottom Line

Delivering clinical evidence is no longer just about publishing accurate information. It is about making that evidence useful. It needs to be in the right format, presented in the right context, and be as easily accessible as possible (aka zero disruption to clinicians). By embedding knowledge directly into workflows, grounding AI with human curation, and mapping relationships instead of rules, BMJ Group is positioning itself as the go-to partner for clinical evidence in the age of AI in healthcare.

What Healthcare IT Leaders Are Asking

How can health systems deliver clinical evidence without disrupting workflows?
Rather than forcing clinicians to log into separate reference portals, organizations are integrating evidence directly into the electronic health record via APIs. This embedded approach surfaces relevant medical data precisely when the clinician needs it, reducing cognitive load and saving valuable time.

What is the best way to prevent hallucinations in clinical AI tools?
AI models must be anchored to verified, human-curated medical content. By using an established, peer-reviewed knowledge base as a strict guardrail, technology vendors can prevent their tools from generating unsafe or inaccurate clinical recommendations.

Why are knowledge graphs replacing rule-based clinical decision support?
Rule-based systems rely on strict if-then logic, which struggles with the complex, overlapping symptoms of real patients. Knowledge graphs map the dynamic relationships between symptoms, diseases, and treatments to provide a more flexible and accurate diagnostic pathway.

Learn more about BMJ Group at https://bmjgroup.com/

Listen and subscribe to the Healthcare IT Today Interviews Podcast to hear all the latest insights from experts in healthcare IT.

And for an exclusive look at our top stories, subscribe to our newsletter and YouTube.

Tell us what you think. Contact us here or on Twitter at @hcitoday. And if you’re interested in advertising with us, check out our various advertising packages and request our Media Kit.



< + > When Healthcare AI Assistants Go Wrong: The Cyber Risk Leaders Are Overlooking

The following is a guest article by Dror Zelber, VP of Product Marketing at Radware

Healthcare organizations are rapidly deploying AI virtual assistants to help patients schedule appointments, understand medical information, and prepare for a visit. This is incredibly helpful for hospitals facing staff shortages and overloaded call centers, as these tools help provide faster service and a better patient experience.

At the same time, the Large Language Models (LLMs) being used by the AI assistants introduce new risks, especially if the systems behind them aren’t protected adequately. Recent research shows how a healthcare AI assistant used in a pilot program in Utah was manipulated to spread vaccine conspiracy theories, recommend methamphetamine as a treatment for social withdrawal, generate SOAP notes that tripled a patient’s baseline OxyContin dosage, and even provide instructions for cooking methamphetamine.

Cybersecurity is already a constant challenge for healthcare leaders, and AI is now adding another dimension to that risk. When AI assistants interact directly with patients, they create a new type of attack surface. Instead of targeting infrastructure such as servers or databases, attackers may now be able to manipulate the behavior of the system itself through conversation.

A Different Kind of Vulnerability

Traditional healthcare cybersecurity focuses on protecting infrastructure. Security teams work to safeguard networks, medical devices, electronic health records, and other systems that store or transmit sensitive patient information.

LLMs operate differently. These systems generate responses based on instructions embedded in system prompts that guide how the AI should behave. Developers use LLM prompts to define tone and rules about what the assistant is allowed to say, what it should avoid, and how to handle sensitive topics. In healthcare, those instructions often include guardrails such as avoiding diagnosis, referencing trusted sources, or escalating sensitive questions to human clinicians.

However, language models do not distinguish between legitimate and malicious instructions. Models are designed to please users and execute their instructions. This weakness enables what security researchers call prompt injection and model manipulation attacks. In a prompt injection scenario, an attacker hides instructions inside what appears to be a normal user message. The AI assistant processes the message as text and may follow the attacker’s instructions alongside the user’s legitimate instructions.

The attacker does not need to breach the hospital network or bypass authentication controls. The interaction takes place entirely through the chatbot interface.

When an AI Assistant Is Manipulated

Consider how many healthcare organizations are beginning to integrate AI assistants into patient portals, telehealth systems, and digital front doors. If an attacker successfully manipulates the system’s prompt behavior, the consequences may not appear immediately as a technical breach. The hospital’s servers may remain intact and patient records untouched.

Instead, the impact appears in the system’s responses. The assistant may generate misleading medical explanations or present fabricated information as legitimate clinical guidance. It could incorporate false regulatory updates or manipulated treatment guidelines into its recommendations. As the above example illustrates, the system may even generate structured medical documentation, such as SOAP notes that incorporate manipulated information and present it to clinicians as authoritative context.

While none of these scenarios require access to sensitive patient data, they can still influence medical conversations and decision-making. In healthcare, trust plays a central role in patient relationships. If digital tools provide inaccurate or manipulated information, confidence in the institution behind those tools can erode quickly.

Why Healthcare Faces Unique Risks

Many industries are experimenting with AI assistants, but healthcare carries particularly high stakes. Patients tend to view hospital systems as trusted authorities. When information appears on an official hospital website or patient portal, people often assume it has been medically reviewed.

That assumption creates a dangerous dynamic if an AI assistant is manipulated. Even subtle misinformation can influence how patients interpret symptoms, manage medications, or decide whether to seek care. While the system may not be issuing formal diagnoses, its responses still shape patient decisions.

In this sense, AI assistants are becoming part of the clinical information environment. Their outputs influence conversations between patients and providers, which makes their integrity a security issue as much as a technical one.

Key Security Practices for Healthcare AI Systems

Healthcare organizations deploying AI assistants should treat them as operational software systems rather than simple digital chat tools. Since these systems interact directly with patients and clinicians, their behavior must be governed with the same rigor applied to other clinical technologies.

Several security practices can significantly reduce the risk of manipulation.

  • Validate and Sanitize User Inputs: Prompt injection attacks often rely on hidden instructions embedded in normal-looking messages; filtering and validating user inputs before they reach the model can reduce the likelihood that malicious instructions will be processed
  • Separate System Instructions from User Conversations: System prompts should be isolated from user input so that attackers cannot easily override the guardrails that define how the AI should behave; clear separation between system instructions and conversational content makes prompt manipulation more difficult
  • Monitor AI Outputs for Anomalies: AI assistants should be monitored continuously for abnormal responses or behavior patterns; logging and reviewing outputs can help identify situations where the system may be generating misleading or manipulated information
  • Conduct Adversarial Testing Before Deployment: Security teams should simulate prompt injection attempts during development and staging environments; red-team exercises can reveal weaknesses in prompt design and system architecture before the AI system interacts with patients
  • Adopt Emerging AI Security Frameworks: Guidance such as the OWASP Top 10 for Large Language Model Applications provides a useful framework for understanding common AI risks, including prompt injection, data leakage, and model manipulation; these frameworks help organizations incorporate AI risks into their broader security strategy

As healthcare organizations expand the use of AI-driven patient engagement tools, these practices can help ensure that innovation does not come at the expense of safety, reliability, or trust.

AI Innovation Must Be Secured

AI assistants have the potential to improve healthcare by reducing administrative burdens and helping patients access information more quickly. However, these systems also introduce a new category of cyber risk. Healthcare organizations must treat AI assistants with the same level of scrutiny applied to other clinical technologies. As AI adoption accelerates, ensuring these systems remain trustworthy will require strong governance, security testing, and continuous monitoring.

About Dror Zelber

Dror Zelber is VP of Product Marketing and formerly VP of Management at Radware and a 30-year veteran of the high-tech industry specializing in security, networking, and mobility technologies. He holds a bachelor’s degree in computer science and an MBA from Tel Aviv University.



< + > ESO Acquires d2i | D2 Solutions Acquires ProModRx

Check out today’s featured companies who have recently completed an M&A deal, and be sure to check out the full list of past healthcare IT M&A.


ESO Acquires d2i, Accelerating Emergency Intelligence to Drive Performance and Improve Outcomes Across Fire, EMS, and Health Systems

The Combined Platform will Create Connected Intelligence Across the Full Emergency Lifecycle

ESO Solutions, Inc., a leading data services and software company serving fire departments, EMS, hospitals, and government agencies, today announced it has acquired d2i, a healthcare performance improvement company that transforms siloed electronic medical record (EMR), revenue cycle management, scheduling, and patient experience data into operationally actionable insights for emergency departments and hospitals.

Health systems and emergency services generate vast amounts of data across dispatch, field care, and hospital settings. However, that data remains fragmented, limiting the ability to understand what drives patient outcomes and experience, or operational improvements such as emergency department boarding, throughput, or provider performance. ESO’s acquisition of d2i advances ESO’s vision of building end-to-end emergency intelligence—from understanding community risk, dispatch, response, incident management, prehospital, emergency department, hospital, and post-acute care.

“This acquisition enables ESO to be the first in our industry to offer connected intelligence from the community risk and initial call through the emergency department, hospital care, and beyond,” said Eric Beck, CEO at ESO. “d2i is a recognized leader in ED and hospital performance analytics. With ESO and d2i coming together, the combined data represents the largest integrated prehospital and hospital data asset available to drive emergency intelligence. Together, we can close the loop between what happens in the field and what happens in the hospital and in the post-acute setting, unlocking a level of insight that simply has not been available to emergency services until now.”

The acquisition builds on ESO’s existing partnerships with more than 3,000 hospitals as well as ESO Health Data Exchange, a leading EMS interoperability solution that connects hospitals with emergency services providers in real-time across the U.S. and the globe. d2i supports more than 60 million hospital encounters and manages 10 billion data points in its warehouse…

Full release here, originally announced April 7th, 2026.


D2 Solutions Acquires ProModRx to Expand Technology and Services Supporting Enhanced Patient Access and Engagement

ProModRx Adds Infrastructure and Capabilities that Strengthen D2’s Support from Initial Access through Ongoing Patient Services

D2 Solutions, a healthcare consulting and technology company focused on market access, reimbursement, and patient support, has acquired ProModRx, a cloud-based technology platform designed to help speed patient access to prescription medications.

The acquisition expands D2’s services to better assist pharmaceutical and medical device manufacturers, traditional hubs, and pharmacies across the full scope of the patient journey. That support includes therapy initiation, prescription capture, electronic benefits verification, prior authorization support, dispensing coordination, and ongoing patient engagement.

The need for stronger connectivity between manufacturers, prescribers, and patients is significant. In recent D2 research, 21% of adults delayed starting a prescribed medication because of confusion or access issues. In addition, 11% of respondents did not pick up their medication from the pharmacy, 12% skipped doses, and 8% stopped treatment earlier than planned.

The deal also comes as healthcare stakeholders – especially patients, providers, and manufacturers – face growing pressure to manage access, reimbursement, and fulfillment amid a more complex pricing and policy environment.

“D2’s focus has always been responding to market demands and the needs of our partners by helping address gaps in market access, operational support, and patient services,” said Dean Erhardt, D2 Founder and CEO…

Full release here, originally announced April 16th, 2026.



Thursday, April 16, 2026

< + > Standard AI is a Black Box. Here is Why RAAPID Built a Glass One for Risk Adjustment.

The problem with AI in the revenue cycle is transparency. It is powerful, but it usually operates as an unpredictable black box. In risk adjustment, you simply can’t afford to guess how an algorithm arrived at a billing code. You need a glass box. You need absolute defensibility. Here is how that is finally becoming a reality.

Healthcare IT Today sat down with Chetan Parikh, Founder and CEO of RAAPID, to explore the evolution of risk adjustment technology. We discussed the challenges of relying on standard NLP and why organizations need technology that balances accurate coding with strict regulatory compliance.

What This Conversation Revealed

  • Neuro-symbolic AI offers a glass box approach. By combining large language models with proprietary knowledge graphs, organizations gain high accuracy and fully defensible evidence without the risk of hallucinations.
  • AI reduces the mental load on medical coders. Highly accurate AI tools allow coding teams to stop sweating the small details and start operating at the top of their license.
  • Technology must balance revenue and compliance. The right AI ensures providers get paid for the services they deliver while preventing the regulatory risks of over-billing.

Neuro-Symbolic AI Provides Defensible Evidence

Standard natural language processing casts a wide net but often struggles with precision in complex clinical documentation. Health IT leaders know that adopting large language models brings risks of hallucination, making pure generative AI difficult to trust for revenue cycle applications. The solution, according to Parikh, lies in neuro-symbolic AI.

Parikh explained how RAAPID addresses this industry hurdle by marrying large language models with proprietary knowledge graphs. He noted that their technology focuses on “taking full advantage of the large language models and at the same time making sure that we are not hallucinating”. Parikh further detailed that this approach is all about “converting from a black box to a glass box, where everything is defensible and evidence based.”

Elevating the Role of Medical Coders

Finding and retaining highly skilled medical coding talent is a persistent challenge for provider organizations. When legacy NLP systems only deliver moderate out-of-the-box accuracy, human coders are forced to spend excessive time verifying outputs.

However, with RAAPID’s neuro-symbolic powered AI systems, organizations can achieve more than ninety percent accuracy. This dramatically improves the entire workflow for coding staff. Parikh highlighted this by  stating that “when you have an AI that is as accurate as 91 – 92% out of the box, the coder’s mental load is significantly reduced, and the coders are now operating at the top of the license rather than they trying to identify everything.”

Hitting the Sweet Spot Between Revenue and Compliance

Risk adjustment requires walking a tightrope. If an organization under-codes, they will not capture the true value of care delivered. Conversely, aggressively capturing codes without sufficient documentation triggers intense scrutiny from federal regulators.

“If your AI is unable to identify codes that are truly billable, then you did the work, you provided the service, but you are not getting paid for it,” Parikh summarized. “But you have to make sure to not be overcoding and overbilling.”

Health systems need a middle ground where they capture accurate reimbursement while remaining securely within regulatory boundaries.

The Bottom Line

Risk adjustment technology needs to move beyond good-enough AI with opaque models. As organizations evaluate new AI tools for their revenue cycle, the focus must be on accuracy, defensibility, and operational efficiency. Implementing AI that provides clear evidence pathways, like what RAAPID offers, protects the organization from compliance risks while ensuring fair reimbursement for care delivered.

What Healthcare IT Leaders Are Asking

What is neuro-symbolic AI in healthcare? Neuro-symbolic AI combines the pattern recognition capabilities of large language models with the structured logic of proprietary knowledge graphs. This hybrid approach provides the broad contextual understanding of generative AI while anchoring the outputs in factual, evidence-based rules to prevent hallucinations.

How does AI impact medical coding compliance? Advanced AI improves coding compliance by linking suggested codes directly to documented clinical evidence. By surfacing only defensible codes, the technology helps organizations avoid over-billing while ensuring they capture all appropriate revenue for services rendered.

Why is a “glass box” approach important for risk adjustment? A glass box approach allows human auditors to see exactly how an AI model arrived at a specific coding conclusion. In highly regulated areas like risk adjustment, being able to trace a suggested code back to the exact clinical documentation is essential for defending claims during audits.

Learn more about RAAPID at https://www.raapidinc.com/

Listen and subscribe to the Healthcare IT Today Interviews Podcast to hear all the latest insights from experts in healthcare IT.

And for an exclusive look at our top stories, subscribe to our newsletter and YouTube.

Tell us what you think. Contact us here or on Twitter at @hcitoday. And if you’re interested in advertising with us, check out our various advertising packages and request our Media Kit.

RAAPID is a sponsor of Healthcare Scene



< + > When Phones Aren’t an Option: How UCHealth Modernized Meal Ordering in a Behavioral Health Unit

Designing a modern behavioral health unit often means intentionally leaving out bedside phones – a standard fixture elsewhere in the hospital, but a safety and security concern in psychiatric care.  But when patient’s can’t call in their own meal orders, what then? For UCHealth, that meant taking advantage of an technology that was not yet being used and changing their workflow to accommodate it. Better experiences and higher productivity were the result.

Healthcare IT Today sat down with Jenna Sampson, Nutrition Systems Coordinator at UCHealth. When their new behavioral health unit came online, Sampson and her team had to rethink their organization’s standard phone-based meal ordering process, ultimately deploying an existing app.

What This Conversation Revealed

  • Intentional constraints drive digital workflows. Behavioral health unites routinely exclude bedside phones for safety reasons and because of that UCHealth moved to meal ordering onto a mobile app which ended up being better for everyone.
  • Safety requires strict EHR boundaries. To handle risky free-text allergies in Epic, the system implements a hard stop when allergies are found to be in free-text in the EHR. Nothing moves forward until those allergies are coded discretely into Epic.
  • Frontline tools boost system metrics. Providing nurses with direct mobile access turned a potential chore into a preferred workflow, driving regional app adoption to record highs.

Building Patient Workflows Around Intentional Constraints

When UCHealth opened a new 55-bed behavioral health unit in Fort Collins, the facility featured a very specific design choice. The rooms intentionally lacked bedside phones. Standard practice across the health system relied on patients calling the kitchen to order their meals.

Without patient phones, the burden would fall entirely on the nursing staff to call in the orders. That alternative would have meant nurses calling orders into the call center which in turn would create longer wait times for other patients and bog down the call center with additional, unnecessary call volumes.

The solution was already in their technology stack – the Illumia (formerly CBORD) Patient App.

“We already had the Patient App,” shared Sampson. “We decided to explore it and made it successful.”

Giving nurses direct mobile access transformed the ordering process and avoided a massive call center bottleneck.

Baking Safety into the Epic Workflow

A major challenge in dietary ordering is handling allergies entered manually in the electronic health record. UCHealth uses Epic, which allows clinical staff to input allergies as free text in the “other allergy” field.

A free text entry like “strawberry”, for example,  will not trigger the automated dietary compliance system, creating a serious patient safety risk. The team solved this by creating a new operational workflow. “Any patient that has [something entered into] the other allergy field are ineligible to order through the app to ensure patient safety,” shared Sampson. “When this happens, our staff go into Epic and codify the allergy properly and remove the data from the ‘other allergy’ field. Now that patient’s meals can be ordered through the app.”

The Bottom Line

Tying the mobile app directly to Epic’s allergy compliance engine ensures patient safety remains the top priority. While nurses initially had some resistance, they quickly came around – Sampson reports they now “fight over who gets to put the orders in.” The unit is inputting 99 percent of patient meal orders through the app. That localized success drove the region’s overall patient meal ordering from less than 1 percent to 19 percent. The unit also avoided adding a full FTE to the call center, translating to significant cost savings. The new unit is serving as a potential model for all other units at UCHealth.

What Healthcare IT Leaders Are Asking

How do you handle free-text food allergies in digital ordering?
Free text fields in an electronic health record fail to map to automated dietary compliance systems. The safest approach is to restrict digital meal ordering for any patient with an “other” allergy listed. Clinicians must manually review the chart and convert the free text into a coded, system-recognized allergy before the patient can use self-service apps.

What is the best way to drive clinical adoption of a mobile app?
Removing friction at the point of care is the fastest path to adoption. Pre-loading the required applications directly onto corporate-issued mobile devices ensures immediate access for nursing staff. When a tool genuinely saves time compared to calling a busy contact center, clinical teams will naturally gravitate toward it.

Can localized digital workflows impact system-wide metrics?
Testing a distinct workflow in a controlled environment provides a blueprint for broader rollouts. A near-perfect adoption rate in a single unit can generate enough volume to significantly move regional utilization metrics. This localized data serves as compelling proof to secure buy-in from other clinical departments.

Learn more about UCHealth at https://www.uchealth.org/

Learn more about Illumia at https://illumiatech.com/

Listen and subscribe to the Healthcare IT Today Interviews Podcast to hear all the latest insights from experts in healthcare IT.

And for an exclusive look at our top stories, subscribe to our newsletter and YouTube.

Tell us what you think. Contact us here or on Twitter at @hcitoday. And if you’re interested in advertising with us, check out our various advertising packages and request our Media Kit.



< + > Jimini Health Raises $17M | Ambient Clinical Analytics Secures $5M Strategic Investment

Check out today’s featured companies who have recently raised a round of funding, and be sure to check out the full list of past healthcare IT fundings.


Jimini Health Raises $17M as Behavioral Health Systems Face Growing Pressure to Manage Patient AI Use with Clinical-Grade Infrastructure

Jimini Addresses a Major Market Need as Patients Turn to General-Purpose AI for Mental Health Support, Providers Face Rising Pressure and Opportunity to Implement Safe, Clinician-Supervised Solutions

Funding will Enable AI Development Across Additional Clinical Settings and Expansion with Large Clinical Partners Nationwide

Jimini Health today announced $17 million in seed funding from M13, Town Hall Ventures, LionBird, Zetta Venture Partners, and OneMind, bringing total funding to more than $25 million. The company is building clinician-supervised patient-facing AI infrastructure for large behavioral health provider organizations, enabling providers to deploy patient-facing AI safely, compliantly, and at scale, with licensed clinicians maintaining oversight of every patient interaction.

The Reality Health Systems Can No Longer Ignore

Patients at large behavioral health systems are already using AI for mental health support between appointments, without clinician visibility or control. This cultural shift is happening regardless of whether providers participate, placing new clinical, operational, and legal pressure on behavioral health organizations to respond.

More than 5.4 million U.S. adolescents and young adults now use AI chatbots for mental health advice. More than 1 million people a week have conversations with ChatGPT that include explicit indicators of suicidal planning or intent. Character.AI and Google have already settled wrongful death lawsuits brought by families of teenagers who died by suicide following unsupervised AI interactions.

“When 1 million people a week are discussing suicide with a product that was never designed to handle it, that’s not an edge case, it’s a systemic gap,” said Morgan Blumberg, Partner at M13…

Full release here, originally announced March 31st, 2026.


Ambient Clinical Analytics Secures $5M Strategic Investment and Appoints Brian Tufts as CEO to Accelerate Growth

Ambient Clinical Analytics, a pioneer in software that combines real-time clinical analytics with clinical decision support and workflow tools, today announced the successful closing of a $5 million strategic funding round with key investments from Mairs & Power Venture Capital as well as a Fortune 500 strategic MedTech firm. The company also announced the appointment of Brian Tufts as Chief Executive Officer, signaling a new phase of growth and market expansion.

Healthcare providers today work in a complex environment in which critical patient data is not always readily available and easily interpreted in real time. In addition, care coordination amongst teams, especially in sepsis care, is challenging; often leading to variability in care.

Ambient Clinical Analytics addresses these challenges with an integrated offering that delivers real-time clinical analytics, clinical decision support, and integrated workflow automation to enable clinicians to see the full picture of a patient’s condition as it evolves.

The new capital will be used to accelerate innovation, expand adoption across health systems, and scale Ambient’s product, including its FDA Class II-cleared AWARE platform system. Built on clinically validated algorithms and Mayo Clinic–licensed technology, the company’s platform transforms complex clinical data into intuitive, actionable insights that support faster, more informed decision-making. Additionally, hospitals have leveraged the integrated workflow tools to improve sepsis protocol adherence, which has been correlated to improve both clinical outcomes and financial measures.

Brian Tufts joins Ambient Clinical Analytics with extensive leadership experience from Vantive and Baxter, where he led growth across complex healthcare environments. As CEO, Tufts will focus on expanding Ambient’s market presence, deepening strategic partnerships, and advancing the company’s mission to redefine how care teams operate in high-acuity environments.

“Hospitals do not lack data; yet doctors and nurses have few resources that deliver clarity in a dynamic clinical environment,” said Brian Tufts, Chief Executive Officer…

Full release here, originally announced March 30th, 2026.



Wednesday, April 15, 2026

< + > CommonWell Expands Data Exchanges in Volume and in Purpose

In this video, Paul L Wilder, Executive Director of the CommonWell Health Alliance, discusses the spread of health data exchange as it involves not just providers but new actors such as payers, public health, and patients themselves.

CommonWell, a nonprofit QHIN that started in 2013 and has an enormous reach today, contains IT vendors ranging from startups to big EHR vendors, and providers now as well. For a long time, Wilder says, EHRs supported only unidirectional data exchange: they would allow it to be extracted but not inserted. Now it’s more bidirectional.

While CommonWell is still investing in and supporting FHIR, Wilder noted that the ability of AI to extract key data from plain text documents, and to convert data between formats, makes the FHIR standard less important. Many sites go from source document to their own storage without an intermediate FHIR step.

However, FHIR is valuable for segmenting data and extracting just a few fields. This is important in public health, because transferring complete records on huge numbers of patients creates the security risk of a “honeypot” that could attract attackers.

In contrast, most providers want complete records. Patients do too, although Wilder suggests that in a few cases (such as private notes created by psychotherapists) the provider might be without some data. In our discussion, Wilder describes patients’ interest in their data, and advises that at the very least, they should examine their data for errors.

Wilder also discusses the benefits and challenges of two recent government policies: TEFCA and CMS-aligned networks. One observation he made is that TEFCA applies to covered entities, and thus excludes some important institutions such as free clinics. He also noted that CMS-aligned networks require bilateral agreements, which is very cumbersome to arrange among large groups of institutions.

However, some institutions have vastly increased data exchanges through TEFCA, and therefore increased the number of documents by orders of magnitude. For instance, more errors are being reported to the government because it’s technically easier to report the data. Some providers say they’re getting too much data, but Wilder has little sympathy for that complaint.  He did suggest that as the volume of documents increases, it’s also easier for malicious breaches to get lost and go unnoticed.  That’s a challenge that the industry is going to have to work on.

Check out our interview with Paul Wilder from CommonWell to learn more about the latest on healthcare interopeability.

Learn more about CommenWell Health Alliance: https://www.commonwellalliance.org/

Listen and subscribe to the Healthcare IT Today Interviews Podcast to hear all the latest insights from experts in healthcare IT.

And for an exclusive look at our top stories, subscribe to our newsletter and YouTube.

Tell us what you think. Contact us here or on Twitter at @hcitoday. And if you’re interested in advertising with us, check out our various advertising packages and request our Media Kit.

CommonWell Health Alliance is a proud sponsor of Healthcare Scene.



< + > Why BMJ Group is Embedding 200 Years of Evidence Directly into Clinical Workflows

Clinicians are drowning in data but starved for actual answers. What if we stopped making them hunt for evidence? The era of standalone clin...