It is tempting to assume that AI should function as a single, all-knowing system. Healthcare keeps learning that clinical work is too specific, too regulated, and too context-dependent for a monolithic model to keep up reliably. NextGen Healthcare is taking a different path, leaning into specialization.
A New Way to Think About Clinical AI Workflows
At NextGen Healthcare (NextGen), Chief Technology Officer Jacob Sims outlined why the future of AI in healthcare will look less like a single engine and more like a coordinated team. His work centers on specialization, fairness in model design, and creating a workplace where AI experimentation is normal instead of exceptional.
Healthcare IT Today sat down with him at the company’s annual user group meeting, #NextGenUGM25.
Key Takeaways
- Coordinated teams of specialized AI agents are better than an all-in-one solution. Rather than relying on broad, generalized AI, NextGen is betting on a multi-agent model where each agent is deeply trained in a narrow domain and orchestrated by a lead agent, NextGen® Intelligent Agent (Nia).
- Ethics in AI is really a data problem. Ethical AI is the discipline of ensuring fairness by interrogating the completeness, diversity, and depth of the data used to train models.
- Building an AI-native organization requires cultural permission—not just tooling. AI adoption is taking off at NextGen because teams are encouraged to experiment, learn, and share wins—not because a training program told them to.
Multi-agent Orchestration Emerges as a Practical Model for Clinical AI
Sims described a shift toward AI systems built from multiple domain-specific agents that each carry deep knowledge in their own slice of the clinical or operational workflow. As Sims put it, “NextGen® Intelligent Agent (Nia) is like a team lead for a suite of agents. It is really that orchestrator that’s connecting all of those other agents that have deep, deep, deep domain expertise in their individual area.”
NextGen® Intelligent Agent (Nia) coordinates these agents and adapts to how different users work, whether they are providers, medical assistants, billers, or schedulers.
The mental picture Sims drew resembles a well-run clinical team: each member focuses on what they do best rather than expecting a single clinician to do everything.
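The orchestration pattern Sims describes can be sketched in a few lines of code. This is purely illustrative, not NextGen’s actual implementation: a hypothetical lead agent classifies a request by domain and delegates it to a narrowly specialized agent, escalating when no specialist fits.

```python
# Hypothetical sketch of a lead agent delegating to domain specialists.
# All names (SpecialistAgent, LeadAgent, the example domains) are
# illustrative assumptions, not NextGen's API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SpecialistAgent:
    domain: str                    # the narrow slice of the workflow it owns
    handle: Callable[[str], str]   # deep expertise in that slice only

class LeadAgent:
    """Plays the 'team lead' role: looks at a request, then delegates."""

    def __init__(self, specialists: list[SpecialistAgent]):
        self.specialists = {a.domain: a for a in specialists}

    def route(self, domain: str, request: str) -> str:
        agent = self.specialists.get(domain)
        if agent is None:
            # No deep expertise available; a real system might escalate to a human
            return f"no specialist for '{domain}'"
        return agent.handle(request)

# Example specialists mirroring the user roles mentioned above
billing = SpecialistAgent("billing", lambda r: f"[billing] coded claim for: {r}")
scheduling = SpecialistAgent("scheduling", lambda r: f"[scheduling] slot found for: {r}")

lead = LeadAgent([billing, scheduling])
print(lead.route("billing", "office visit, level 3"))
```

The point of the structure is that each specialist stays small and deep, while the lead agent’s only job is coordination, much like the clinical team analogy above.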
Why AI Ethics and Fairness Need Scrutiny of Training Data
As healthcare increasingly relies on AI, the ethical use of the technology will loom larger for clinical and operational leaders. When asked about “Ethical AI,” Sims reframed the issue around the raw inputs that feed AI models.
“Ethics is really about creating fairness in the world of AI, especially early in the design process,” he said. “We should be asking: What is the data that I’m using to train the AI? Is the data complete? Does it represent a diverse set of information?”
The danger, according to Sims, is when AI is trained on data that is incomplete or missing swaths of the population that it is intended to help. Blind spots and biases can unknowingly be propagated when AI training data is not scrutinized.
To Sims, ethical AI means taking the time to check for representation and to question the underlying assumptions of AI/ML/LLM algorithms before proceeding.
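One concrete version of the check Sims’ questions suggest is auditing a training dataset for groups that are absent or underrepresented before any model is trained. The sketch below is a hypothetical illustration, not a NextGen tool; the field names, threshold, and toy data are all assumptions.

```python
# Illustrative representation audit: flag expected population groups that
# are missing or underrepresented in the training data.

from collections import Counter

def representation_gaps(records, field, expected_groups, min_share=0.05):
    """Return each expected group whose share of `records` falls below min_share."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group in expected_groups:
        share = counts.get(group, 0) / total if total else 0.0
        if share < min_share:
            gaps[group] = share   # blind spot: the model would barely see this group
    return gaps

# Toy dataset that skews heavily toward one age band
data = [{"age_band": "18-40"}] * 90 + [{"age_band": "65+"}] * 10
print(representation_gaps(data, "age_band", ["18-40", "41-64", "65+"]))
# The "41-64" band is entirely absent, so it is flagged with share 0.0
```

A check like this does not make a model fair by itself, but it surfaces the kind of blind spot Sims warns about before training begins rather than after deployment.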
An AI-First Culture Grows When People Have Permission to Experiment
Sims sees the biggest gains from AI not in the tools themselves but in the behavior they unlock. He is working internally at NextGen to create a culture where people use AI daily, share what works, and build confidence through practice rather than classroom-style instruction.
“We’re creating a set of innovation challenges tied to real-world needs and culture goals,” shared Sims. “We’re seeing gains in a lot of areas. It is really rewarding to encourage our staff to lean into AI.”
This cultural acceptance of AI is quickly becoming an important leadership challenge. If staff are wary of AI technology, adoption will be slow and difficult. Organizations that resist AI may be quickly left behind by competitors who invested in building a culture of innovation.
A More Grounded Path for AI in Healthcare
What emerges from Sims’ comments is a more grounded path for AI in healthcare. Specialized agents follow the natural contours of clinical work. Fair data practices reduce the blind spots that can affect patient care. And a culture that welcomes experimentation keeps progress tied to real needs. The picture Sims paints looks less like a dramatic overhaul and more like a better-organized team learning to work together.
Learn more about NextGen at https://www.nextgen.com/
Listen and subscribe to the Healthcare IT Today Interviews Podcast to hear all the latest insights from experts in healthcare IT.
And for an exclusive look at our top stories, subscribe to our newsletter and YouTube.
Tell us what you think. Contact us here or on Twitter at @hcitoday. And if you’re interested in advertising with us, check out our various advertising packages and request our Media Kit.