The following is a guest article by Steve Biko Onyambu, MD, Critical Care Physician at Abbott Northwestern Hospital
Hospital leaders invest heavily in throughput tools and AI-enabled analytics: Expected Discharge Dates (EDDs), discharge milestones, command-center dashboards, AM-PAC scores, and ranked priority lists. These tools are essential for coordination at scale. Yet nearly every executive, nurse leader, and physician leader recognizes the same problem: these tools are not relied upon for prospective decision-making. They exist, but are often relegated to after-the-fact documentation, confirming what clinicians already knew rather than informing earlier action.
The result is a paradox. Hospitals have more dashboards and analytics than ever. Yet coordination happens late, staffing absorbs avoidable strain, and throughput gains erode before they materialize.
This is not a tooling problem. It is an infrastructure problem. Many hospitals are deploying AI-driven operational tools faster than they are validating the clinical signals those tools depend on.
The Hidden Failure of Throughput Systems
Most throughput tools are not wrong. They are simply late.
Expected Discharge Dates stabilize only after clinical uncertainty resolves. Milestones become reliable only once they are nearly complete. Command-center views become actionable only when variability has already collapsed. By the time the signal is trustworthy, the window for early coordination has already closed.
Frontline teams have learned this pattern. Early signals oscillate, reverse, or conflict with clinical reality. Over time, clinicians adapt rationally. They stop relying on these tools for decision-making. The tools become lagging indicators, trailing behind clinical judgment rather than informing it. Decisions are made at the bedside; dashboards document them afterward.
From an operational perspective, this creates a predictable failure mode with direct implications for length of stay, staffing utilization, and operating margin:
- Early coordination is deferred
- Transfers and discharges compress into narrow windows
- Staffing mismatches grow
- Weekend and handoff cascades intensify
The system becomes optimized for late certainty, not early coordination.
Why Better Dashboards Do Not Solve the Problem
Many organizations respond by adding more analytics, more fields, or more predictive models. This rarely works.
In an era where hospitals are turning to predictive and AI-driven operational tools, the integrity of the upstream signal layer determines whether those systems create coordination or amplify volatility.
The reason is structural. Throughput artifacts are coordination representations, not clinical truth. They are downstream of the patient’s evolving physiologic trajectory. No amount of visualization, and no amount of machine learning, can fix a signal that arrives too late or lacks grounding in the underlying clinical state.
Most enterprise tools infer readiness from administrative events: orders placed, milestones completed, consults signed. But clinical readiness emerges earlier, along discernible trajectories, long before those events occur. By the time administrative markers appear, the clinical trajectory has already declared itself. The dashboard is simply catching up.
Without an upstream signal layer, dashboards are forced to infer readiness indirectly, and uncertainty leaks through as volatility. Adding AI on top of unreliable inputs produces sophisticated predictions built on unstable foundations.
A Different Approach: Clinical Signal Infrastructure
What hospitals are missing is not another artifact, but a pre-artifact layer.
Clinical Signal Infrastructure for Throughput introduces a simple but powerful shift: Patient trajectory → clinical proto-signals → enterprise artifacts → operational decisions
Instead of asking artifacts to guess readiness, this infrastructure computes early, bounded signals directly from the patient’s evolving clinical trajectory.
These signals do not assert final readiness. They answer a different question: Is this patient’s trajectory converging toward readiness, and with what confidence?
By design, they are deterministic, explainable, time-aware, and bounded. They are recomputed continuously as data evolves, and they explicitly surface confidence and stability.
This makes them suitable for early coordination rather than retrospective documentation. Critically, it provides AI and predictive tools with trustworthy upstream inputs instead of volatile administrative proxies.
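To make the idea concrete, here is a minimal sketch of what such a proto-signal might look like in code. The schema, field names, and thresholds below are hypothetical illustrations, not the framework's actual specification; the point is that the signal is deterministic, recomputed from the trajectory itself, and carries explicit confidence and stability rather than a binary readiness flag.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import pstdev

@dataclass(frozen=True)
class ProtoSignal:
    """A bounded, explainable readiness signal (illustrative schema)."""
    computed_at: datetime
    converging: bool   # is the trajectory moving toward readiness?
    confidence: float  # 0.0-1.0, driven by data completeness
    stable: bool       # low recent volatility in the inputs

def compute_proto_signal(mobility_scores: list[float],
                         now: datetime) -> ProtoSignal:
    """Deterministically recompute the signal from the evolving trajectory.

    mobility_scores: recent AM-PAC-style mobility observations, oldest first.
    All thresholds below are placeholders for illustration only.
    """
    if len(mobility_scores) < 3:
        # Too little data: explicit low-confidence, non-converging state.
        return ProtoSignal(now, converging=False, confidence=0.2, stable=False)
    recent = mobility_scores[-3:]
    trend = recent[-1] - recent[0]   # simple slope proxy
    volatility = pstdev(recent)      # population std dev of recent values
    return ProtoSignal(
        computed_at=now,
        converging=trend > 0,
        confidence=min(1.0, len(mobility_scores) / 5),
        stable=volatility < 2.0,
    )
```

Because the computation is deterministic, the same inputs always yield the same signal, and every field can be traced back to the observations that produced it.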
Safe Failure Matters More than Early Accuracy
A common concern with early signals is safety. What happens when they are wrong?
Clinical Signal Infrastructure addresses this directly through safe failure modes.
Every signal carries metadata about data completeness and recency, trajectory stability versus volatility, and explicit indeterminate states. When uncertainty increases, the system degrades gracefully. It flags drift, suppresses acceleration recommendations, and surfaces missing or unstable inputs instead of forcing a binary answer.
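A rough sketch of that degradation logic, with hypothetical state names and placeholder thresholds (not validated clinical values), might look like this:

```python
from datetime import datetime, timedelta
from enum import Enum

class SignalState(Enum):
    CONVERGING = "converging"        # trajectory supports earlier coordination
    INDETERMINATE = "indeterminate"  # uncertainty too high for any call
    DRIFTING = "drifting"            # trajectory has reversed; flag for review

def degrade_safely(last_update: datetime,
                   now: datetime,
                   volatility: float,
                   trend: float,
                   max_staleness: timedelta = timedelta(hours=12),
                   max_volatility: float = 2.0) -> SignalState:
    """Return an explicit indeterminate state instead of a forced binary answer.

    Thresholds are illustrative placeholders only.
    """
    if now - last_update > max_staleness or volatility > max_volatility:
        # Stale or unstable inputs: suppress any acceleration recommendation.
        return SignalState.INDETERMINATE
    if trend < 0:
        # Surface the drift rather than hiding it behind a stale prediction.
        return SignalState.DRIFTING
    return SignalState.CONVERGING
```

The key design choice is that missing or volatile data produces a visible indeterminate state, never a confident-looking guess.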
Unsafe early signals do not just cause errors; they destroy trust. Once frontline teams learn that early signals cannot fail safely, they stop relying on them altogether. Bounded uncertainty, by contrast, preserves trust while still enabling earlier coordination.
Measuring What Actually Matters: Frozen-Time Validation
Traditional analytics ask, “Was the prediction correct?”
Throughput operations need a different question: Did the signal become available early enough to matter, without hindsight bias?
Clinical Signal Infrastructure uses frozen-time validation. Signals are evaluated only with information available at a given decision point, mirroring real-world conditions. This allows leaders to measure lead time, stability, and slippage detectability. The framework evaluates signal integrity, not predictive performance.
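The frozen-time idea can be sketched in a few lines. The helper below is a hypothetical illustration: it evaluates a signal using only observations timestamped at or before the decision point, then reports how many hours of lead time the signal would have provided.

```python
from datetime import datetime

def frozen_time_lead_time(observations, decision_point, actual_discharge, signal_fn):
    """Evaluate a signal with no hindsight, then measure its lead time.

    observations: list of (timestamp, value) pairs for one patient.
    signal_fn: returns True when the frozen snapshot indicates convergence.
    Returns lead time in hours if the signal fired at decision_point, else None.
    """
    # Freeze time: only data visible at the decision point may be used.
    visible = [v for t, v in observations if t <= decision_point]
    if signal_fn(visible):
        return (actual_discharge - decision_point).total_seconds() / 3600
    return None
```

Running this across many historical decision points yields the metrics the framework cares about, such as how early a stable signal typically becomes available, without letting later data leak into the evaluation.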
Shadow-Mode Deployment: Reducing Adoption Risk
This infrastructure does not require a disruptive workflow change.
It is designed for shadow-mode deployment: read-only ingestion from EHR, FHIR, and HL7v2 feeds; no automated execution of irreversible actions; and parallel review alongside existing dashboards. Shadow-mode allows organizations to build evidence, calibrate thresholds, and assess safety before operational reliance.
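In practice, shadow mode reduces to a read-only comparison loop. The sketch below is illustrative (the data shapes are assumptions, not a real EHR interface): shadow signals are compared against existing dashboard values and discrepancies are logged for calibration, with nothing ever written back to the source systems.

```python
def shadow_compare(shadow_signals: dict, dashboard_edds: dict) -> list:
    """Read-only comparison of shadow signals against existing dashboard EDDs.

    shadow_signals / dashboard_edds: patient_id -> predicted discharge date string.
    Returns (patient_id, shadow_value, dashboard_value) tuples where they differ,
    for offline review. No writes occur to the EHR or the dashboard.
    """
    discrepancies = []
    for pid, shadow_date in shadow_signals.items():
        dashboard_date = dashboard_edds.get(pid)  # None if dashboard has no EDD
        if dashboard_date != shadow_date:
            discrepancies.append((pid, shadow_date, dashboard_date))
    return discrepancies
```

Reviewing these discrepancies over weeks of parallel operation is what builds the evidence base, and the trust, needed before any operational reliance.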
The Executive Takeaway
Hospitals do not lack dashboards. They lack early, trustworthy signals that allow those dashboards to inform decisions rather than document them after the fact.
Clinical Signal Infrastructure for Throughput reframes the problem. Instead of forcing coordination artifacts to work earlier than they safely can, it supplies an upstream signal layer grounded in clinical and disposition trajectory. This approach does not promise outcomes. It defines the infrastructure and measurement needed to earn them, and provides the foundation AI-driven tools require before they can deliver on their promise.
A full technical description and reproducible framework are available via Zenodo (DOI: 10.5281/zenodo.18029429).

Steve Biko Onyambu, MD, is a practicing intensivist at Abbott Northwestern Hospital in Minneapolis. He works at the intersection of clinical informatics, hospital operations, and capacity management, with a focus on translating patient trajectory into earlier, safer coordination signals. His work examines how deterministic, explainable signal infrastructure can support throughput, staffing, and discharge planning in complex inpatient environments.
