The following is a guest article by Nabila El-Bassel, Ph.D., DSW, Founding Director of the Social Intervention Group at the Columbia University School of Social Work.
What if a physician assumed her patient was healthy, just because he seldom came to the clinic?
Researchers uncovered serious flaws in an artificial intelligence (AI) tool used by a UnitedHealthcare unit: it consistently ranked Black patients as healthier than white patients with the same conditions, not because they were healthier, but because they incurred lower healthcare costs. The tool failed to recognize that lower spending was driven by barriers to healthcare access.
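To make that mechanism concrete, here is a minimal, hypothetical simulation of how ranking patients by healthcare cost can mark those who face access barriers as lower-risk even when their underlying need is identical. It is not the actual tool, model, or data from the study above; the groups, numbers, and variable names are purely illustrative.

```python
# Hypothetical sketch: cost used as a proxy for health need encodes access bias.
# All numbers are illustrative, not drawn from the study described above.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000

# True health need is drawn from the same distribution for both groups.
need = rng.poisson(lam=2.0, size=n).astype(float)

# Half of the patients face access barriers that suppress their spending.
barrier = rng.random(n) < 0.5
cost = need * 1_000.0
cost[barrier] *= 0.6                      # same need, lower realized cost
cost += rng.normal(0.0, 200.0, size=n)    # unrelated noise

# An algorithm that ranks "risk" by cost flags the top 25% for extra care.
flagged = cost >= np.percentile(cost, 75)

for mask, label in [(~barrier, "no access barrier"), (barrier, "access barrier")]:
    print(f"{label}: share flagged = {flagged[mask].mean():.1%}, "
          f"mean need among flagged = {need[mask & flagged].mean():.2f}")
# Patients facing barriers are flagged less often and must be sicker than
# others to clear the same cost threshold, mirroring the pattern described above.
```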
This is not an isolated finding; it is a warning about a larger problem affecting AI in healthcare. Unless AI is designed with meaningful patient and community input from the start, it risks excluding the most vulnerable people and replicating existing biases like the one above.
The Urgent Need for AI in Healthcare
Never has AI been more needed in healthcare, as Medicaid and other health programs are slashed, jeopardizing health coverage for more than 10 million of the most vulnerable Americans. Recently, the Trump Administration unveiled an AI Action Plan, but, per the Brookings Institution, it failed to include “mechanisms such as co-creation [and] participatory design…” to “serve citizens and humanity in fair, transparent, and accountable ways.”
I’ve spent more than three decades designing and testing global public health interventions and conducting research funded extensively by the National Institutes of Health. My expertise lies in working in close partnership with communities, including people with lived experience, throughout intervention design, implementation, and analysis, and in the publication and presentation of findings.
Why the Lack of Patient Input?
When I see how AI is developing without patient input, I’m concerned. Unfortunately, when it comes to AI, those most impacted are rarely invited to help shape the technologies deciding their futures. A 2024 scoping review of 10,880 articles describing AI or machine learning healthcare applications found that fewer than 0.2% included any form of community engagement. Over 99% of so-called health “innovations” were created without consulting the people most affected by them.
In contrast, traditional health technologies like medical devices involve patients in the design process roughly half the time. Devices such as insulin pumps and cardiac monitors must undergo rigorous FDA review, including clinical validation, user testing, and post-market monitoring. The pace of AI may have outstripped regulation, but that’s no excuse. If anything, its scale and reach demand more scrutiny.
My colleagues and I built a blueprint for community-engaged public health research. In our project to reduce overdose deaths, we developed communications campaigns with community partners who could address unique, location-specific factors. Without communities as true partners from the outset, AI risks replicating and even worsening inequities.
Woebot, a therapy chatbot launched in 2017 to improve mental health through conversation, was designed by clinical psychologists but without input from community members. Though a 2021 study reported promising results for Woebot at eight weeks, the majority of users were white women employed full-time, missing key demographics: under- and unemployed people and people of color facing structural barriers to care. This exclusion is particularly harmful when AI is deployed in settings already marked by deep health inequities. Because Woebot, like many chatbots, was trained on largely uniform data, its lack of cultural, racial, and socioeconomic nuance means it often misreads or ignores how distress is expressed across different backgrounds.
In addiction treatment, AI systems may flag missed appointments as noncompliance without recognizing barriers like caregiving responsibilities or a lack of transportation. Further, when existing data are scarce or distorted by those same disparities, bias can flourish in every new algorithm, affecting healthcare, social services, and drug treatment programs.
Barriers to Including Input
Researchers and designers may be wary of including community input in the design process for two main reasons. First, AI designers may not know how to bring their ideas to the community. In fact, a researcher recently asked me for best practices in compensating participants (a must) and where to find community co-designers. Where to look depends on the issue at hand.
Second, AI designers may fear potential ethical and confidentiality issues related to client or participant data. The public, too, may be wary of participating because of similar privacy concerns. Fortunately, frameworks to ensure these protections in AI already exist and continue to improve.
Best Practices to Include Input
The publication A Participatory Approach Towards AI for Social Good lays out principles to help AI researchers ensure that community-defined goals, values, and needs are met. Further, a model I developed for providers and researchers advocates ethical community engagement through every phase of AI design. The model includes targeted questions to help safeguard data confidentiality, include community voices, and align AI tools with community expectations. Researchers, designers, and potential consumers can use both frameworks to ensure equitable, effective, cost-effective, and safe AI design.
To further ensure AI is deployed for social good, academic institutions should support initiatives that model these efforts. At Columbia University, we launched the Artificial Intelligence for Social Good and Society Initiative to train a new generation of AI researchers across public health, social work, and data science. The research will be available to other universities and to anyone interested in equitable AI, and open calls for collaboration will include faculty outside Columbia.
Certainly, researchers and scientists face challenges in the wake of reduced or eliminated funding at academic institutions. In reaction to the aforementioned AI Action Plan, the Brookings Institution also stressed the need to fund research and development at institutions of higher education in order to retain a competitive advantage and ensure continued innovation for the public good. As federal funding for research shrinks, researchers will increasingly have to turn to industry grants, which may incentivize commercial rather than public interests.
If AI in healthcare and mental health is to live up to its potential, we must reject token engagement and embrace co-design and community-engaged research at every stage of development. From data collection and algorithm training to deployment and evaluation, lived experience is not a “nice to have” but essential to building equitable, effective, and trusted systems.

Dr. Nabila El-Bassel is a University Professor and the Willma and Albert Musher Professor of Social Work at Columbia University. She is an internationally recognized intervention scientist whose work spans HIV/AIDS prevention, substance use and addiction, gender-based violence, and health inequities affecting marginalized communities. She is the founding director of the Social Intervention Group, a leading interdisciplinary research center established in 1990 that develops and tests evidence-based interventions for HIV, substance use, and violence. Most recently, she launched the Artificial Intelligence for Social Good and Society Initiative and developed and published a model for providers and researchers advocating ethical community engagement through every phase of AI design.