“With Artificial Intelligence we now have the ability to erode trust at scale.”
Reggie Townsend, Vice President of Data Ethics at SAS, made that statement to a room full of clients, data analysts, AI experts, and media at the SAS Innovate conference. That attention-grabbing line was Townsend’s opening to an engaging keynote on the need for “Trustworthy AI” and the thoughtful incorporation of ethics into the AI conversation.
Healthcare IT Today had the opportunity to sit down one-on-one with Townsend to learn how SAS is infusing ethics and trustworthiness into their operations. We also wanted to hear more about the company’s pioneering AI work with Erasmus University Medical Center (Erasmus MC) and Delft University of Technology (TU Delft).
Trustworthy AI
Townsend’s provocative statement comes from his concern over the potential use of AI to spread misinformation – both intentionally and unintentionally – and how that can erode trust across all industries and throughout society. He used deep-fake technology (where AI is used to generate realistic-looking visuals of people saying or doing things they never actually said or did) as an example.
Not long ago, a deep-fake image of the Pope wearing a puffer jacket went viral. That image was clearly a joke, but it illustrated the power that AI-generated visuals can have on public perception.
“It doesn’t take too many leaps to think about what happens if we can’t trust what we see and hear,” said Townsend. “This is unsettling to me, and I think it is important that we recognize this and that we start to raise our overall understanding of AI.”
Townsend also strongly recommends that organizations become champions of “trustworthy AI” as a way to prevent this erosion of trust at scale.
“Trustworthy AI is central to us here at SAS,” stated Townsend. “We want to make sure that we are building a platform that is worthy of the trust of our customers so that our customers, in turn, can use our platform to build applications that are responsible.”
Making Ethics a Central Operating Pillar
To build a trustworthy AI platform, SAS has adopted several internal initiatives designed to help staff become more aware of and deal with ethical challenges when developing or deploying AI.
One of these initiatives was the establishment of a set of six principles of ethical AI use:
- Human centricity
- Transparency
- Inclusivity
- Privacy and Security
- Robustness
- Accountability
These principles help guide the product roadmap and the implementation of new features in the SAS platform.
SAS employees all receive training on these principles. That training is designed to help staff navigate through the “gray areas” that may come up during sales discussions and client engagements. These gray areas can be quite tricky – especially in healthcare.
For example, what is the right action to take if you learn that an AI algorithm a researcher has spent months developing was trained on a dataset that under-represents women – and yet the preliminary results show the algorithm has the potential to significantly improve patient outcomes?
The training helps staff work through these types of murky situations.
Trustworthy AI in Healthcare
SAS has established a collaborative partnership with Erasmus MC, one of Europe’s leading academic hospitals, and TU Delft, home of the TU Delft Digital Ethics Centre. Together, these three organizations created the Responsible and Ethical AI in Healthcare Lab (REAHL).
According to a SAS company statement, the REAHL aims to address the ethical concerns and challenges related to developing and implementing AI technologies in healthcare. This includes ensuring that AI systems are unbiased, transparent, and accountable, and used in ways that respect patients’ rights and values. The REAHL seeks to create a framework for ethical AI in healthcare that will serve as a model for medical centers and regions around the world.
“With REAHL, we have a multidisciplinary group of experts coming together to think about things related to medicine, digital ethics policy, and the use of AI,” said Townsend.
An example of the work emerging from REAHL is the use of AI to predict the length of stay that patients may need post-surgery. For a particular type of surgery, the national standard may be six days in the hospital followed by home monitoring. But what happens if a patient is healing at a faster rate and is able to go home after just two days? Now what if there was an AI algorithm that could identify this type of patient? Sending that patient home early would free up four bed days, and that extra capacity has a direct impact on patient access.
“Based on the early findings, we can say confidently that we have saved 250 patient days already,” said Townsend. “Not only does this have an effect on the patient, but think about it from an insurance perspective. Not having to pay for another 250 days is a pretty good thing. Even in a single payer system, where the government pays, this is a significant savings.”
Watch the interview with Townsend to learn:
- The challenge Townsend issued to the media with respect to AI news coverage
- Why completely removing bias from datasets that AI algorithms are trained on is impractical and how acknowledging and being transparent about that bias might be a better way forward
Learn more about SAS at https://www.sas.com/
Listen and subscribe to the Healthcare IT Today Interviews Podcast to hear all the latest insights from experts in healthcare IT.
And for an exclusive look at our top stories, subscribe to our newsletter.
Tell us what you think. Contact us here or on Twitter at @hcitoday. And if you’re interested in advertising with us, check out our various advertising packages and request our Media Kit.
SAS covered the expenses for Healthcare IT Today to travel to their conference.