With the rapid growth of healthcare AI, the algorithms themselves are often overlooked when it comes to delivering fair and equitable patient care. I recently attended a conference on responsible AI in healthcare hosted by the Center for Applied AI (CAAI) at the University of Chicago Booth School of Business. The conference brought together healthcare leaders from many parts of the business to discuss and find effective ways to reduce algorithmic bias in healthcare. It takes a range of stakeholder groups to identify AI biases and act on them to ensure equitable outcomes.

If you are reading this, you are probably already familiar with AI bias, which is a positive first step. If you’ve seen films like The Social Dilemma or Coded Bias, you’re off to a good start. If you’ve read articles and papers like Dr. Ziad Obermeyer’s study of racial bias in a healthcare algorithm, even better. What these resources explain is that, among other everyday digital interactions, algorithms play a key role in determining which movies we’re recommended, which social posts we see and which healthcare services are suggested to us. These algorithms often carry biases related to race, gender, socioeconomic status, sexual orientation, demographics and more. Interest in AI bias has grown significantly: between 2019 and 2021, for example, the number of data science papers on arXiv mentioning racial bias doubled.

We’ve seen interest from researchers and the media, but what can we actually do about it in healthcare? How do we put these principles into practice?

Before we apply these principles, let’s look at what happens if we don’t.

The impact of bias in healthcare

Consider, for example, a patient who has been dealing with various health problems for a long time. Their health system has a special program designed to provide early intervention to people at high risk of cardiovascular problems, and those enrolled in the program have seen excellent results. But this patient has never heard of it. Other sick patients were notified, yet despite being registered with the system, this patient was somehow left off the outreach list. Eventually, they end up in the emergency room, and their heart condition worsens.

This is the experience of being in the minority: invisible to whatever approach the health system is using. It doesn’t even have to involve AI. A common approach to cardiovascular outreach is to include only men aged 45 and older and women aged 55 and older. If you were excluded because you are a woman under the age cutoff, the result is the same.

How are we addressing it?

Chris Bevolo’s Joe Public 2030 is a 10-year look at the future of healthcare, informed by leaders at Mayo Clinic, Geisinger, Johns Hopkins Medicine and many others. Its outlook on healthcare inequity isn’t promising. On roughly 40% of quality measures, Black and Native American patients received worse care than white patients. Uninsured people received worse care on 62% of quality measures, and Hispanic and Black people had less access to insurance.

“We’re still dealing with the same issues we’ve been dealing with since the ’80s, and we haven’t solved them,” said Adam Brase, executive director of strategic intelligence at the Mayo Clinic. “Over the last 10 years, these have only grown as issues, which is increasingly worrying.”

Why data hasn’t solved the problem of bias in AI

No progress since the ’80s? But things have changed a lot since then. We are collecting huge amounts of data. And data never lies, does it? No, not quite. Remember that data isn’t just figures on a spreadsheet; it records instances of people seeking relief from pain and of our attempts to care for them better.

However much we poke and prod the spreadsheets, the data does what we tell it to do. The problem is what we ask of it. We may ask the data to help increase volume, grow services or reduce costs. But unless we explicitly ask it to address disparities in care, it won’t.

Attending the conference changed how I view bias in AI, and here’s why.

Addressing bias in algorithms and AI isn’t enough on its own. Tackling healthcare disparities requires commitment from the very top, which is why the conference brought together technologists, strategists, legal experts and others; this isn’t only a technology problem. So this is a call to use algorithms to help fight bias in healthcare. What does that look like?

A call to fight bias with algorithms

Let’s start by talking about when AI fails and when AI succeeds in organizations as a whole. MIT and Boston Consulting Group surveyed 2,500 executives who worked with AI projects. Overall, 70% of these executives said their projects had failed. What was the biggest difference between the 70% who failed and the 30% who succeeded?

The difference was whether the AI project supported an organizational goal. To make this clearer, here are some project ideas and whether they pass or fail.

  • Buy the most powerful natural language processing solution.

Fail. Natural language processing can be extremely powerful, but this goal lacks any context for how it will help the business.

  • Grow our primary care volume by intelligently reaching out to at-risk patients.

Pass. There is a goal that requires the technology, and that goal is tied to an overall business objective.

We understand the importance of defining a project’s business objectives, but what are both of these goals missing? Neither mentions addressing bias, disparities or social inequity. As healthcare leaders, our overarching goals are where we need to start.

Remember that successful projects start with organizational goals and then find AI solutions to support them. This gives you a place to start as a healthcare leader. The KPIs you define for your portfolios can include specific goals to increase access for the underserved. “Increase volume by x%,” for example, might well be paired with “increase volume by y% among underrepresented minority groups.”

How do you arrive at good metrics for goal setting? It starts with asking tough questions about your patient population. How does it compare with the makeup of the communities around you by race and gender? This is a great way to put numbers on the size of the healthcare gap that needs to be addressed; a minimal sketch of that kind of comparison follows below.
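
To make the comparison concrete, here is a minimal sketch in Python. It assumes a hypothetical patient-level table with self-reported race and an outreach flag, plus community shares drawn from something like census data; all column names and figures are placeholders, not a prescribed method.

```python
import pandas as pd

# Hypothetical patient-level data: one row per patient, with self-reported
# race and a flag for whether they received program outreach.
patients = pd.DataFrame({
    "race":     ["Black", "White", "White", "Hispanic", "Black", "White"],
    "outreach": [0, 1, 1, 0, 1, 1],  # 1 = contacted for the cardiovascular program
})

# Hypothetical community composition (e.g., from census data), as shares.
community_share = pd.Series({"Black": 0.30, "White": 0.50, "Hispanic": 0.20})

# Outreach rate within each group.
outreach_rate = patients.groupby("race")["outreach"].mean()

# Share of all outreach that went to each group, compared to the community.
outreach_share = (
    patients.loc[patients["outreach"] == 1, "race"]
    .value_counts(normalize=True)
    .reindex(community_share.index, fill_value=0.0)
)
gap = outreach_share - community_share  # negative values suggest underserved groups

print("Outreach rate by group:\n", outreach_rate, "\n")
print("Outreach share vs. community share:\n", gap)
```

The same grouping can be repeated for gender, payer status or any other attribute you collect; the point is simply to quantify who outreach is and isn’t reaching relative to the surrounding community.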

This top-down focus should include actions such as holding vendors and algorithm experts accountable for helping to achieve these goals. We also need to address who all of this is for: the patient, your community, your consumer, the people with the most to lose.

Innovation at the speed of trust

At the conference, Aneesh Chopra, Barack Obama’s former chief technology officer, addressed this directly: “Innovation can only come at the speed of trust.” That’s a big statement. Most of us in healthcare already ask for information about race and ethnicity. Many of us now ask for information about sexual orientation and gender identity.

Without these data points, bias is extremely difficult to address. Unfortunately, many people in underserved groups do not trust healthcare organizations enough to provide that information. I’ll be honest: for most of my life, that included me. I didn’t know why I was being asked for that information, what would be done with it, or whether it would be used to discriminate against me, so I declined to answer. I wasn’t alone. Look at how many patients at your own hospital have identified their gender and ethnicity; usually, one in four have not.

I talked to Beka Nissan, a behavioral scientist at ideas42, and it turns out there isn’t much scientific literature on how to address this. So this is my personal request: partner with your patients. If someone has experienced bias, it’s hard for them to see the upside in handing over the very details that people have used to discriminate against them.

Partnership is a relationship built on trust. Here are some of the steps:

  • Be worth partnering with. There must be a real commitment to fighting bias and personalizing healthcare; otherwise, asking for the data is pointless.
  • Tell us what you will do. Consumers are fed up with the junk and spam they get as a result of sharing their data. Level with them. Be transparent about how you use data. If it’s to personalize the experience or better address healthcare concerns, own it. We are tired of being surprised by algorithms.
  • Follow through. Trust isn’t really earned unless there is follow-through. Don’t let us down.

Conclusion

If you’re creating, launching or using responsible AI, it’s important to be around other people doing the same. Here are some best practices for projects or campaigns that affect people:

  • Have a diverse team. Teams that lack diversity often don’t ask whether the model is biased.
  • Collect the appropriate data. Without known values for race and ethnicity, gender, income, sexual orientation and other social determinants of health, there is no way to test for and control fairness.
  • Consider how a particular metric may hide bias. The use of healthcare spending as a proxy in the 2019 study mentioned above shows just how problematic a metric can be for certain populations.
  • Measure the potential for the target variable to introduce bias. As with any metric, label or variable, it is important to examine its impact and distribution across race, gender, sex and other factors.
  • Ensure that the methods used do not create bias against other populations. Teams should define a fairness metric that applies to all groups and test against it continuously (see the sketch after this list).
  • Set benchmarks and track progress. After the model launches and is in use, monitor it continuously for changes.
  • Get leadership support. You need leadership buy-in; this can’t rest on just one person or team.
  • “Responsible AI” is not the end. It’s not just about fairer algorithms; it should be part of a broader organizational commitment to fight bias as a whole.
  • Partner with patients. We should go deeper into how we partner with patients and involve them in the process. What can they tell us about how they want their data used?
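
As an illustration of the distribution and fairness checks above, here is a minimal sketch in Python. It assumes a hypothetical table of model-scored patients with self-reported race, a risk score and an outcome label; the column names and the outreach threshold are placeholders, not a prescribed fairness standard.

```python
import pandas as pd

# Hypothetical scored patients: self-reported race, a model risk score and an
# outcome label. Column names and the threshold below are placeholders.
scored = pd.DataFrame({
    "race":       ["Black", "White", "Hispanic", "White", "Black", "White"],
    "risk_score": [0.62, 0.81, 0.55, 0.90, 0.48, 0.77],
    "label":      [1, 1, 0, 1, 1, 0],  # e.g., a cardiac event within 12 months
})
THRESHOLD = 0.6  # placeholder cutoff for deciding who gets outreach

scored["selected"] = (scored["risk_score"] >= THRESHOLD).astype(int)

# 1) How is the outcome label itself distributed across groups?
label_rates = scored.groupby("race")["label"].mean()

# 2) How often does the model select each group for outreach?
selection_rates = scored.groupby("race")["selected"].mean()

# 3) A simple disparate-impact-style ratio: each group's selection rate
#    relative to the most-selected group (values well below 1 deserve review).
impact_ratio = selection_rates / selection_rates.max()

print("Label rate by group:\n", label_rates, "\n")
print("Selection rate by group:\n", selection_rates, "\n")
print("Selection ratio vs. most-selected group:\n", impact_ratio)
```

In practice, a dedicated fairness library or a formally chosen fairness definition would replace this ad hoc ratio; the essential habit is breaking every label, score and decision down by group and monitoring it over time.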

As someone who has chosen the field of data science, I am incredibly optimistic about the future and about the opportunity to make a real impact for healthcare consumers. We have a lot of work to do to ensure that impact is fair and reaches everyone, but I believe these conversations are putting us on the right track.

Chris Hemphill is VP of applied AI and growth at Actium Health.

