Patient Perspectives

What do patients think about artificial intelligence?

In “Patient apprehensions about the use of artificial intelligence [AI] in healthcare,” researchers from the Mayo Clinic in Rochester, Minnesota, report on what they learned from a series of focus groups held in late 2019 and early 2020. The goal of the research was “to understand how patients view the use of AI in healthcare.” By sharing what they heard, the authors hope to help AI developers and healthcare organizations maximize the value of AI and manage its implementation in ways that support patient safety and foster trust with the populations they serve.

In their introduction, the authors note that “…to date, there has been very little engagement with patients” whose health will be informed and affected by AI. The technology typically runs in the background, invisible to the people who will feel its effects and whose clinical data makes it possible. It is well known by now that AI can introduce bias and discrimination that, even when discovered, are not easy to remedy, and trust is already an issue (see suggested readings below). “As in other areas of medical innovation,” they write, “proactive patient engagement is an essential component of implementing healthcare AI in an ethical manner.”

The Mayo researchers are aware that, beyond the moral imperative to be open and transparent with patients about the role of AI, building trust is essential to the technology’s continued development. There have been periods of pessimism, known as “AI winters,” during which work on AI went dormant after its benefits had been overhyped and overpromised.

It is disappointing, as the authors acknowledge, that the focus groups lacked diversity. The 87 participants, recruited from a large pool of primary care patients at the Mayo Clinic in Minnesota, were 91% white and 94% non-Hispanic/Latino, had higher levels of education than are found in many communities, and nearly half were employed in healthcare. That said, this research represents at least a start on what should be a much larger, ongoing effort across the country to engage patients in the development and use of AI in healthcare.

The results are described in detail in the article, published in the open-access journal npj Digital Medicine, under six themes of patient concern:

  • Excitement about healthcare AI, tempered by the need for assurances about safety
  • Expectation that clinicians will ensure AI safety
  • Preservation of patient choice and autonomy
  • Concerns about healthcare costs and insurance coverage
  • Ensuring data integrity
  • Risks of technology-dependent systems

Suggested readings

Bias at warp speed: how AI may contribute to the disparities gap in the time of COVID-19

Bias in Artificial Intelligence

Dissecting racial bias in an algorithm used to manage the health of populations [firewall]

Who is making sure the A.I. machines aren’t racist?

 


Susan Carr is a medical editor and writer specializing in patient safety and engagement. In addition to curating the EngagingPatients blog, she produces publications for the Betsy Lehman Center in Boston and the Society to Improve Diagnosis in Medicine. Susan lives and works in Lunenburg, Massachusetts.


