Can AI Help Reduce Disparities in General Medical and Mental Health Care?

AMA J Ethics. 2019 Feb 1;21(2):E167-179. doi: 10.1001/amajethics.2019.167.

Abstract

Background: As machine learning becomes increasingly common in health care applications, concerns have been raised about bias in these systems' data, algorithms, and recommendations. Simply put, as health care improves for some, it might not improve for all.

Methods: Two case studies are examined in which a machine learning algorithm is applied to unstructured clinical and psychiatric notes to predict intensive care unit (ICU) mortality and 30-day psychiatric readmission; model performance is then assessed with respect to race, gender, and insurance payer type, the last serving as a proxy for socioeconomic status.
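
The abstract does not specify the model or feature representation, so the following is only a minimal sketch of the kind of pipeline the Methods describe: a text classifier trained on unstructured notes to predict a binary outcome such as ICU mortality. TF-IDF features with logistic regression are an illustrative stand-in, and the file and column names ("notes.csv", "note_text", "outcome", "race", "gender", "insurance") are hypothetical.

```python
# Sketch only: train a note-text classifier for a binary clinical outcome,
# keeping demographic columns alongside the test set for a later fairness audit.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical file: one row per stay, with note text, outcome, and demographics.
df = pd.read_csv("notes.csv")

train, test = train_test_split(
    df, test_size=0.2, random_state=0, stratify=df["outcome"]
)

# Bag-of-words stand-in for whatever representation the study actually used.
model = make_pipeline(
    TfidfVectorizer(max_features=5000, stop_words="english"),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
model.fit(train["note_text"], train["outcome"])
```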

Results: Clinical note topics and psychiatric note topics were heterogeneous with respect to race, gender, and insurance payer type, which reflects known clinical findings. Differences in prediction accuracy, and therefore machine bias, are shown with respect to gender and insurance type for ICU mortality and with respect to insurance type for psychiatric 30-day readmission.
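
Continuing the hypothetical sketch above, the subgroup comparison the Results report can be illustrated by evaluating the fitted model separately within each demographic group; a systematic accuracy or AUC gap between groups is the kind of disparate performance the study flags as machine bias.

```python
# Sketch only: audit per-group prediction performance on the held-out set.
from sklearn.metrics import accuracy_score, roc_auc_score

test = test.copy()
test["pred"] = model.predict(test["note_text"])
test["score"] = model.predict_proba(test["note_text"])[:, 1]

for attr in ["race", "gender", "insurance"]:
    print(f"--- {attr} ---")
    for group, sub in test.groupby(attr):
        acc = accuracy_score(sub["outcome"], sub["pred"])
        line = f"{group}: n={len(sub)}, accuracy={acc:.3f}"
        # AUC is only defined when a group contains both outcome classes.
        if sub["outcome"].nunique() > 1:
            line += f", AUC={roc_auc_score(sub['outcome'], sub['score']):.3f}"
        print(line)
```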

Conclusions: This analysis can provide a framework for assessing and identifying disparate impacts of artificial intelligence in health care.

Publication types

  • Comparative Study
  • Research Support, N.I.H., Extramural

MeSH terms

  • Adult
  • Aged
  • Aged, 80 and over
  • Artificial Intelligence*
  • Delivery of Health Care / organization & administration*
  • Delivery of Health Care / statistics & numerical data
  • Female
  • Healthcare Disparities / organization & administration*
  • Healthcare Disparities / statistics & numerical data*
  • Humans
  • Intensive Care Units / statistics & numerical data*
  • Male
  • Mental Health Services / organization & administration*
  • Mental Health Services / statistics & numerical data
  • Middle Aged
  • Mortality
  • Patient Readmission / statistics & numerical data*
  • Sex Factors