Artificial intelligence and health inequities in primary care: a systematic scoping review and framework
  1. Alexander d'Elia1,
  2. Mark Gabbay2,
  3. Sarah Rodgers1,
  4. Ciara Kierans1,
  5. Elisa Jones1,
  6. Irum Durrani3,
  7. Adele Thomas3 and
  8. Lucy Frith4
  1. Department of Public Health, Policy and Systems, University of Liverpool, Liverpool, UK
  2. Primary Care and Mental Health, University of Liverpool, Liverpool, UK
  3. ARC NWC, University of Liverpool, Liverpool, UK
  4. Centre for Social Ethics & Policy, The University of Manchester, Manchester, UK

  Correspondence to Dr Alexander d'Elia; adelia{at}


Objective Artificial intelligence (AI) will have a significant impact on healthcare over the coming decade. At the same time, health inequity remains one of healthcare's biggest challenges. Primary care is both a driver and a mitigator of health inequities, and with AI gaining traction in primary care, there is a need for a holistic understanding of how AI affects health inequities, both through the act of providing care and through potential system effects. This paper presents a systematic scoping review of the ways AI implementation in primary care may impact health inequity.

Design Following a systematic scoping review approach, we searched for literature related to AI, health inequity, and implementation challenges of AI in primary care. Articles identified through preliminary exploratory searches and through reference screening were also added.

The results were thematically summarised and used to produce both a narrative and a conceptual model of the mechanisms by which social determinants of health and AI in primary care could interact to either improve or worsen health inequities.

Two public advisors were involved in the review process.

Eligibility criteria Peer-reviewed publications and grey literature in English and Scandinavian languages.

Information sources PubMed, SCOPUS and JSTOR.

Results A total of 1529 publications were identified, of which 86 met the inclusion criteria. The findings were summarised under six domains, covering both positive and negative effects: (1) access, (2) trust, (3) dehumanisation, (4) agency for self-care, (5) algorithmic bias and (6) external effects. The first five domains cover aspects of the interface between the patient and the primary care system, while the last domain covers care system-wide and societal effects of AI in primary care. A graphical model has been produced to illustrate this. Community involvement throughout the whole process of designing and implementing AI in primary care was a common suggestion for mitigating the potential negative effects of AI.

Conclusion AI has the potential to affect health inequities in a multitude of ways, both directly in the patient consultation and through transformative system effects. This review summarises these effects from a systems perspective and provides a base for future research into responsible implementation.

  • Health Equity
  • General Practice
  • Healthcare Disparities

Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial.




  • Twitter @alexanddelia

  • Contributors Ad’E designed the review, conducted the searches, screened all articles, conducted the analysis, drafted the manuscript and acts as the guarantor for the overall content. MG, SR and CK assisted in designing the review and reviewing the manuscript. EJ coscreened 10% of the abstracts and 10% of the full-length articles. ID and AT provided feedback on the design as public advisors, and each coscreened 10% of the abstracts and 10% of the full-length articles. They also provided feedback on the analysis and the manuscript. LF assisted in designing the review as the primary PhD supervisor of the first author Ad’E, and assisted in reviewing the manuscript.

  • Funding This review was conducted as part of the PhD project 'Artificial Intelligence and Health Inequities in Primary Care', by Alexander d'Elia. The PhD project is funded by Applied Research Collaboration North West Coast (ARC NWC), in turn funded by the UK National Institute for Health Research (NIHR). The views expressed in this publication are those of the authors and not necessarily those of the NIHR.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.