Abstract
Objective Artificial intelligence (AI) will have a significant impact on healthcare over the coming decade. At the same time, health inequity remains one of the biggest challenges. Primary care is both a driver and a mitigator of health inequities, and with AI gaining traction in primary care, there is a need for a holistic understanding of how AI affects health inequities, both through the act of providing care and through potential system effects. This paper presents a systematic scoping review of the ways AI implementation in primary care may impact health inequity.
Design Following a systematic scoping review approach, we searched for literature related to AI, health inequity, and implementation challenges of AI in primary care. In addition, articles identified through the initial exploratory searches and through reference screening were added.
The results were thematically summarised and used to produce both a narrative and a conceptual model of the mechanisms by which social determinants of health and AI in primary care could interact to either improve or worsen health inequities.
Two public advisors were involved in the review process.
Eligibility criteria Peer-reviewed publications and grey literature in English and Scandinavian languages.
Information sources PubMed, Scopus and JSTOR.
Results A total of 1529 publications were identified, of which 86 met the inclusion criteria. The findings were summarised under six different domains, covering both positive and negative effects: (1) access, (2) trust, (3) dehumanisation, (4) agency for self-care, (5) algorithmic bias and (6) external effects. The first five domains cover aspects of the interface between the patient and the primary care system, while the last domain covers care-system-wide and societal effects of AI in primary care. A graphical model has been produced to illustrate this. Community involvement throughout the whole process of designing and implementing AI in primary care was a common suggestion for mitigating the potential negative effects of AI.
Conclusion AI has the potential to affect health inequities in a multitude of ways, both directly in the patient consultation and through transformative system effects. This review summarises these effects from a system-wide perspective and provides a base for future research into responsible implementation.
- Health Equity
- General Practice
- Healthcare Disparities
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
WHAT IS ALREADY KNOWN ON THIS TOPIC
There is a need for a comprehensive, holistic, conceptual framework of how the implementation of artificial intelligence (AI) can affect health inequity in primary care.
WHAT THIS STUDY ADDS
AI has the potential to affect health inequities in a multitude of ways, both directly in the patient consultation and through transformative system effects.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
This review summarises these effects from a system-wide perspective and provides a base for future research into responsible implementation.
Introduction
Artificial intelligence (AI) can be described as a computer system performing tasks typically requiring human intelligence. Everyday examples include predicting preferences in social media feeds and recognising faces in photos.1 It is a rapidly expanding field, and AI-augmented interventions are high on the agenda across healthcare, where current applications include interpreting X-rays and ECGs. Actual implementation of AI-augmented systems within healthcare remains low, but such systems are widely advocated as the future and feature prominently in strategic plans. Thus, AI systems of varying kinds are expected to be widely implemented across the healthcare system over the next decade, and primary care is no exception.2
At the same time, health inequities (HI) are increasingly being discussed, not least in the context of the ongoing COVID-19 pandemic.3 Through potentially freeing up resources and enabling more personalised care, AI has been described as an enabler of more equitable health and healthcare.2 However, AI interacts with socioeconomic, gender and ethnic HI on many different levels and could either increase or decrease inequities, depending on application and implementation.4 5
Primary care holds a unique role in tackling HI. Primary care can be both a source and a magnifier of inequities, as well as a platform for mitigation.6 For the purpose of this review, primary care is defined as primary care services provided to individual patients, not including wider public health policy.7 Primary care can be inaccessible to certain groups and thus worsen HI, but at the same time it is usually the first contact point for socioeconomically disadvantaged populations with either health or social needs. While theoretical access to primary care and clinical management has been shown to be relatively equal across groups, outcomes still differ, with more affluent patients of majority ethnicity enjoying better health.8 This is a consequence both of external factors causing poorer baseline health status and of differences in the effectiveness of the care given, owing to adherence to treatment and advice, economic barriers and so on: the social determinants of health (SDH).9 Consequently, as care need increases with deprivation, more primary care resource is needed to provide adequate care in disadvantaged areas and communities.10 To summarise, the role of primary care in reducing HI lies not just in addressing inequities within primary care, but in leveraging its unique position in society to mitigate underlying differences in health outcomes.10 This is reflected in the way AI could affect inequities both in and through primary care.
However, as this review shows, research on how AI may affect HI in primary care is limited, and is largely confined to either observations around accessibility or concerns over biased algorithms.
Applying a systematic scoping review approach, this article takes a holistic perspective to create a comprehensive model of how AI can affect HI, in and through primary care. As such, we intend it to serve as guidance for developing future research, regulations and policies surrounding AI, primary care and HI. This review assumes a predominantly publicly funded, general-access primary care system, such as the British National Health Service (hereafter NHS); however, certain mechanisms described may be applicable in other primary care systems as well.
As research into the practical implications of AI on healthcare provision is still relatively limited, our objectives were intentionally broad to capture as much of the field as possible. Thus, a scoping review was chosen as the appropriate methodology to meet our study aims. This allowed for an iterative strategy, with the objectives adjusted as the field was explored.11
Specifically, our review sought to answer the following questions (hereafter discussed as objectives):
What research currently exists on the effect of AI on primary care equity?
How does the evidence base match a provisional conceptual framework that we developed from our initial exploratory searches?
Through which methodologies has the topic of AI and primary care equity been studied?
How is the patient–doctor relationship assumed to be affected by an increased usage of AI in primary care, and what are the implications for primary care equity?
How can the implementation of AI in primary care affect wider population inequity?
Methods
This review was informed by the scoping review framework originally described by Arksey and O’Malley,12 and subsequent developments.11 13 As the searches in this review were conducted following a systematic approach, we chose to describe the methodology as a systematic scoping review, in line with previous guidance.13 The report was structured and written in accordance with the scoping-review reporting standards as set out by PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses).14
EndNote15 was used to manage the selection process, while Microsoft Excel16 was used for charting and extraction.
Public involvement
Two reimbursed public advisors (members of the public recruited through the National Institute for Health and Care Research Applied Research Collaboration North West Coast; NIHR ARC NWC) were involved in this review, both belonging to traditionally marginalised populations (one of British Asian ethnicity and one registered disabled and a member of the LGBT (Lesbian, Gay, Bisexual, and Transgender) community). They participated in proofreading and approving the protocol, assisted in selecting and extracting publications, and commented on the analysis and the findings. Public advisor involvement is intended to increase the relevance and clarity of the review and to offer a non-academic perspective and interpretation. Given the review’s focus on equity and inclusion, this was seen as particularly relevant.
Public involvement is reported throughout this text, following the GRIPP2 framework,17 and as a checklist (online supplemental annex 4).
A provisional conceptual framework
Having an initial conceptual framework of the topic is a useful tool to guide the review process.11 12 From the initial exploratory searches, which consisted of targeted internet searches and reading based on the experience of the authors, we constructed a provisional framework for how AI could affect healthcare equity in a primary care setting (online supplemental annex 1). We drew on work by the WHO Commission on SDH,9 Marmot et al,18 Dahlgren and Whitehead,19 and Veinot et al4 on how SDH may affect equity in and through primary care, and applied a layer of how AI may affect the various steps of the care process.
Demographic characteristics of patients (here on pragmatic grounds limited to socioeconomic status, gender and ethnicity) both give rise to baseline HI and affect the way the patient interacts with the healthcare system, through SDH.9 For the purpose of this review, we considered these effects through a model developed by Veinot et al,4 where HI in care provision arise from either access, uptake, adherence or effectiveness.
Effects of AI were, using this framework, divided into intrinsic effects arising from the AI itself (such as biased outputs) and extrinsic effects, covering potential effects on wider healthcare provision outside the direct implications of the algorithm (such as making access to care easier or harder for disadvantaged groups).
In addition, in our provisional framework, we acknowledged that the implementation of AI in care provision is likely to have complex, system-wide effects which in turn will affect the care system’s ability to mitigate HI.
Eligibility and inclusion
Initial searches indicated a distinct lack of robust primary empirical research (with a few notable exceptions), as well as little research conducted using secondary data (for example, data initially collected for direct care purposes) or reviews. Thus, we decided to widen the scope and include descriptive sources, such as discussion articles and policy documents, to seek empirical evidence and construct our model. For the primary objective (current state of research) and tertiary objective (impact of implementation), searches included all forms of healthcare to maximise yield, with selection of articles relevant to primary care taking place in the next step. For example, Obermeyer et al’s article on resource allocation for multimorbidity care20 does not cover primary care, but was included as the equity-related concepts are transferable to the primary care context.
AI was, for the purpose of this review, limited to clinical applications, following Shaw et al’s21 typology of AI in healthcare. This includes AI-driven decision support systems and automated healthcare (such as automated insulin dosing or autonomous advice given to patients without human involvement), but not operational applications such as planning patient flows or staffing needs. This covers both on-site and telehealth applications of AI, with the defining feature being AI-driven decision-making affecting patient care directly. Primary care was defined as primary care services provided to individual patients, not including wider public health policy, as per Muldoon et al.7 HI was defined widely as socioeconomic, gender or ethnic inequities in health outcomes, as outlined in the provisional conceptual framework and reflected in the search terms (online supplemental annex 2).
Searches were limited to the last 10 years (26 October 2011 to 26 October 2021), because AI was not being delivered in practice in primary care before that date.
We only considered publications in English and Scandinavian languages, due to the main author being bilingual in Swedish and English. Other languages were excluded due to resource limitations.
See table 1 for inclusion criteria.
Information sources
Electronic databases were searched using a set of keywords with varying syntax depending on the database, using MeSH (medical subject headings) terms where possible. Three major databases for medical and implementation research were searched: PubMed, Scopus and JSTOR. Grey literature in the form of reports and white papers by major governmental and non-governmental organisations was included. The complete search terms are listed in online supplemental annex 2.
To maximise the number of publications retrieved, we followed the systematic searches with secondary reference screening of the references of the included articles. The publications identified through this method were scanned for inclusion in the same way as the articles initially identified. Finally, articles found through the initial exploratory searches were included, and their references scanned for relevant literature.
Selection process
We conducted screening and selection in two stages: first, abstracts were screened and reasons for exclusion recorded. The remaining articles were read in their entirety and reasons for those then excluded were recorded.
Initial screening of the first 100 abstracts was conducted jointly with two public advisors, building a joint understanding of the selection criteria. These discussions clarified and simplified our criteria. The remaining titles and abstracts were primarily screened by the main author (Ad’E). Thirty per cent of the abstracts were double-screened by the two public advisors and a coauthor (EJ) (10% each). The same process was repeated for full-text screening. Disagreements were resolved through consensus, leaning towards inclusion.
Data extraction
We based the data charting form on the provisional framework and review objectives, and included six topics (table 2) (complete extraction table in online supplemental annex 3). Themes were based on the provisional framework (online supplemental annex 1), with a low threshold for introducing new themes. The main author was responsible for the data extraction at large and charted all included sources. In addition, the two public advisors together extracted 10% of the total yield, after which a meeting was held with the main author to discuss the process and the results, to improve the consistency of the extraction process.
Absence of critical appraisal
Given the wide scope, as well as the lack of a large body of original research on AI and HI, most results from the searches were non-empirical papers. Our objectives did not include an appraisal of the quality of evidence, which was appropriate given the scarcity of original research, and we did not give preference to specific types of sources; this is reflected in the narrative interpretation of the results.
Synthesis of results
For the primary objective, we summarised the charted data in relation to the provisional framework using thematic analysis, described among others by Levac et al.11 Themes were based on the provisional conceptual framework (online supplemental annex 1), which combined established theory on SDH9 19 (ie, a positivist sociological approach) and inequity in health technology,4 the latter building on the health-system-inequity model by Tanahashi.22 Following the thematic analysis model,11 the main author reviewed the charted data and analysed it against the themes of the provisional framework, keeping a low threshold for introducing new themes or modifying the framework. The result of the synthesis was discussed among all authors for clarity. The two public advisors were invited to comment on a draft of the review and contributed clarifications. The results are presented as a graphical model of a conceptual framework for how AI affects health equity in primary care, as well as a narrative description of the state of the research in the field and scope for future work. For the secondary (patient–doctor relationship) and tertiary (impact of implementation) objectives, data were summarised thematically for the respective objective, to inform how AI in primary care can be implemented as a force for good from a HI perspective.
Results
Selection of publications
We found 1504 publications in the initial searches. After exclusions, 164 publications were read in full, of which 67 fulfilled the inclusion criteria. A further 19 secondary references were identified from the reference lists of these 67 articles, of which 13 were included. Finally, we included six key publications found during the initial exploratory searches for completeness. See figure 1 for the PRISMA14 chart. Discussions with public advisors contributed to two additional inclusions.
Characteristics of publications
The most common type of publication (n=45) was discussion articles, followed by original research (n=18), reviews (n=17) and reports/policy documents (n=6). Of the original research sources, 11 reported on quantitative studies, while 7 used a qualitative methodology. Of the 17 reviews, 15 were narrative reviews and 2 were quantitative systematic reviews. The USA was the most common country of origin (n=40), followed by the UK (n=23) and Canada (n=9). Publications were all recent; publication years ranged from 2017 to 2021 (mean=2019.9, median=2020). As previously noted, searches were not limited to sources discussing primary care, but included sources discussing other kinds of healthcare covering concepts applicable to equity in primary care. Approximately half of the publications discussed healthcare at a general level (n=48), 20 discussed primary care (which was explicitly searched for) and 6 discussed psychiatry, followed by smaller topics with fewer papers. Five articles discussed AI at a general societal level (table 3).
Summary of findings
The themes were not necessarily discrete, and one specific concept may fit under several themes. For example, a lack of diverse representation in developing an AI system may lead to unintended inequities through (1) the absence of an equity lens during development, enabling an unfair problem formulation,20 and (2) unfair system effects external to the algorithm.4
The findings are summarised below, and in a graphical conceptual model of how AI could affect socioeconomic, ethnic and gender-based inequities in primary care (figure 2).
Objective 1: in what ways may AI affect HI in a primary care setting?
Algorithmic bias
Algorithmic bias was discussed in 59 publications. Biased outcomes stemming from the AI itself (in contrast to the AI system’s interaction with external factors) can broadly be grouped into two categories: unrepresentative datasets and underlying biases reflected in the datasets. Under-representation of various populations in the datasets used to train AI algorithms may result in less accurate outcomes for these groups, for example, ethnic minorities. The main concern relates to skewed outcomes when an AI is better fitted for one group than for another. Among others, Chen et al showed this in relation to intensive-care mortality prediction, which was more accurate for Caucasian men than for women and patients of minority ethnicities.23
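To make the under-representation mechanism concrete, the sketch below (in Python, using entirely synthetic data; the group proportions, features and effect sizes are invented for illustration and are not taken from any reviewed study) trains one model on a pooled dataset dominated by a majority group whose risk is driven by a different feature than the minority group’s, then audits discrimination separately per group:

```python
# Minimal synthetic sketch of dataset under-representation (illustrative only):
# the majority group's risk is driven by feature 0, the minority group's by
# feature 1. A single model trained on pooled data fits the majority best.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate(n, coef):
    """Draw features and outcomes from a group-specific logistic model."""
    x = rng.normal(size=(n, 2))
    p = 1.0 / (1.0 + np.exp(-(x @ np.asarray(coef))))
    return x, rng.binomial(1, p)

# Training data: 90% majority, 10% minority, with different risk drivers.
x_maj, y_maj = simulate(9000, [2.0, 0.0])
x_min, y_min = simulate(1000, [0.0, 2.0])
model = LogisticRegression().fit(
    np.vstack([x_maj, x_min]), np.concatenate([y_maj, y_min])
)

# Equity audit: evaluate discrimination separately for each group.
for name, coef in [("majority", [2.0, 0.0]), ("minority", [0.0, 2.0])]:
    x_te, y_te = simulate(5000, coef)
    auc = roc_auc_score(y_te, model.predict_proba(x_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")  # minority AUC is markedly lower
```

The point of the sketch is simply that a single overall accuracy figure hides the disparity; only the stratified, per-group evaluation reveals it.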
A fundamental concept was reiterated across the literature identified: SDH are present in society, and when a model is based on real-life data, it may reflect and potentially reinforce the effects of SDH (ie, HI). Examples include Obermeyer et al, who found that a widely used AI system for selecting multimorbid health insurance patients for extra resources (in order to prevent future deterioration and costly care) required African-American patients to be significantly more ill to access resources.20 The issue was not the quality of the dataset, but that the system developers used historical healthcare costs as a proxy for current morbidity. The authors showed that African-American patients use fewer care resources for the same morbidity, and the AI thus perceived them to be less ill than their Caucasian counterparts. Samorani et al24 described how ethnic minority patients are given worse time slots by automatic primary care booking systems due to higher rates of non-attendance, leading to even less attendance. Their study thus serves as an example of how biases could reinforce and perpetuate inequities already present in society.
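The proxy-label mechanism described by Obermeyer et al can likewise be sketched with synthetic numbers (the utilisation gap, thresholds and distributions below are invented assumptions, not their data): if historical cost is used as the label for ‘health need’, a group that accrues lower costs at equal need must be sicker to cross the same referral threshold.

```python
# Schematic sketch of proxy-label bias (synthetic data, illustrative only):
# cost stands in for morbidity as the training label, but one group generates
# lower costs at equal need, so a cost-based score under-ranks its patients.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
group = rng.integers(0, 2, n)            # 0 = advantaged, 1 = disadvantaged
need = rng.gamma(2.0, 1.0, n)            # true morbidity, same in both groups
access = np.where(group == 1, 0.6, 1.0)  # assumed lower utilisation at equal need
cost = need * access * rng.lognormal(0.0, 0.2, n)  # the observed proxy label

# An ideal cost-trained model would recover cost itself, so use cost directly
# as the risk score and refer the top 3% for extra resources.
threshold = np.quantile(cost, 0.97)
for g, label in [(0, "advantaged"), (1, "disadvantaged")]:
    referred = (group == g) & (cost >= threshold)
    print(f"{label}: mean true need among referred = {need[referred].mean():.2f}")
# The disadvantaged group's referred patients are substantially sicker:
# they had to be, to accrue the same cost as their advantaged counterparts.
```

As the sketch suggests, the locus of the problem is the choice of training label rather than the learning algorithm, which is why Obermeyer et al could mitigate the bias by changing what the model predicts.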
Increased access and the digital divide
Accessibility aspects were discussed in 21 publications. AI may increase access, acting as an enabler of more equal healthcare provision. However, increased access also brings a risk of the healthcare system being overwhelmed by the ‘worried well’.5 Fiske et al25 discussed the risk of creating a two-tier system, in which AI-augmented psychiatry removes the option of providing ‘human services’ in rural and underserved areas.
Conversely, the ‘digital divide’ was frequently discussed, not just regarding digital availability but also functional access. This was not only an issue of possessing the technology and infrastructure needed to interact with a digitalised care system, but also of having the skills to make full use of it, as well as access to a private room.26 27
Related to accessibility, Clark et al28 highlighted the opportunity of using AI to predict population-wide morbidity and identify the social determinants driving HI from a primary care and psychiatry perspective. Thus, AI could, in this application, help to address these factors and subsequently improve equity.
Trust of patients
Trust aspects were discussed in ten of the publications. A recurring theme was that historically discriminated-against groups may be less inclined to trust, and thus take advantage of, AI. Veinot et al argued that ethnic minorities are more sceptical of digital health interventions than the majority population,4 a conclusion shared by Marcus et al, who in their review stated that privacy and security issues are major causes of distrust in AI among minority ethnicities.29 Involving the affected communities in the development and implementation of AI tools was again held up as key to mitigating this, among others by Howard and Borenstein.30
In contrast, Bigman et al found that the tendency to prefer the AI over a human doctor increased with the patient’s perceived underlying societal inequity; African-Americans became more likely to prefer the AI compared with Caucasians when there was a higher perceived ‘background’ of inequity.31
Dehumanisation and biomedicalisation
Dehumanisation was discussed in 19 publications. As Coiera32 states, a more biomedicalised healthcare system may have adverse impacts on patients with complex needs, who are disproportionately from socioeconomically disadvantaged groups and of minority ethnicity. The only included empirical study on impacted populations was by Miller et al,33 who surveyed users of an AI-driven primary care triage chatbot. Older patients with comorbidities were less likely both to use and to appreciate the intervention, compared with young and healthy participants. Given that the prevalence of psychosocial morbidity is known to follow a socioeconomic gradient, it can be extrapolated that such developments would increase HI.8
However, Fiske et al25 hypothesise that such developments may have a beneficial effect on certain inequity issues relating to acceptability and perceived stigma, for example, concerning sexual health and psychiatric illness.
Agency for self-care
Four publications discussed patient agency and HI. HI may arise from an increased focus on patient-managed healthcare as a result of increased AI utilisation. Kerr and Klonoff discussed the issue in relation to diabetes management, where there is at present an established difference in attitudes and ability to self-care between socioeconomic groups.34 Such socioeconomic gaps may widen unless AI interventions are properly tailored to the populations in which they are deployed. This closely aligns with the established concept of downstream interventions being inherently inequitable.4
Objective 2: how is the patient–doctor relationship assumed to be affected by an increased usage of AI in primary care, and what are the implications for healthcare equity?
The topic was discussed in 13 sources. As aforementioned, AI may shift emphasis from social circumstances (including the wider social determinants of health) to measurable, objective observations. Such developments may worsen inequities, in particular with regard to morbidity related to psychosocial factors. Romero-Brufau et al35 conducted qualitative interviews with primary care staff before and after the implementation of an AI-driven diabetes support tool: the AI tool was perceived to give biomedically sound recommendations but to overlook psychosocial factors that may have led to suboptimal diabetes control in some patients, and was seen by the staff as not providing equitable care.
Surveying general practitioners (GPs) directly, Blease et al36 found that 94% of GPs believed that AI would be unable to replace GPs in roles requiring empathic ability, over any time scale, a perception shared by informaticians interviewed by the same team.37 Along the same lines, Holford38 and Powell39 claimed that an integral part of the role of the doctor is inevitably lost if the practice is translated into an algorithm. Using an anthropological perspective, Holford discussed this as the loss of deep knowledge, experience and intuition in the face of AI and technological progress. As tasks and jobs are broken up into simple standardised lists, implicit knowledge and intuition are inevitably lost, and these are currently impossible to replicate using AI. This would in turn affect those most in need of compassion and holistic care.
Objective 3: how can the implementation of AI affect inequity?
Implementation aspects were discussed in 47 publications.
Participatory approaches and community involvement
A lack of diverse participation and community involvement was identified as a risk factor for inequitable AI interventions in healthcare, both in the development and in the implementation of AI systems in the existing primary care system.20 40 Involvement of the target community throughout the whole implementation chain, from idea and problem formulation via data collection, datasets and the regulatory environment all the way through to implementation and end-users, was held to be key for equitable AI in general healthcare and primary care.
Alami et al41 and Clark et al42 argued that there is an urgent need to ‘mainstream’ a fundamental understanding of AI and its potential effects on healthcare and health equity among both clinicians and policy makers. This serves both to build trust and enable an understanding of when and how a specific AI intervention is suitable and what can be done to optimise the equity effects of its implementation. Holzmeyer43 emphasises a comprehensive equity analysis as the starting point of all system interventions in healthcare: what is the root cause behind what we are trying to address; what are the relevant SDH; what are the historical contexts; and to what extent do stakeholders agree on these issues?
Acceptance from care providers, loss of opportunity and equity
Failed implementation may affect inequities both through the loss of potentially equity-improving AI systems and through pushing new technology into uncontrolled consumer products such as smartphone apps, leaving the traditional health system unable to manage increased health anxiety and care-seeking.2 44 45 Williams et al45 created a framework specifically focused on ensuring sustainable AI implementation, emphasising the need to consider the system-wide external effects of new interventions.
Primary care clinicians may be too busy and lack the organisational resources to effectively adopt new technologies, risking poor uptake and leaving the field open to the commercial sector, which is more likely to cater to the ‘young and well’.40 45 46 Clinicians faced with an AI system perceived not to take SDH and personal circumstances into consideration may lose trust in AI technology at large and object to further implementation, as discussed by Romero-Brufau et al.35 Alternatively, resistance may occur if they perceive that an AI intervention is pushed on them ‘for the sake of it’ rather than to solve a specified problem, as noted by Shaw et al.21
Ferryman et al47 suggested that an overemphasis on agility and rapid change in the regulatory environment creates a risk of equity-adverse products being implemented in the healthcare system. The potential conflict between a fast-paced regulatory environment and a healthcare system inherently focused on safety and thorough evaluation was also highlighted in a recent report by NHSX (a digital innovation arm of the NHS)2 and by the WHO,48 among others. As discussed in the previous section, this may result in a loss of opportunity to improve healthcare equity, again ‘handing over the ball’ to the commercial sector.
Overconfidence in AI may displace other, more effective programmes for addressing HI, such as addressing the SDH directly and working with community groups.48 Such overconfidence is fuelled by the perception of AI as a novel, exciting and superior technology, promoted by the commercial companies developing the systems, and by a public ‘mythology’ around its superiority (as expressed by Keyes et al49).
In a wider context, upstream interventions, such as public health measures and direct action on SDH, have proven more effective in reducing inequities than downstream interventions, such as changes in care provision or new therapeutic options. As such, like any intervention without an explicit equity focus, AI interventions in primary care may be intrinsically inequitable.4
Discussion
Building on the themes identified above, the graphical conceptual model (figure 2) emphasises AI’s potential HI effects both inside and outside of the patient journey, with ‘outside the patient journey’ meaning mechanisms not directly related to how patients interact with the primary care system. This highlights the importance of a system-wide perspective and of mainstreaming the concept of HI throughout the development and implementation process.
While there was limited research connecting AI with the dehumanisation of primary care (a trend towards replacing clinicians with AI-augmented technology) and HI, a few assumptions can be made, in particular:
The role of primary care as a mitigator and improver of HI is dependent on primary care clinicians being able to contextualise the care provided, work ‘outside the box’ and attend to the social factors influencing patients’ health. This may involve recognising that a patient may not be able to stop smoking because she is currently worried about becoming homeless, or it may be necessary for a GP to deliver health-motivating messages adapted to the individual’s unique circumstances.
The prevalence of illnesses with a psychosocial component is heavily associated with low socioeconomic status,50 and to effectively support such patients requires understanding of, and the ability to deal with, the underlying causes. A purely biochemical approach to medicine is insufficient, particularly within more disadvantaged communities.
Taken together, there is a risk that such developments, if pursued without equity in mind, would unduly affect the healthcare of socioeconomically disadvantaged communities and thereby worsen HI.
The way AI is implemented is integral to how well it interacts with current systems and the societal context, and by extension how it affects HI. Multiple publications discussed the risk of AI-augmented interventions being directed towards the young, healthy and well-off, because the disruptive traits of AI enable commercial providers to expand beyond comparatively costly and complicated human clinicians, for example, via smartphone apps. A recent case is Babylon Health’s GP at Hand system, an AI-driven smartphone app that enables users to be triaged, diagnosed or forwarded to a clinician directly from their phone. Initially, GP at Hand explicitly blocked patients with complex health needs from registering. Babylon Health was consequently accused of ‘cherry-picking’ patients whom its AI could care for sufficiently, leaving complex patients to the traditional primary care centres, which in turn would see an increased workload while being drained of resources.27 While this was clearly a regulatory loophole that was subsequently addressed, it highlights the risk of AI being used to disrupt and commercialise the primary care system, and the inherent tendency to go after the ‘easy’, tech-savvy patients first.
Social participation in developing and implementing AI interventions was prominent in the publications, as a way of promoting locally appropriate adaptation. While specific methods were not discussed in detail in the reviewed publications, a recent ‘citizen’s jury’ on AI and explainability provides an example of how it could be done.51 A similar approach could also be used to ensure that regulatory frameworks for AI in healthcare align with the needs of the affected populations.
The need to ‘mainstream’ health equity throughout the whole implementation chain was a clear finding. Ensuring a system-wide basic understanding of SDH, HI and the role of primary care in addressing HI could help identify and avoid adverse effects.
Finally, there is clearly a need to look beyond the isolated clinical context when assessing the impact of AI in primary care on HI. Most of society’s HI occurs outside the primary care system as a consequence of SDH, and that is also where interventions to address inequities are bound to be most effective. Downstream interventions, such as clinical AI, tend by default to worsen inequities because more advantaged groups usually benefit the most. As Holzmeyer43 put it, the most important goal of AI in terms of HI is thus to do no harm, which by extension means it has to be explicitly and actively equity-promoting. More research is needed on the most effective ways to design and assess new interventions from such holistic perspectives. We suggest that a useful output of such research could be guidance in the form of considered steps or a framework that includes equity considerations, to prevent fundamental mistakes being made that inadvertently generate wider inequalities.
As outlined above, two public advisors made a significant contribution to the review, both through discussions on inclusion criteria and publication selection and through contributing an outside perspective.
The review set out to cover HI related to ethnicity, gender and socioeconomic status. Most included publications discuss HI generally, focusing on concepts applicable to various forms of HI. We recognise that while the fundamental mechanisms by which inequity occurs are shared across disadvantaged demographic groups, there is a further need to specifically study discrimination by specific characteristics, including wider ranges of marginalised populations.
Finally, available resources prevented us from conducting further secondary and tertiary reference screening, as well as more detailed searches with lower-level terminology, so there is a small risk that eligible articles were not included. Nine articles initially identified could not be retrieved, introducing a proportionally small risk of selection bias. Resource limitations also restricted the searches to English and the Scandinavian languages. Nonetheless, we are confident that this review provides a representative and largely comprehensive summary of the current state of research.
Conclusion
Using a systematic scoping review methodology, we have mapped the current research on AI and HI in the context of primary care, and synthesised the findings into a conceptual framework; a theory of change. At the centre of this framework is the graphical depiction (figure 2), which combines established research on SDH and HI with themes identified in the reviewed literature and provides a holistic overview of the mechanisms at play.
We highlight the complexity of assessing such a diverse concept as AI. While AI in primary care covers a wide array of current and potential applications, there are common traits inherent to AI as a technology. AI can be considered a core component of an ongoing paradigm shift in healthcare provision, perhaps most comparable to the rapid biomedical and pharmacological progress of the beginning and middle of the last century.
From the findings, we note that academics as well as the regulatory establishment are still finding their way around AI in healthcare. We identified a relative wealth of publications covering algorithmic bias, but in terms of original research, very few publications discussed the wider impact of AI on patient care and the primary care system at large. Given the intersectoral and dynamic nature of HI and SDH, a wider perspective is needed to properly assess the potential effect of widespread AI implementation in primary care. No intervention can be implemented in isolation, and the role of the surrounding society, organisational infrastructure and regulatory frameworks cannot be overstated. All aspects need to be considered to implement equitable AI in an environment conducive to improving equity.
Data availability statement
All data relevant to the study are included in the article or uploaded as supplementary information.
Ethics statements
Patient consent for publication
Footnotes
Twitter @alexanddelia
Contributors Ad’E designed the review, conducted the searches, screened all articles, conducted the analysis, drafted the manuscript and acts as the guarantor for the overall content. MG, SR and CK assisted in designing the review and reviewing the manuscript. EJ coscreened 10% of the abstracts and 10% of the full-length articles. ID and AT provided feedback on the design as public advisors, and each coscreened 10% of the abstracts and 10% of the full-length articles. They also provided feedback on the analysis and the manuscript. LF assisted in designing the review as the primary PhD supervisor of the first author Ad’E, and assisted in reviewing the manuscript.
Funding This review was conducted as part of the PhD project 'Artificial Intelligence and Health Inequities in Primary Care', by Alexander d'Elia. The PhD project is funded by Applied Research Collaboration North West Coast (ARC NWC), in turn funded by the UK National Institute for Health Research (NIHR). The views expressed in this publication are those of the authors and not necessarily those of the NIHR.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.