# Artificial intelligence and health inequities in primary care: a systematic scoping review and framework

Alexander d'Elia, Mark Gabbay, Sarah Rodgers, Ciara Kierans, Elisa Jones, Irum Durrani, Adele Thomas, Lucy Frith

## Abstract

**Objective** Artificial intelligence (AI) will have a significant impact on healthcare over the coming decade. At the same time, health inequity remains one of the biggest challenges. Primary care is both a driver and a mitigator of health inequities, and with AI gaining traction in primary care, there is a need for a holistic understanding of how AI affects health inequities, both through the act of providing care and through potential system effects. This paper presents a systematic scoping review of the ways AI implementation in primary care may impact health inequity.

**Design** Following a systematic scoping review approach, we searched for literature related to AI, health inequity, and implementation challenges of AI in primary care. In addition, articles identified through the initial exploratory searches and through reference screening were added. The results were thematically summarised and used to produce both a narrative account and a conceptual model of the mechanisms by which social determinants of health and AI in primary care could interact to either improve or worsen health inequities. Two public advisors were involved in the review process.

**Eligibility criteria** Peer-reviewed publications and grey literature in English and Scandinavian languages.

**Information sources** PubMed, Scopus and JSTOR.

**Results** A total of 1529 publications were identified, of which 86 met the inclusion criteria. The findings were summarised under six different domains, covering both positive and negative effects: (1) access, (2) trust, (3) dehumanisation, (4) agency for self-care, (5) algorithmic bias and (6) external effects. The first five domains cover aspects of the interface between the patient and the primary care system, while the last domain covers system-wide and societal effects of AI in primary care. A graphical model has been produced to illustrate this. Community involvement throughout the whole process of designing and implementing AI in primary care was a common suggestion for mitigating the potential negative effects of AI.

**Conclusion** AI has the potential to affect health inequities in a multitude of ways, both directly in the patient consultation and through transformative system effects. This review summarises these effects from a system-wide perspective and provides a basis for future research into responsible implementation.

* Health Equity
* General Practice
* Healthcare Disparities

#### WHAT IS ALREADY KNOWN ON THIS TOPIC

* There is a need for a comprehensive, holistic, conceptual framework of how the implementation of artificial intelligence (AI) can affect health inequity in primary care.

#### WHAT THIS STUDY ADDS

* AI has the potential to affect health inequities in a multitude of ways, both directly in the patient consultation and through transformative system effects.

#### HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

* This review summarises these effects from a system-wide perspective and provides a basis for future research into responsible implementation.

## Introduction

Artificial intelligence (AI) can be described as a computer system performing tasks typically requiring human intelligence.
Everyday examples include predicting preferences in social media feeds and recognising faces in photos.1 It is a rapidly expanding field, and AI-augmented interventions are high on the agenda across healthcare, where current applications include interpreting X-rays and ECGs. Actual implementation of AI-augmented systems within healthcare remains limited, but such systems are widely advocated as the future and feature prominently in strategic plans. Thus, AI systems of varying kinds are expected to be widely implemented across the healthcare system over the next decade, and primary care is no exception.2 At the same time, health inequities (HI) are being increasingly discussed, not least in the context of the ongoing COVID-19 pandemic.3 Through potentially freeing up resources and enabling more personalised care, AI is described as an enabler of more equitable health and healthcare.2 However, AI interacts with socioeconomic, gender and ethnic HI on many different levels and could either increase or decrease inequities, depending on application and implementation.4 5

Primary care holds a unique role in tackling HI. Primary care can be both a source and a magnifier of inequities, as well as a platform for mitigation.6 For the purpose of this review, primary care is defined as primary care services provided to individual patients, not including wider public health policy.7 Primary care can be inaccessible to certain groups and thus worsen HI, but at the same time it is usually the first contact point for socioeconomically disadvantaged populations with either health or social needs. While theoretical access to primary care and clinical management has been shown to be relatively equal across groups, outcomes still differ, with more affluent patients of majority ethnicity enjoying better health.8 This is a consequence both of external factors causing poorer baseline health status and of differences in the effectiveness of the care given, owing to adherence to treatment and advice, economic barriers and so on: the social determinants of health (SDH).9 Consequently, as care need increases with deprivation, more primary care resource is needed to provide adequate care in disadvantaged areas and communities.10 To summarise, the role of primary care in reducing HI is not just to address inequities within primary care, but to leverage its unique position in society to mitigate underlying differences in health outcomes.10 This is reflected in the way AI could affect inequities both in and through primary care. However, as this review shows, research on how AI may affect HI in primary care is limited, and is largely confined to either observations around accessibility or concerns over biased algorithms.

Applying a systematic scoping review methodology, this article takes a holistic approach to create a comprehensive model of how AI can affect HI, in and through primary care. As such, we intend it to serve as guidance for developing future research, regulations and policies surrounding AI, primary care and HI. This review assumes a predominantly publicly funded, general-access primary care system, such as the British National Health Service (hereafter NHS); however, certain mechanisms described may be applicable in other primary care systems as well.

As research into the practical implications of AI on healthcare provision is still relatively limited, our objectives were intentionally broad to capture as much of the field as possible. Thus, a scoping review was chosen as the appropriate methodology to meet our study aims.
This allowed for an iterative strategy, with the objectives adjusted as the field was explored.11 Specifically, our review sought to answer the following questions (hereafter discussed as objectives):

1. What research currently exists on the effect of AI on primary care equity?
   1. How does the evidence base match a provisional conceptual framework that we developed from our initial exploratory searches?
   2. Through which methodologies has the topic of AI and primary care equity been studied?
2. How is the patient–doctor relationship assumed to be affected by an increased usage of AI in primary care, and what are the implications for primary care equity?
3. How can the implementation of AI in primary care affect wider population inequity?

## Methods

This review was informed by the scoping review framework originally described by Arksey and O'Malley,12 and subsequent developments.11 13 As the searches in this review were conducted following a systematic approach, we chose to describe the methodology as a systematic scoping review, in line with previous guidance.13 The report was structured and written in accordance with the scoping-review reporting standards set out by PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses).14 EndNote15 was used to manage the selection process, while Microsoft Excel16 was used for charting and extraction.

### Public involvement

Two reimbursed public advisors (members of the public recruited through the National Institute for Health and Care Research Applied Research Collaboration North West Coast; NIHR ARC NWC) were involved in this review, both belonging to traditionally marginalised populations (one of British Asian ethnicity and one registered disabled and a member of the LGBT (Lesbian, Gay, Bisexual, and Transgender) community). They participated in proofreading and approving the protocol, assisted in selecting and extracting publications, and commented on the analysis and the findings. Public advisor involvement is intended to increase the relevance and clarity of the review, and to offer a non-academic perspective and interpretation. Given the review's focus on equity and inclusion, this was seen as particularly relevant. Public involvement is reported throughout this text, following the GRIPP2 framework,17 and as a checklist (online supplemental annex 4).

### Supplementary data

[fmch-2022-001670supp004.pdf]

### A provisional conceptual framework

Having an initial conceptual framework of the topic is a useful tool to guide the review process.11 12 From the initial exploratory searches, which consisted of targeted internet searches and reading based on the experience of the authors, we constructed a provisional framework for how AI could affect healthcare equity in a primary care setting (online supplemental annex 1). We drew on work by the WHO Commission on SDH,9 Marmot *et al*,18 Dahlgren and Whitehead,19 and Veinot *et al*4 on how SDH may affect equity in and through primary care, and applied a layer of how AI may affect the various steps of the care process.
### Supplementary data

[fmch-2022-001670supp001.pdf]

Demographic characteristics of patients (here, on pragmatic grounds, limited to socioeconomic status, gender and ethnicity) both give rise to baseline HI and affect the way the patient interacts with the healthcare system, through SDH.9 For the purpose of this review, we considered these effects through a model developed by Veinot *et al*,4 in which HI in care provision arise from access, uptake, adherence or effectiveness. Using this framework, effects of AI were divided into intrinsic effects, arising from the AI itself (such as biased outputs), and extrinsic effects, covering potential effects on wider healthcare provision outside the direct implications of the algorithm (such as making access to care easier or harder for disadvantaged groups). In addition, our provisional framework acknowledged that the implementation of AI in care provision is likely to have complex, system-wide effects, which in turn will affect the care system's ability to mitigate HI.

### Eligibility and inclusion

Initial searches indicated a distinct lack of robust primary empirical research (with a few notable exceptions), as well as little research conducted using secondary data, for example, data initially collected for direct care purposes, or reviews. Thus, we decided to widen the scope and include descriptive sources, such as discussion articles and policy documents, to seek empirical evidence and construct our model. For the primary objective (current state of research) and tertiary objective (impact of implementation), searches included all forms of healthcare to maximise yield, with selection of articles relevant to primary care taking place in the next step. For example, Obermeyer *et al*'s article on resource allocation for multimorbidity care20 does not cover primary care, but was included as the equity-related concepts are transferable to the primary care context.

AI was, for the purpose of this review, limited to clinical applications, following Shaw *et al*'s21 typology of AI in healthcare. This includes AI-driven decision support systems and automated healthcare (such as automated insulin dosing or autonomous advice given to patients without human involvement), but not operational applications such as planning patient flows or staffing needs. Both on-site and telehealth applications of AI are included, the defining feature being AI-driven decision-making affecting patient care directly. Primary care was defined as primary care services provided to individual patients, not including wider public health policy, as per Muldoon *et al*.7 HI was defined widely as socioeconomic, gender or ethnic inequities in health outcomes, as outlined in the provisional conceptual framework and reflected in the search terms (online supplemental annex 2).

### Supplementary data

[fmch-2022-001670supp002.pdf]

Searches were limited to the last 10 years (26 October 2011 to 26 October 2021), because AI was not being delivered in practice in primary care before that date. We only considered publications in English and Scandinavian languages, the main author being bilingual in Swedish and English. Other languages were excluded due to resource limitations. See table 1 for inclusion criteria.
[Table 1](http://fmch.bmj.com/content/10/Suppl_1/e001670/T1)

Table 1 Inclusion criteria

### Information sources

Electronic databases were searched using a set of keywords, with syntax varying by database and MeSH (medical subject headings) terms used where possible. Three major databases for medical and implementation research were searched: PubMed, Scopus and JSTOR. Grey literature in the form of reports and white papers by major governmental and non-governmental organisations was included. The complete search terms are listed in online supplemental annex 2. To maximise the number of publications retrieved, we followed the systematic searches with secondary screening of the references of the included articles. The publications identified through this method were scanned for inclusion in the same way as the articles initially identified. Finally, articles found through the initial exploratory searches were included, and their references scanned for relevant literature.

### Selection process

We conducted screening and selection in two stages: first, abstracts were screened and reasons for exclusion recorded; second, the remaining articles were read in their entirety and reasons for any further exclusions recorded. Initial screening of the first 100 abstracts was conducted jointly with the two public advisors, building a shared understanding of the selection criteria. These discussions clarified and simplified our criteria. The remaining titles and abstracts were primarily screened by the main author (Ad'E). Thirty per cent of the abstracts were double screened by the two public advisors and a coauthor (EJ) (10% each). The same process was repeated for full-text screening. Disagreements were resolved through consensus, leaning towards inclusion.

### Data extraction

We based the data charting form on the provisional framework and review objectives, and included six topics (table 2) (complete extraction table in online supplemental annex 3). Themes were based on the provisional framework (online supplemental annex 1), with a low threshold for introducing new themes. The main author was responsible for the data extraction at large and charted all included sources. In addition, the two public advisors together extracted 10% of the total yield, after which a meeting was held with the main author to discuss the process and the results, to improve the consistency of the extraction process.

### Supplementary data

[fmch-2022-001670supp003.pdf]

[Table 2](http://fmch.bmj.com/content/10/Suppl_1/e001670/T2)

Table 2 Data charting

### Absence of critical appraisal

Given the wide scope, as well as the lack of a large body of original research on AI and HI, most results from the searches were non-empirical papers. Our objectives did not include an appraisal of the quality of evidence, which was appropriate given the lack of original research, and we did not give preference to specific types of sources; this is reflected in the narrative interpretation of the results.
### Synthesis of results

For the primary objective, we summarised the charted data using thematic analysis, as described by, among others, Levac *et al*.11 Themes were based on the provisional conceptual framework (online supplemental annex 1), which combined established theory on SDH9 19 (ie, a positivist sociological approach) and on inequity in health technology,4 the latter building on the health-system-inequity model by Tanahashi.22 Following the thematic analysis model,11 the main author reviewed the charted data and analysed it against the themes of the provisional framework, keeping a low threshold for introducing new themes or modifying the framework. The result of the synthesis was discussed among all authors for clarity. The two public advisors were invited to comment on a draft of the review and contributed clarifications. The results are presented as a graphical model of a conceptual framework for how AI affects health equity in primary care, as well as a narrative description of the state of the research in the field and the scope for future work. For the secondary (patient–doctor relationship) and tertiary (impact of implementation) objectives, data were summarised thematically for each objective, to inform how AI in primary care can be implemented as a force for good from an HI perspective.

## Results

### Selection of publications

We found 1504 publications in the initial searches. After exclusions, 164 publications were read in full, of which 67 fulfilled the inclusion criteria. A further 19 secondary references were identified from the reference lists of these articles, of which 13 were included. Finally, we included six key publications found during the initial exploratory searches for completeness. See figure 1 for the PRISMA14 chart. Discussions with public advisors contributed to two additional inclusions.

[Figure 1](http://fmch.bmj.com/content/10/Suppl_1/e001670/F1)

Figure 1 PRISMA chart of search and selection process. *If not included in database searches. **Criteria 1: Discussing artificial intelligence interventions in healthcare with an explicit focus on equity, either in or applicable to primary care (objective 1), AI in primary care provision (objective 2) or practical implementation of AI in a system and the subsequent role of the infrastructures, organisational processes and personnel involved (objective 3). All records retrieved met criteria 3: Published by either a peer-reviewed journal or by a major governmental or non-governmental organisation. AI, artificial intelligence; PRISMA, preferred reporting items for systematic reviews and meta-analyses.

### Characteristics of publications

The most common type of publication was discussion articles (n=45), followed by original research (n=18), reviews (n=17) and reports/policy documents (n=6). Of the original research sources, eleven reported on quantitative studies, while seven used a qualitative methodology. Of the 17 reviews, 15 were narrative reviews and 2 were quantitative systematic reviews. The USA was the most common country of origin (n=40), followed by the UK (n=23) and Canada (n=9). Publications were all recent; the publication years ranged from 2017 to 2021 (mean=2019.9, median=2020). As previously noted, searches were not limited to sources discussing primary care, but included sources discussing other kinds of healthcare covering concepts applicable to equity in primary care.
Approximately half of the publications discussed healthcare on a general level (n=48), 20 discussed primary care (which was explicitly searched for) and 6 discussed psychiatry, followed by other topics with fewer papers each. Five articles discussed AI at a general societal level (table 3).

[Table 3](http://fmch.bmj.com/content/10/Suppl_1/e001670/T3)

Table 3 Characteristics of publications

### Summary of findings

The themes were not necessarily discrete, and one specific concept may fit under several themes. For example, a lack of diverse representation in developing an AI system may lead to unintended inequities through (1) a lack of an equity lens during development, enabling an unfair problem formulation,20 and (2) unfair system effects external to the algorithm.4 The findings are summarised below, and in a graphical conceptual model of how AI could affect socioeconomic, ethnic and gender-based inequities in primary care (figure 2).

[Figure 2](http://fmch.bmj.com/content/10/Suppl_1/e001670/F2)

Figure 2 A conceptual framework for how AI could affect inequities in primary care. AI, artificial intelligence.

### Objective 1: in what ways may AI affect HI in a primary care setting?

#### Algorithmic bias

Algorithmic bias was discussed in 59 publications. Biased outcomes stemming from the AI itself (in contrast to the AI system's interaction with external factors) broadly fall into two categories: unrepresentative datasets and underlying biases reflected in the datasets. Under-representation of various populations in the datasets used to train AI algorithms may result in less accurate outcomes for these groups, for example, ethnic minorities. The main concerns relate to skewed outcomes when an AI is better fitted for one group than for another. Among others, Chen *et al* showed this in relation to intensive-care-mortality prediction, which was more accurate for Caucasian men than for women and patients of minority ethnicities.23

A fundamental concept was reiterated across the literature identified: SDH are present in society, and when a model is based on real-life data, it may reflect and potentially reinforce the effect of SDH (ie, HI). Examples include Obermeyer *et al*, who found that a widely used AI system for selecting multimorbid health insurance patients for extra resources (in order to prevent future deterioration and costly care) required African-American patients to be significantly more ill to access the resources.20 The issue was not the quality of the dataset, but that the system developers used historical healthcare costs as a proxy for current morbidity. The authors showed that African-American patients use fewer care resources for the same morbidity, and the AI thus perceived them to be less ill than their Caucasian counterparts. Samorani *et al*24 described how ethnic minority patients are given worse time slots by automatic primary care booking systems due to higher rates of non-attendance, leading to even lower attendance. Their study thus serves as an example of how biases could reinforce and perpetuate inequities already present in society.
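This label-choice mechanism can be made concrete with a small simulation. The following is a minimal, hypothetical sketch (all parameters are invented for illustration and are not drawn from the reviewed studies): two groups have identical underlying morbidity, but one incurs lower healthcare costs for the same illness, so an algorithm that ranks patients by predicted cost, however accurate its predictions, enrols members of that group only at higher levels of morbidity.

```python
# Hypothetical illustration of the label-choice bias described by
# Obermeyer et al (reference 20): using historical cost as a proxy
# label for morbidity disadvantages a group that, for the same
# morbidity, generates lower costs. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical underlying morbidity distributions.
group = rng.integers(0, 2, n)  # 0 = majority, 1 = minority
morbidity = rng.gamma(shape=2.0, scale=1.0, size=n)

# Assumed access barrier: the minority group incurs ~30% lower cost
# for the same level of morbidity (the core of the proxy problem).
cost_per_morbidity = np.where(group == 1, 0.7, 1.0)
cost = morbidity * cost_per_morbidity * rng.lognormal(0.0, 0.2, n)

# The 'algorithm': rank patients by (here, perfectly) predicted cost
# and enrol the top 3% into the extra-resources programme.
threshold = np.quantile(cost, 0.97)
enrolled = cost >= threshold

# Compare how ill enrolled patients are in each group: the minority
# group must be sicker to clear the same cost threshold.
for g, label in [(0, "majority"), (1, "minority")]:
    mean_morbidity = morbidity[enrolled & (group == g)].mean()
    print(f"{label}: mean morbidity among enrolled = {mean_morbidity:.2f}")
```

Even with a perfectly accurate cost predictor, the simulated minority group is enrolled only at a markedly higher mean morbidity, reproducing the qualitative finding: the bias sits in the choice of label, not in the model's accuracy.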
#### Increased access and the digital divide

Accessibility aspects were discussed in 21 publications. AI may lead to increased access, acting as an enabler of more equal healthcare provision. However, increased access also brings a risk of the healthcare system being overwhelmed by the 'worried well'.5 Fiske *et al*25 discussed the risk of creating a two-tier system, in which AI-augmented psychiatry forecloses the option of providing 'human services' in rural and underserved areas. Conversely, the 'digital divide' was frequently discussed, covering not just digital availability but also functional access. This is not only an issue of possessing the technology and infrastructure needed to interact with a digitalised care system, but also of having the skills to make full use of it, as well as access to a private room.26 27 Related to accessibility, Clark *et al*28 highlighted the opportunity of using AI to predict population-wide morbidity and identify the social determinants driving HI from a primary care and psychiatry perspective. Thus, AI could, in this application, help to address these factors and subsequently improve equity.

#### Trust of patients

Trust aspects were discussed in ten of the publications. A recurring theme was that historically discriminated-against groups may be less inclined to trust, and thus take advantage of, AI. Veinot *et al* argued that ethnic minorities are more sceptical of digital health interventions than the majority population,4 a conclusion shared by Marcus *et al*, who, in their review, stated that privacy and security issues are major causes of distrust in AI among minority ethnicities.29 Involving the affected communities in the development and implementation of AI tools was again held up as key to mitigating this, by Howard and Borenstein30 among others. In contrast, Bigman *et al* found that the tendency to prefer AI over a human doctor increased with the patient's perceived underlying societal inequity; African-Americans became more likely than Caucasians to prefer the AI when there was a higher perceived 'background' of inequity.31

#### Dehumanisation and biomedicalisation

Dehumanisation was discussed in 19 publications. As Coiera32 states, a more biomedicalised healthcare system may have adverse impacts on patients with complex needs, who are disproportionately from socioeconomically disadvantaged groups and of minority ethnicity. The only included empirical study on impacted populations was by Miller *et al*,33 who surveyed users of an AI-driven primary care triage chatbot. Older patients with comorbidities were less likely both to use and to appreciate the intervention, compared with young and healthy participants. Given that the prevalence of psychosocial morbidity is known to follow a socioeconomic gradient, it can be extrapolated that such developments would increase HI.8 However, Fiske *et al*25 hypothesise that such developments may have a beneficial effect on certain inequity issues relating to acceptability and perceived stigma, for example, concerning sexual health and psychiatric illness.

#### Agency for self-care

Four publications discussed patient agency and HI. HI may arise from an increased focus on patient-managed healthcare as a result of increased AI utilisation. Kerr and Klonoff discussed the issue in relation to diabetes management, where there are at present established differences between socioeconomic groups in attitudes to, and ability for, self-care.34 Such socioeconomic gaps may widen unless AI interventions are properly tailored to the populations in which they are deployed.
This closely aligns with the established concept of downstream interventions being inherently inequitable.4

### Objective 2: how is the patient–doctor relationship assumed to be affected by an increased usage of AI in primary care, and what are the implications for healthcare equity?

The topic was discussed in 13 sources. As noted above, AI may shift emphasis from social circumstances (including wider social determinants of health) to measurable, objective observations. Such developments may worsen inequities, in particular with regard to morbidity related to psychosocial factors. Romero-Brufau *et al*35 conducted qualitative interviews with primary care staff before and after the implementation of an AI-driven diabetes support tool; staff perceived the AI tool to give biomedically sound recommendations but to overlook psychosocial factors that may have led to suboptimal diabetes control in some patients, and considered that it did not provide equitable care. Surveying general practitioners (GPs) directly, Blease *et al*36 found that 94% of GPs believed that AI would be unable to replace GPs in roles requiring empathic ability, over any time scale, a perception shared by informaticians interviewed by the same team.37 Along the same lines, Holford38 and Powell39 claimed that an integral part of the role of the doctor inevitably gets lost if the practice is translated into an algorithm. Using an anthropological perspective, Holford discussed this as the loss of deep knowledge, experience and intuition in relation to AI and technological progress. As tasks and jobs are broken up into simple standardised lists, implicit knowledge and intuition are inevitably lost, and these are currently impossible to replicate using AI. This would subsequently affect those most in need of compassion and holistic care.

### Objective 3: how can the implementation of AI affect inequity?

Implementation aspects were discussed in 47 publications.

#### Participatory approaches and community involvement

A lack of diverse participation and community involvement was identified as a risk factor for inequitable AI interventions in healthcare, both in the development and in the implementation of AI systems in the existing primary care system.20 40 Involvement of the target community throughout the whole implementation chain, from idea and problem formulation, via data collection, datasets and the regulatory environment, all the way through to implementation and end-users, was held as key to equitable AI in general healthcare and primary care. Alami *et al*41 and Clark *et al*42 argued that there is an urgent need to 'mainstream' a fundamental understanding of AI and its potential effects on healthcare and health equity among both clinicians and policy makers. This serves both to build trust and to enable an understanding of when and how a specific AI intervention is suitable and what can be done to optimise the equity effects of its implementation. Holzmeyer43 emphasises a comprehensive equity analysis as the starting point of all system interventions in healthcare: what is the root cause behind what we are trying to address; what are the relevant SDH; what are the historical contexts; and to what extent do stakeholders agree on these issues?
#### Acceptance from care providers, loss of opportunity and equity

Failed implementation may affect inequities both through the loss of potentially equity-improving AI systems, and through pushing new technology towards uncontrolled consumer products such as smartphone apps, leaving the traditional health system unable to manage increased health anxiety and care-seeking.2 44 45 Williams45 created a framework specifically focused on ensuring sustainable AI implementation, emphasising the need to consider the system-wide external effects of new interventions. Primary care clinicians may be too busy and lack the organisational resources to effectively adopt new technologies, risking poor uptake and leaving the field open to the commercial sector, which is more likely to cater to the 'young and well'.40 45 46 Clinicians faced with an AI system perceived not to take SDH and personal circumstances into consideration may lose trust in AI technology at large, and object to further implementation, as discussed by Romero-Brufau *et al*.35 Alternatively, resistance may occur if clinicians perceive that an AI intervention is pushed on them 'for the sake of it' rather than to solve a specified problem, as noted by Shaw *et al*.21

Ferryman47 suggested that an overemphasis on agility and rapid change in the regulatory environment creates a risk of equity-adverse products being implemented in the healthcare system. The potential conflict of interest between a fast-paced regulatory environment and a healthcare system inherently focused on safety and thorough evaluation was also highlighted in a recent report by NHSX (a digital innovation arm of the NHS)2 and by the WHO,48 among others. As discussed in the previous section, this may result in a loss of opportunity to improve healthcare equity, again 'handing over the ball' to the commercial sector.

Overconfidence in AI, fuelled by the perception of AI as a novel, exciting and superior technology delivered by the commercial companies developing the systems, as well as by a public 'mythology' around its superiority (as expressed by Keyes *et al*49), may displace other, more effective programmes for addressing HI, such as addressing the SDH directly and working with community groups.48 In a wider context, upstream interventions such as public health measures and direct action on SDH have been shown to be more effective in reducing inequities than downstream interventions, such as changes in care provision or new therapeutic options. As such, like any intervention without an explicit equity focus, AI interventions in primary care may be intrinsically inequitable.4

## Discussion

Building on the themes identified above, the graphical conceptual model (figure 2) emphasises AI's potential HI effects both inside and outside of the patient journey; here, 'outside the patient journey' means mechanisms not directly related to how patients interact with the primary care system. This highlights the importance of a system-wide perspective, and of mainstreaming the concept of HI throughout the development and implementation process.

While there was limited research connecting HI with AI and the dehumanisation of primary care (a trend towards replacing clinicians with AI-augmented technology), a few assumptions can be made. In particular, the role of primary care as a mitigator and improver of HI is dependent on primary care clinicians being able to contextualise the care provided, work 'outside the box' and attend to the social factors influencing patients' health.
This may involve recognising that a patient may not be able to stop smoking because she is currently worried about becoming homeless, or it may mean that a GP needs to deliver health-motivating messages adapted to the individual's unique circumstances. The prevalence of illnesses with a psychosocial component is heavily associated with low socioeconomic status,50 and effectively supporting such patients requires understanding of, and the ability to deal with, the underlying causes. A purely biomedical approach to medicine is insufficient, particularly within more disadvantaged communities. Consequently, there is a risk that such developments, if pursued without equity in mind, would unduly affect the healthcare of socioeconomically disadvantaged communities, and thereby worsen HI.

The way AI is implemented is integral to how well it interacts with current systems and the societal context, and by extension how it affects HI. Multiple publications discussed the risk of AI-augmented interventions being directed towards the young, healthy and well-off. This is because the disruptive traits of AI enable commercial providers to expand beyond comparatively costly and complicated human clinicians, for example, via smartphone apps. A recent case is Babylon Health's GP at Hand system, an AI-driven smartphone app that enables users to be triaged, diagnosed or forwarded to a clinician directly from their phone. Initially, GP at Hand explicitly blocked patients with complex health needs from registering with the service. Babylon Health was consequently accused of 'cherry-picking' the patients for whom its AI could sufficiently care, leaving complex patients to the traditional primary care centres, which in turn would see an increased workload while being drained of resources.27 While this clearly was a regulatory loophole that was subsequently addressed, it highlights the risk of AI being used to disrupt and commercialise the primary care system, and the inherent tendency to go after the 'easy', tech-savvy patients first.

Social participation in developing and implementing AI interventions was prominent in the publications, as a way of promoting locally appropriate adaptation. While specific methods were not discussed in detail in the reviewed publications, a recent 'citizens' jury' on AI and explainability provides an example of how it could be done.51 A similar approach could also be used to ensure that regulatory frameworks for AI in healthcare align with the needs of the affected populations. The need to 'mainstream' health equity throughout the whole implementation chain was a clear finding. Ensuring a system-wide basic understanding of SDH, HI and the role of primary care in addressing HI could help identify and avoid adverse effects.

Finally, there is clearly a need to look outside of the isolated clinical context in assessing the impact of AI in primary care on HI. Most of society's HI occurs outside of the primary care system as a consequence of SDH, and that is also where interventions to address inequities are bound to be most effective. Downstream interventions, such as clinical AI, by default tend to worsen inequities because more advantaged groups usually benefit the most. As Holzmeyer43 put it, the most important goal of AI in terms of HI is thus to do no bad, which by extension means it has to be explicitly and actively equity-promoting. More research is needed on the most effective ways to both design and assess new interventions from such holistic perspectives.
We suggest that a useful output of such research could be guidance in the form of considered steps, or a framework that includes equity considerations, to prevent fundamental mistakes being made that inadvertently generate wider inequalities.

As outlined above, two public advisors made a significant contribution to the review, both through discussions on inclusion criteria and publication selection and by contributing an outside perspective.

The review set out to cover HI related to ethnicity, gender and socioeconomic status. Most included publications discuss HI generally, focusing on concepts applicable to various forms of HI. We recognise that while the fundamental mechanisms by which inequity occurs are shared across disadvantaged demographic groups, there is a further need to specifically study discrimination by specific characteristics, including a wider range of marginalised populations. Finally, available resources limited us in conducting further secondary and tertiary reference screening, as well as more detailed searches with lower-level terminology, so there is a small risk that eligible articles were not included. Nine articles initially identified could not be retrieved, introducing a risk of selection bias, although a proportionally small one. Resource limitations also limited the searches to English and Scandinavian languages. Nonetheless, we are confident that this review provides a representative and largely comprehensive summary of the current state of research.

## Conclusion

Using a systematic scoping review methodology, we have mapped the current research on AI and HI in the context of primary care, and synthesised the findings into a conceptual framework: a theory of change. At the centre of this framework is the graphical depiction (figure 2), which combines established research on SDH and HI with themes identified in the reviewed literature, and provides a holistic overview of the mechanisms at play. We highlight the complexity of assessing such a diverse concept as AI. While AI in primary care covers a wide array of current and potential applications, there are common traits inherent to AI as a technology. AI can be considered a core component of an ongoing paradigm shift in healthcare provision, perhaps most comparable to the rapid biomedical and pharmacological progress of the beginning and middle of the last century.

From the findings, we note that academics as well as the regulatory establishment are still finding their way around AI in healthcare. We identified a relative wealth of publications covering algorithmic bias, but in terms of original research, very few publications discussed the wider impact of AI on patient care and the primary care system at large. Given the intersectoral and dynamic nature of HI and SDH, a wider perspective is needed to properly assess the potential effect of widespread AI implementation in primary care. No intervention can be implemented in isolation, and the role of the surrounding society, organisational infrastructure and regulatory frameworks cannot be overstated. All of these aspects need to be considered to implement equitable AI in an environment conducive to improving equity.

## Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information.

## Ethics statements

### Patient consent for publication

Not applicable.
## Footnotes

* Twitter @alexanddelia
* Contributors Ad'E designed the review, conducted the searches, screened all articles, conducted the analysis, drafted the manuscript and acts as the guarantor for the overall content. MG, SR and CK assisted in designing the review and reviewing the manuscript. EJ coscreened 10% of the abstracts and 10% of the full-length articles. ID and AT provided feedback on the design as public advisors, and each coscreened 10% of the abstracts and 10% of the full-length articles. They also provided feedback on the analysis and the manuscript. LF assisted in designing the review as the primary PhD supervisor of the first author Ad'E, and assisted in reviewing the manuscript.
* Funding This review was conducted as part of the PhD project 'Artificial Intelligence and Health Inequities in Primary Care', by Alexander d'Elia. The PhD project is funded by the Applied Research Collaboration North West Coast (ARC NWC), in turn funded by the UK National Institute for Health Research (NIHR). The views expressed in this publication are those of the authors and not necessarily those of the NIHR.
* Competing interests None declared.
* Provenance and peer review Not commissioned; externally peer reviewed.
* Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
* © Author(s) (or their employer(s)) 2022. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ. This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: [http://creativecommons.org/licenses/by-nc/4.0/](http://creativecommons.org/licenses/by-nc/4.0/).

## References

1. Russell S, Norvig P. Artificial intelligence: a modern approach. Upper Saddle River, NJ: Prentice Hall; 2020.
2. Joshi I, Morley J. Artificial intelligence: how to get it right. Putting policy into practice for safe data-driven innovation in health and care. NHSX; 2019.
3. Bambra C, Riordan R, Ford J, et al. The COVID-19 pandemic and health inequalities. J Epidemiol Community Health 2020;74:964–8. doi:10.1136/jech-2020-214401
4. Veinot TC, Mitchell H, Ancker JS. Good intentions are not enough: how informatics interventions can worsen inequality. J Am Med Inform Assoc 2018;25:1080–8. doi:10.1093/jamia/ocy052
5. Academy of Medical Royal Colleges. Artificial intelligence in healthcare. Academy of Medical Royal Colleges; 2018.
6. Popay J, Kowarzik U, Mallinson S, et al. Social problems, primary care and pathways to help and support: addressing health inequalities at the individual level. Part I: the GP perspective. J Epidemiol Community Health 2007;61:966–71. doi:10.1136/jech.2007.061937
7. Muldoon LK, Hogg WE, Levitt M. Primary care (PC) and primary health care (PHC). Can J Public Health 2006;97:409–11. doi:10.1007/BF03405354
8. Lueckmann SL, Hoebel J, Roick J, et al. Socioeconomic inequalities in primary-care and specialist physician visits: a systematic review. Int J Equity Health 2021;20:1–19. doi:10.1186/s12939-020-01375-1
9. WHO. A conceptual framework for action on the social determinants of health; 2010.
10. Hutt P, Gilmour S. Tackling inequalities in general practice. London: The King's Fund; 2010:1–37.
11. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci 2010;5:69. doi:10.1186/1748-5908-5-69
12. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005;8:19–32. doi:10.1080/1364557032000119616
13. Peters MDJ, Godfrey CM, Khalil H, et al. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc 2015;13:141–6. doi:10.1097/XEB.0000000000000050
14. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018;169:467–73. doi:10.7326/M18-0850
15. The EndNote Team. EndNote X9. Philadelphia, PA: Clarivate; 2013.
16. Microsoft Corporation. Microsoft Excel. Version 16.0; 2018.
17. Staniszewska S, Brett J, Simera I, et al. GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research. BMJ 2017;358:j3453. doi:10.1136/bmj.j3453
18. Marmot M. Social determinants of health inequalities. Lancet 2005;365:1099–104. doi:10.1016/S0140-6736(05)71146-6
19. Dahlgren G, Whitehead M. Policies and strategies to promote social equity in health; 1991.
20. Obermeyer Z, Powers B, Vogeli C, et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019;366:447–53. doi:10.1126/science.aax2342
21. Shaw J, Rudzicz F, Jamieson T, et al. Artificial intelligence and the implementation challenge. J Med Internet Res 2019;21:e13659. doi:10.2196/13659
22. Tanahashi T. Health service coverage and its evaluation. Bull World Health Organ 1978;56:295–303.
23. Chen IY, Szolovits P, Ghassemi M. Can AI help reduce disparities in general medical and mental health care? AMA J Ethics 2019;21:167–79. doi:10.1001/amajethics.2019.167
24. Samorani M, Harris SL, Blount LG, et al. Overbooked and overlooked: machine learning and racial bias in medical appointment scheduling. Manufacturing & Service Operations Management 2021.
25. Fiske A, Henningsen P, Buyx A. Your robot therapist will see you now: ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. J Med Internet Res 2019;21:e13216. doi:10.2196/13216
26. Boers SN, Jongsma KR, Lucivero F, et al. Series: eHealth in primary care. Part 2: exploring the ethical implications of its application in primary care practice. Eur J Gen Pract 2020;26:26–32. doi:10.1080/13814788.2019.1678958
27. McCartney M. Margaret McCartney: general practice can't just exclude sick people. BMJ 2017;359:j5190. doi:10.1136/bmj.j5190
28. Clark CR, Ommerborn MJ, Moran K, et al. Predicting self-rated health across the life course: health equity insights from machine learning models. J Gen Intern Med 2021;36:1181–8. doi:10.1007/s11606-020-06438-1
29. Marcus JL, Sewell WC, Balzer LB, et al. Artificial intelligence and machine learning for HIV prevention: emerging approaches to ending the epidemic. Curr HIV/AIDS Rep 2020;17:171–9. doi:10.1007/s11904-020-00490-6
30. Howard A, Borenstein J. The ugly truth about ourselves and our robot creations: the problem of bias and social inequity. Sci Eng Ethics 2018;24:1521–36. doi:10.1007/s11948-017-9975-2
31. Bigman YE, Yam KC, Marciano D, et al. Threat of racial and economic inequality increases preference for algorithm decision-making. Comput Human Behav 2021;122:106859. doi:10.1016/j.chb.2021.106859
32. Coiera E. The price of artificial intelligence. Yearb Med Inform 2019;28:14–15. doi:10.1055/s-0039-1677892
33. Miller S, Gilbert S, Virani V, et al. Patients' utilization and perception of an artificial intelligence-based symptom assessment and advice technology in a British primary care waiting room: exploratory pilot study. JMIR Hum Factors 2020;7:e19713. doi:10.2196/19713
34. Kerr D, Klonoff DC. Digital diabetes data and artificial intelligence: a time for humility not hubris. J Diabetes Sci Technol 2019;13:123–7. doi:10.1177/1932296818796508
35. Romero-Brufau S, Wyatt KD, Boyum P, et al. A lesson in implementation: a pre-post study of providers' experience with artificial intelligence-based clinical decision support. Int J Med Inform 2020;137:104072. doi:10.1016/j.ijmedinf.2019.104072
36. Blease C, Bernstein MH, Gaab J, et al. Computerization and the future of primary care: a survey of general practitioners in the UK. PLoS One 2018;13:e0207418. doi:10.1371/journal.pone.0207418
37. Blease C, Kharko A, Locher C, et al. US primary care in 2029: a Delphi survey on the impact of machine learning. PLoS One 2020;15:e0239947. doi:10.1371/journal.pone.0239947
38. Holford WD. The repression of mètis within digital organizations. Prometheus 2020;36:253–76. doi:10.13169/prometheus.36.3.0253
39. Powell J. Trust me, I'm a chatbot: how artificial intelligence in health care fails the Turing test. J Med Internet Res 2019;21:e16222. doi:10.2196/16222
40. Leslie D, Mazumder A, Peppin A, et al. Does "AI" stand for augmenting inequality in the era of covid-19 healthcare? BMJ 2021;372:n304. doi:10.1136/bmj.n304
41. Alami H, Rivard L, Lehoux P, et al. Artificial intelligence in health care: laying the foundation for responsible, sustainable, and inclusive innovation in low- and middle-income countries. Global Health 2020;16:52. doi:10.1186/s12992-020-00584-1
42. Clark CR, Wilkins CH, Rodriguez JA, et al. Health care equity in the use of advanced analytics and artificial intelligence technologies in primary care. J Gen Intern Med 2021;36:3188–93. doi:10.1007/s11606-021-06846-x
43. Holzmeyer C. Beyond 'AI for Social Good' (AI4SG): social transformations—not tech-fixes—for health equity. Interdiscip Sci Rev 2021;46:94–125. doi:10.1080/03080188.2020.1840221
44. Moreau JT, Baillet S, Dudley RW. Biased intelligence: on the subjectivity of digital objectivity. BMJ Health Care Inform 2020;27. doi:10.1136/bmjhci-2020-100146
45. Williams C. A health rights impact assessment guide for artificial intelligence projects. Health Hum Rights 2020;22:55.
46. Blease C, Kaptchuk TJ, Bernstein MH, et al. Artificial intelligence and the future of primary care: exploratory qualitative study of UK general practitioners' views. J Med Internet Res 2019;21:e12802. doi:10.2196/12802
47. Ferryman K. Addressing health disparities in the Food and Drug Administration's artificial intelligence and machine learning regulatory framework. J Am Med Inform Assoc 2020;27:2016–9. doi:10.1093/jamia/ocaa133
48. WHO. Ethics and governance of artificial intelligence for health: WHO guidance; 2021.
49. Keyes O, Hitzig Z, Blell M. Truth from the machine: artificial intelligence and the materialization of identity. Interdiscip Sci Rev 2021;46:158–75. doi:10.1080/03080188.2020.1840224
50. Mercer SW, Watt GCM. The inverse care law: clinical primary care encounters in deprived and affluent areas of Scotland. Ann Fam Med 2007;5:503–10. doi:10.1370/afm.778
51. Campbell SM. Artificial intelligence (AI) & explainability: Citizens' Juries report. NIHR; 2019.