Artificial intelligence in health care: laying the foundation for responsible, sustainable, and inclusive innovation in low- and middle-income countries

Abstract

The World Health Organization and other institutions are considering Artificial Intelligence (AI) as a technology that can potentially address some health system gaps, especially the reduction of global health inequalities in low- and middle-income countries (LMICs). However, because most AI-based health applications are developed and implemented in high-income countries, their use in LMIC contexts is recent and there is a lack of robust local evaluations to guide decision-making in low-resource settings. After discussing the potential benefits as well as the risks and challenges raised by AI-based health care, we propose five building blocks to guide the development and implementation of more responsible, sustainable, and inclusive AI health care technologies in LMICs.

Introduction

“By failing to prepare, you are preparing to fail”.

(credited to Benjamin Franklin)

In the context of a global epidemiologic transition towards chronic non-communicable diseases (e.g., cancers, cardiovascular diseases, diabetes) [1, 2], approximately half of the world’s population lacks access to basic health care and roughly 100 million people are impoverished as a result of health care spending [3, 4]. In addition, by 2035, the World Health Organization (WHO) anticipates a shortage of nearly 12.9 million health care workers worldwide [5]. To address these challenges, the WHO highlighted in its various drafts and reports the importance of digital technologies to help increase universal access to affordable person- and community-centred care and services [6, 7].

In this regard, artificial intelligence (AI) is seen as a technology that can potentially contribute to the reduction of global health inequalities [6, 8]. AI is defined as “the imitation of human cognition by computers: reasoning, learning, adaptation, self-correction, sensory understanding, and interaction” [9, 10]. Given that most AI-based health applications are developed by, and implemented in, high-income countries, their use in low- and middle-income countries (LMICs) is very recent [8]. However, the significant need for health care services in countries with limited resources, along with recent developments in the AI field, points toward rapid upcoming changes [6, 8, 11]. For instance, while Africa bears 25% of the global burden of disease and is home to only 3% of the world’s health care workers [12, 13], more than 700 million smartphone connections are expected on the continent in 2020 [14].

In this paper, we share critical observations and reflections based on our various roles and experience as researchers and experts in digital health and AI, global health, health services and policy research, medicine, public health, health technology assessment, and responsible innovation in health. Some of the authors have worked in the field of international development, collaborated with international organizations (e.g., WHO), and/or were born and trained in an LMIC (Mali, Morocco, Niger). The paper adopts a systemic critical perspective with the main objective of stimulating and encouraging debate and further research that address AI as an object of transformation in LMICs’ health systems, not simply as a discrete technical device that can be applied to solve one health problem at a time.

In the following sections, this article discusses both the potential benefits of AI-based health applications in LMICs as well as the risks and challenges that need to be considered and further explored to make way for a more responsible, sustainable, and inclusive use of AI. Readers should keep in mind that several variations are at play both within and across LMICs. While we cannot do justice to these variations, we provide an overview of generic issues that are likely to prove more or less salient in several countries, depending on contextual factors affecting population health needs, human resources, infrastructures, urban/rural divide, etc.

Potential benefits of AI-based health applications in LMICs

Proponents of the development and implementation of AI-based health applications in LMICs list many potential benefits and advantages, mainly improving the performance of health systems while reducing costs [4, 15]. For example, such applications could reduce the costs of screening and treatment plan selection for pathologies requiring expensive equipment and specialized expertise unavailable in most hospitals in LMICs, particularly in rural and isolated areas [8, 15,16,17,18,19]. Indeed, when available in local settings, new digital technologies, including AI, can facilitate the development of affordable, better quality, and accessible innovations, while working within resource-constrained environments [20]. By leveraging software platforms, these technologies could reduce the need for hardware and the associated high investments (e.g., digital apps can “measure body temperature or eye deficiencies, instead of using thermometers or expensive eye measurement apparatus”) [21].

Furthermore, through interactive and private communication, the use of AI chatbots or virtual avatars and characters (e.g., photorealistic virtual representations of clinicians) could help populations suffering from stigmatizing pathologies (e.g., HIV/AIDS, psychiatric pathologies) access care and follow-up services in a timely manner (e.g., advice, recommendations, referrals) [22, 23]. By adapting to local cultures and languages, AI automated translation solutions could also improve access to and use of services, as well as compliance with treatments, in areas where culture or language represent barriers to health care [24].

From an epidemiological perspective, AI could also help predict the spread of pathologies or vulnerability within certain groups or communities, and thus allow for more effective interventions [1, 8, 11]. For example, weather conditions and land use patterns associated with dengue transmission can be identified, while social networks can be exploited to detect infectious disease outbreaks. Finally, AI offers significant potential for maternal and child health, which is one of the major public health issues in LMICs (e.g., pregnancy monitoring, prediction of birth asphyxia, mother and/or child malnutrition) [1, 4, 8].

Potential risks and challenges of AI-based health applications in LMICs

Because AI-based health applications are recent in LMICs, there are few robust and contextualized evaluations that can guide informed decision-making in these contexts [1, 8]. As such, there is a significant risk of unintentional adverse consequences [25]. Though their use in health care services remains poorly documented, we can identify several major risks and challenges that are specific to LMICs and that are worth considering carefully.

First, to be effective, the training of AI-based health applications requires large amounts of high-quality data, and such data is currently unavailable or very difficult to collect in LMICs [4, 8]. Consequently, there is an important risk of developing biased (e.g., gender, ethnicity, age) or defective (“garbage in, garbage out”) AI solutions, or of using AI solutions that were trained in contexts that differ greatly from the local population [1, 4]. Because the majority of AI-based health applications are trained using data from high-income countries, they may unknowingly be prejudicial or discriminatory towards LMIC populations [11]. Although algorithms are thought to be objective, Faraj et al. (2018) underscore that algorithms are “political by design” in that they are imbued with the values, choices, beliefs, and norms of their developers and of those who assemble the datasets [26]. As such, technology is a “political artifact” with political and social consequences [26]. For example, an AI solution trained with data biased towards over-diagnosis of schizophrenia in African-Americans could have detrimental consequences if used in some sub-Saharan African populations, as it could produce incomplete or erroneous diagnoses [27]. Moreover, a medical error generated by an AI application could affect a large number of people at once, whereas the error of an individual clinician traditionally affects only a small number of patients.

Second, one may wonder how the expertise and infrastructures needed to create appropriate governance models will be developed in LMICs to guide the use of AI health care technologies [11]; their absence may negatively affect data management, the quality and safety of the technologies, and the overall functioning of fragile health systems. The collection, use, storage, and sharing of both individual and population-based data raise important questions in terms of consent, ownership, and access [28]. If not properly governed and managed, this data could be used to persecute or marginalize particular individuals, groups, or communities in relation to, for instance, gender, ethnicity, socio-economic group, pathology, or sexual orientation. While some AI technologies can infer ethnicity, which is relevant in certain clinical cases [29], this function could also be used for racial profiling or discrimination [30]. This is particularly relevant as communities in developing countries are highly heterogeneous, even within a single region of a country [31]. Thus, reflexivity by AI developers regarding biases, along with privacy and security in data stewardship, is vital [8].

In terms of quality and safety of AI-based health applications, the lack of governance may enable companies to commercialize solutions in LMICs that would not obtain regulatory approval in high-income countries. Some could argue that because access to health care can be difficult in certain LMICs, quality and safety requirements should not create obstacles, which may justify a new kind of “medicine for the poor” [32, 33]. Subsequently, an “it’s good enough for them” logic could become the source of new large-scale public health problems [34, 35]. It should be recalled that 70 to 90% of the medical equipment donated to LMICs fails or does not work as expected because of breakdowns (e.g., broken fuses, discharged batteries, lack of spare parts), the lack of user manuals, or the lack of appropriate training for local staff [36,37,38]. In this regard, the question of who will maintain and update AI-based health applications, and with what resources, becomes crucial, especially if there is a policy gap [1, 8]. Given how large AI expenses and investments can be, some countries may not be able to adopt these technologies beyond the pilot phase. Furthermore, because informal medical care is widely practiced in some LMICs, non-compliant AI applications could spread easily. Since part of the population may consider themselves “lucky” or “privileged” to access some form of health services, they may be unable to challenge or express concerns about the quality and safety of the services provided to them.

In addition to the fragility of their health systems, some LMICs face the challenge of having to implement and coordinate health care delivered by, or overseen by, international development agencies and non-governmental organizations (NGOs). These agencies and organizations usually support particular vertical programs, for instance, malaria control, HIV/AIDS, or maternal health. Hence, their propensity to use AI-based health applications in silos and without a comprehensive vision of other health needs risks further fragmenting and disrupting already fragile health systems, especially by paying little attention to other serious and urgent problems [39]. Failure to consider the realities of local health systems could weaken well-functioning professional, organizational, and community dynamics and practices [40]. For example, the use of AI-based health applications could medicalize certain problems that may be more effectively addressed through poverty reduction, health education, and promotion and prevention programs. Consequently, there is also the risk that budgets for AI might divert resources from overall health and social spending [41]. An increasing dependence on AI may also lead to an erosion of clinical skills, critical thinking skills, and local practice skills, such as community health practices [15, 32]. There is therefore a risk of new problems emerging from an over-reliance on AI.

Third, although AI diagnostic tools have the potential to reach and screen rural and isolated populations who lack access to medical experts, their use in such contexts can also raise significant challenges in terms of providing adequate follow-up care, especially when individuals lack the means to travel to and obtain specialized care in large urban centers. AI diagnostic tools developed in high-income countries may recommend treatment plans (e.g., medication, surgery) that are not locally accessible or only available at prohibitive costs in other countries [1]. Obtaining a diagnosis without proper follow-up care may negatively affect quality of life, or even lead to stigmatization within families and communities. This runs counter to the “do no harm” and “test and treat” principles, which are foundations of medical practice [8], and accentuates the symbolic North-South divide.

Finally, the use of AI-based health applications in LMICs can involve additional risks for vulnerable and marginalized populations. For instance, mental health AI solutions that can detect psychological traits and patterns could also be used for the interrogation of prisoners or members of certain minorities or dissident communities [24]. In addition, in contexts where men exert control over women’s access to and use of information, technology, and care services, receiving a diagnosis delivered by an AI application on a smartphone could endanger a woman’s safety, especially if the aim of the AI application is to deliver a potentially stigmatizing diagnosis through private communication. In this case, a feature meant to protect the privacy of vulnerable populations suffering from stigmatizing pathologies may instead be used to reproduce symbolic and structural violence, raising questions about the human right to security.

In summary, despite numerous advantages of their use, the potential risks and challenges of AI-based applications in LMICs indicate that much work remains to be done before health systems, health professionals, and patients may benefit from these technological advances. This is also the case in several so-called “developed” countries. In line with these considerations, we offer, below, reflections around five building blocks that need to be developed to foster a more responsible, sustainable, and inclusive development and use of AI in LMICs.

Developing responsible, sustainable, and inclusive AI for health care in LMICs

Innovation in LMICs, despite variations within and across national boundaries, faces realities and challenges in terms of infrastructure and capital that differ from those of high-income countries [42]. The dynamics of innovation systems cannot be fully understood and addressed without carefully examining technological appropriation processes, which may vary according to the context [42]. This view requires moving away from the historical trend whereby “accredited” experts dominate the process of building and interpreting technology, leading to “rhetorical closure” [43]. Indeed, the lack of an inclusive dialogue about these innovations limits the possibilities of gaining a detailed understanding of the scientific, technological, social, and cultural issues innovation raises in LMICs and among historically marginalized communities and groups [44]. AI is no exception to this trend, particularly in view of the currently dominant Western discourse about its promises.

In this vein, we propose five building blocks to support further research and discussions promoting responsible, sustainable, and inclusive AI in LMICs: the training and retention of local AI expertise, basic AI training for all stakeholders, a robust monitoring system, a systems-based approach to implementation, and responsible local leadership inclusive of all stakeholders.

Because there is currently fierce competition for AI experts worldwide, the training and retention of local AI experts are essential to ensure that the technology not only follows industry norms and standards, but also respects and meets the needs of local contexts and populations. Training in basic AI “language” and culture as they relate to consent, privacy, and responsible use of AI technologies is necessary for all stakeholders: decision-makers, managers, health professionals, citizens/patients, and communities [1, 25]. Such an objective requires international cooperation to share expertise and lend support, which could be coordinated by international, governmental or non-governmental organizations, and agencies. Towards this end, an international platform could invest part of its own resources to offer consulting services and training for local decision-makers and experts to better understand and respect industry norms and standards as well as evaluate AI technologies (e.g., health technology assessment) [1]. Services could also cover the establishment of appropriate governance strategies and the identification of essential areas for investment and interventions in order to avoid investing in “miracle” solutions hyped by the media (e.g., high-end medical equipment without local infrastructure and expertise) [1, 45].

A robust monitoring system, possibly located at the level of international cooperation agencies or structures, where stakeholders can report cases of malfunction or misuse, as well as troubleshoot and share solutions, is also necessary. For such monitoring to be effective and constructive, national and international organizations will need the collaboration of leading digital industry players. Indeed, policies are no longer shaped only by parliaments or international agencies, but also by digital platforms and code [23].

Because effective and reliable AI health care technologies are not sufficient in and of themselves, their implementation will require a systems-based approach in order to truly benefit health systems, communities, and patients [46, 47]. As such, contextual needs and practices of each country must be taken into account in order to properly implement the technology (e.g., equity focus may differ from one country to another) [48]. Indeed, certain important health needs in LMICs can be better met through social policies rather than advances in medical technologies, for instance, poverty and inequality reduction, gender equality, or education [33]. In this regard, it is relevant to mention that despite considerable progress in medical technologies and interventions, health inequalities have increased in LMICs because of the declining living conditions of poor populations [49]. Thus, AI should also demonstrate a real benefit in comparison to other interventions that are not necessarily technological. For example, in some rural and/or remote areas, the best intervention may involve the implementation of training and retention programs for on-site health care providers.

Finally, responsible local leadership working with all stakeholders in LMICs will be necessary in order to develop robust AI health care technologies adapted to local contexts and beneficial for local populations [8]. In order to identify and understand health priorities and potential solutions, governments, academic institutions, research centers, international agencies, NGOs, industry, and civil society must be involved in the development and implementation of these technologies [4]. Women, minorities, and poor communities must also play a significant role and have a genuine, legitimate seat at the table in order to guarantee that innovation is truly beneficial, while ensuring that biases and structural inequalities are mitigated [8]. This inclusive approach would allow them to “tell their own version of the story”, to develop a counter-narrative to the dominant sociotechnical vision, and to participate in the process of collective imagination about AI, which implies fostering an “epistemic justice” [50]. Indeed, “[e]quity should not only be a goal but a sociopolitical process of sustainable change” [51]. These projects could then become spaces where constructive exchanges and collaboration among all stakeholders can emerge. Empowering local actors and fostering local collaboration between stakeholders is key to the development of responsible, sustainable, and inclusive AI.

Conclusion

AI-based health applications may offer many opportunities for LMICs where resources and expertise are lacking and could become a lever to provide access to universal, high-quality, and affordable health care for all. However, if the implementation of this powerful technology is not framed within, and as an integral part of, a global sustainable development strategy, AI may exacerbate public health issues in countries already dealing with substantial problems and emergencies. Within this perspective, it would be relevant to pursue reflections and research on AI development and implementation in LMICs in view of the Sustainable Development Goals, especially SDG 17, “Partnerships for the Goals”, since productive lessons are likely to be learned in settings that share a number of contextual facilitators and obstacles [52, 53].

Availability of data and materials

Not applicable.

Abbreviations

AI:

Artificial intelligence

LMICs:

Low- and middle-income countries

NGOs:

Non-governmental organizations

WHO:

World Health Organization

References

  1. Hosny A, Aerts HJ. Artificial intelligence for global health. Science. 2019;366(6468):955–6.

  2. Mayor S. Non-communicable diseases now cause two thirds of deaths worldwide. BMJ. 2016;355:i5456.

  3. World Bank and World Health Organization. Half the world lacks access to essential health services, 100 million still pushed into extreme poverty because of health expenses. 2017. Available: https://www.worldbank.org/en/news/press-release/2017/12/13/world-bank-who-half-world-lacks-access-to-essential-health-services-100-million-still-pushed-into-extreme-poverty-because-of-health-expenses.

  4. Sallstrom L, Morris O, Mehta H. Ethical considerations: artificial intelligence in Africa’s healthcare. 2019. Available: https://www.orfonline.org/wp-content/uploads/2019/09/ORF_Issue_Brief_312_AI-Health-Africa.pdf.

  5. Global Health Workforce Alliance and World Health Organization. A universal truth: no health without a workforce. 2013. Available: https://www.who.int/workforcealliance/knowledge/resources/GHWA-a_universal_truth_report.pdf?ua=1.

  6. World Health Organization. Draft Global Strategy on Digital Health 2020–2024. Available: https://www.who.int/docs/default-source/documents/gs4dhdaa2a9f352b0445bafbc79ca799dce4d.pdf?sfvrsn=f112ede5_38.

  7. World Health Organization. Big data and artificial intelligence for achieving universal health coverage: an international consultation on ethics: meeting report. 2017. Available: https://apps.who.int/iris/bitstream/handle/10665/275417/WHO-HMM-IER-REK-2018.2-eng.pdf?ua=1.

  8. Wahl B, Cossy-Gantner A, Germann S, Schwalbe NR. Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings? BMJ Glob Health. 2018;3(4):e000798.

  9. Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA. 2016;316(22):2353–4.

  10. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56.

  11. The Lancet Public Health. Next generation public health: towards precision and fairness. Lancet Public Health. 2019;4(5):e209.

  12. Crisp LN. Global health capacity and workforce development: turning the world upside down. Infect Dis Clin. 2011;25(2):359–67.

  13. Mash R, Howe A, Olayemi O, et al. Reflections on family medicine and primary healthcare in sub-Saharan Africa. BMJ Glob Health. 2018;3(Suppl 3):e000662.

  14. Rice-Oxley M, Flood Z. Can the internet reboot Africa? The Guardian. Jul 25, 2016. Available: https://www.theguardian.com/world/2016/jul/25/can-the-theinternet-reboot-africa.

  15. Guo J, Li B. The application of medical artificial intelligence technology in rural areas of developing countries. Health Equity. 2018;2(1):174–81.

  16. Caprara R, Obstein KL, Scozzarro G, Di Natali C, Beccani M, Morgan DR, Valdastri P. A platform for gastric cancer screening in low- and middle-income countries. IEEE Trans Biomed Eng. 2015;62:1324–32.

  17. Escalante HJ, Montes-y-Gómez M, González JA, Gómez-Gil P, Altamirano L, Reyes CA, Rosales A, et al. Acute leukemia classification by ensemble particle swarm model selection. Artif Intell Med. 2012;55(3):163–75.

  18. Oliveira AD, Prats C, Espasa M, Serrat FZ, Sales CM, Silgado A, Albuquerque J, et al. The malaria system microApp: a new, mobile device-based tool for malaria diagnosis. JMIR Res Protoc. 2017;6(4):e70.

  19. Kalyanakrishnan S, Panicker RA, Natarajan S, Rao S. Opportunities and Challenges for Artificial Intelligence in India. Proceedings of the 2018 AAAI/ACM conference on AI, Ethics, and Society. 2018. p. 164–170.

  20. Agarwal N, Chung K, Brem A. Chapter 8: New technologies for frugal innovation. In: Adela J, Waal GA, editors. Frugal innovation: a global research companion. Routledge Studies in Innovation, Organizations and Technology; 2019. pp. 137–49.

  21. Leliveld A, Knorringa P. Frugal Innovation and Development Research. Eur J Dev Res. 2017. p. 1–16.

  22. Luxton DD. Artificial intelligence in psychological practice: current and future applications and implications. Prof Psychol Res Pract. 2014;45(5):332.

  23. Kickbusch I. Health promotion 4.0. Health Promot Int. 2019;34(2):179–81.

  24. Luxton DD. Recommendations for the ethical use and design of artificial intelligent care providers. Artif Intell Med. 2014;62(1):1–10.

  25. Matheny ME, Whicher D, Israni ST. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA. 2020;323(6):509–10.

  26. Faraj S, Pachidi S, Sayegh K. Working and organizing in the age of the learning algorithm. Inf Organ. 2018;28(1):62–70.

  27. Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: addressing ethical challenges. PLoS Med. 2018;15(11):e1002689.

  28. Alami H, Lehoux P, Auclair Y, de Guise M, Gagnon MP, Shaw J, Roy D, Fleet R, Ag Ahmed MA, Fortin JP. Anticipating a New Level of Complexity. JMIR. 2020. PMID: 32406850.

  29. Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, Webster DR, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018;2(3):158.

  30. Chen JH, Asch SM. Machine learning and prediction in medicine—beyond the peak of inflated expectations. N Engl J Med. 2017;376(26):2507.

  31. Sheth JN. Impact of emerging markets on marketing: rethinking existing perspectives and practices. J Mark. 2011;75(4):166–82.

  32. Alami H, Gagnon MP, Fortin JP. Some multidimensional unintended consequences of telehealth utilization: a multi-project evaluation synthesis. Int J Health Policy Manag. 2019;8(6):337.

  33. Alami H, Gagnon MP, Fortin JP. Digital health and the challenge of health systems transformation. mHealth. 2017;3:31.

  34. Ayentimi DT, Burgess J. Is the fourth industrial revolution relevant to sub-Sahara Africa? Tech Anal Strat Manag. 2019;31(6):641–52.

  35. Christie G. Progressing the health agenda: responsibly innovating in health technology. J Responsible Innov. 2018;5(1):143–8.

  36. Niezen G, Eslambolchilar P, Thimbleby H. Open-source hardware for medical devices. BMJ Innov. 2016;2(2):78–83.

  37. Malkin R, von Oldenburg Beer K. Diffusion of novel healthcare technologies to resource poor settings. Ann Biomed Eng. 2013;41(9):1841–50.

  38. Richards-Kortum R, Oden M. Devices for low-resource health care. Science. 2013;342(6162):1055–7.

  39. Williams LD. Getting undone technology done: global techno-assemblage and the value chain of invention. Sci Technol Soc. 2017;22(1):38–58.

  40. Batayeh BG, Artzberger GH, Williams LD. Socially responsible innovation in health care: cycles of actualization. Technol Soc. 2018;53:14–22.

  41. Bærøe K, Miyata-Sturm A, Henden E. How to achieve trustworthy artificial intelligence for health. Bull World Health Organ. 2020;98(4):257.

  42. Williams LD, Woodson TS. The future of innovation studies in less economically developed countries. Minerva. 2012;50(2):221–37.

  43. Pozzebon M, Fontenelle IA. Fostering the post-development debate: the Latin American concept of tecnologia social. Third World Q. 2018;39(9):1750–69.

  44. Woodson T, Williams LD. Stronger together: frameworks for interrogating inequality in science and technology innovation. Available: https://ssrn.com/abstract=3264086 or http://dx.doi.org/10.2139/ssrn.3264086.

  45. Dercon S. Is technology key to improving global health and education, or just an expensive distraction? The World Economic Forum. May 31, 2019. Available: https://www.weforum.org/agenda/2019/05/technology-health-education-developing-countries/.

  46. The Lancet Global Health. Access to medicines—business as usual? Lancet Glob Health. 2019;7(4):e385.

  47. World Health Organization. Roadmap for access to medicines, vaccines and health product 2019–2023: comprehensive support for access to medicines, vaccines and other health products. 2019. Available: https://apps.who.int/iris/bitstream/handle/10665/330145/9789241517034-eng.pdf?sequence=1&isAllowed=y.

  48. Selbst AD, Boyd D, Friedler SA, Venkatasubramanian S, Vertesi J. Fairness and abstraction in sociotechnical systems. Proceedings of the Conference on Fairness, Accountability, and Transparency. 2018. p.59–68.

  49. Gwatkin DR. Trends in health inequalities in developing countries. Lancet Glob Health. 2017;5(4):e371–2.

  50. Williams LD, Moore S. Guest editorial: conceptualizing justice and counter-expertise. Sci Cult. 2019;28(3):251–76.

  51. Salazar ML, Villar RCL. Equity, globalization, and health. In: Salazar LM, Villar RCL, editors. Globalization and health inequities in Latin America. Springer; 2018. pp. 3–295.

  52. MacDonald A, Clarke A, Huang L, Roseland M, Seitanidi MM. Multi-stakeholder partnerships (SDG# 17) as a means of achieving sustainable communities and cities (SDG# 11). Springer; 2018. pp. 193–209.

  53. Franco IB, Abe M. SDG 17 Partnerships for the Goals. In: Franco IB, Chatterji T, Derbyshire E, Tracey J, editors. Actioning the Global Goals for Local Impact: Science for Sustainable Societies. Springer; 2020. pp. 275–93.

Acknowledgements

H. Alami is supported by the “Canadian Institutes of Health Research’s (CIHR) Health System Impact Fellowship”. This program is led by CIHR’s Institute of Health Services and Policy Research (CIHR-IHSPR), in partnership with the Fonds de recherche du Québec – Santé (FRQS) and the Institut national d’excellence en santé et services sociaux (INESSS).

We thank the reviewers and editorial team for their insightful comments and suggestions, which helped improve the manuscript.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Contributions

HA, LR and PL produced the first draft of this manuscript, and received input from SJH, SBMC, MS, MAS, MAAA, RF, JPF. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Hassane Alami.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Alami, H., Rivard, L., Lehoux, P. et al. Artificial intelligence in health care: laying the foundation for responsible, sustainable, and inclusive innovation in low- and middle-income countries. Global Health 16, 52 (2020). https://doi.org/10.1186/s12992-020-00584-1
