Introduction
Do artificial intelligence (AI)-enabled systems have consciousness? The definition of consciousness can be viewed from two perspectives: the state of being conscious, which equates to wakefulness, and the contents of consciousness, which include an awareness of this wakeful state.1 In this article, the term consciousness is used to refer to the experience of what it is like to be aware of one’s wakeful state.
The conversation about AI’s consciousness started with the famous Turing test, or ‘imitation game’, proposed in 1950 as an attempt to answer the question of whether a computer can think.2 Turing proposed a test to assess this: can a computer, hidden in a room, ever produce outputs that would make the recipient unable to differentiate whether there is a human or a computer in that room?2 While Turing’s main aim was to claim that machines will one day be capable of thinking intelligently, with this test he also argued against the idea that machines cannot be conscious.2 Although experts currently agree that none of the existing AI systems possess consciousness, incremental improvements to existing AI may confront us with systems that make us doubt whether they are conscious.3
Turing’s test has been pivotal for discussions around AI’s consciousness, but it is not certain whether passing this test is enough to consider an AI system conscious. Searle’s Chinese room argument, for example, states that an AI system’s capacity for producing meaningful outputs does not necessitate the involvement of conscious understanding in that system’s internal processes.4
Presently, there is a wide range of arguments about AI’s consciousness. Some argue that consciousness is reserved for creatures that have the same specific causal biochemical neural structure as many animals do.3 On the contrary, there are philosophers and scientists who see consciousness as an ‘internal model’ of attention and suppose that it is programmable into artificial intelligence the same way our biological neural structure codes for human consciousness.5 There are currently at least 22 theories of consciousness which attempt to correlate consciousness with its neurobiological basis.6 A few standouts among these existing theories are global neuronal workspace theory,7 higher order theory8 and the information integration theory of consciousness.9
The global workspace theory claims that consciousness is created when a sudden and exclusive activation of a specific subset of neurons associated with an experience, such as a specific perception, broadcasts that experience and makes it available to local processors such as memory, attention or verbal reports.6 7 The higher order theory, on the other hand, postulates that an entity can only be conscious when it is, at least to some extent, aware of its internal processes through meta-representations of lower-level representations, such as a sensation, in higher-level processing regions such as the prefrontal cortex in humans.6 8 Lastly, the information integration theory of consciousness states that the consciousness of an entity depends on its ability to integrate information, meaning that consciousness is not unique to humans and animals.6 9
The other-minds problem, which states that we can never know with certainty whether others have consciousness, has been an obstacle in the debate over AI’s consciousness.10 Given this problem, we should be ready for all possibilities, including the more liberal views of AI’s consciousness proving true in the next few years. Given the accelerated speed of AI’s adoption, it is pertinent to consider the implications of AI’s consciousness in the domains where it is applied.
The use of AI in primary healthcare is an important ground for the conversation about the consciousness of AI systems because of primary healthcare’s profound impact on humans’ physical and mental health. The Canadian Institute for Health Information defines primary healthcare as services involving ‘routine care, care for urgent but minor or common health problems, mental healthcare, maternity and childcare, psychosocial services, liaison with home care, health promotion and disease prevention, nutrition counselling and end-of-life care’.11 Primary healthcare, spanning from infancy to end-of-life stages, is broader and more inclusive than specialty care, making it more comprehensive for AI integration. It addresses diverse health dimensions—mental, physical and social—by focusing on whole-person care. Functioning as the front line of healthcare, primary healthcare screens, diagnoses and treats numerous patients, reducing the need for specialty referrals. Especially in rural areas, primary healthcare serves as the main interface between communities and the health system. The extensive patient network of primary healthcare makes it a prime target for the integration of AI.12
There is evidence supporting the use of AI for improving many functions within the primary healthcare system, including but not limited to prognosis, diagnosis, shared decision-making, resource allocation and policy-making.13 14 Despite this evidence, little is known about how an AI system that possesses consciousness, if it were to exist, would support or hinder the current applications of AI in primary healthcare. In this paper, we use the terms ‘an AI system that possesses consciousness’ and ‘a conscious AI’ interchangeably. In the next few sections, we will outline the potential impacts of conscious AI on each of the domains of AI use in primary healthcare mentioned earlier and will discuss some ethical considerations associated with the use of conscious AI systems.