Regardless of our attitudes towards AI, LLM-powered applications have gained widespread popularity and are gradually reshaping practice in the primary care sector. Instead of fearing the idea that AI could replace us, we should welcome the opportunities that such technologies bring, including easing administrative burdens and improving the quality of patient care and outcomes. However, we must also be mindful of the numerous limitations in the capabilities of LLMs that may affect their usability in clinical practice. Box 1 provides a summarised overview of important factors that primary care clinicians should consider regarding the integration of LLMs into clinical practice.
Box 1 Summary of take-home messages regarding the implications of LLMs in clinical practice
Highlights
Accuracy of outputs: LLMs such as ChatGPT are known to produce logical but factually incorrect outputs. Clinicians must be mindful that current non-domain-specific LLMs are not designed to provide medical advice, and any medical interpretations should be fact-checked against prevailing clinical evidence/guidelines.
Regulatory oversight: The unique capabilities and limitations of LLMs have prompted calls for such technologies to be regulated. Clinicians need to periodically assess the impact of a changing regulatory environment on the use of AI and its associated technologies in clinical practice.
Enhancing professional competence in AI: In order for AI technologies, including LLMs, to be successfully integrated into clinical practice, clinicians at all career stages need to have the skills, attitudes and knowledge to use such tools safely and effectively. The creation of an AI-ready workforce requires incorporation of AI competencies into existing medical and professional development curricula.
Respecting patient preferences: Every patient has distinct views and preferences regarding the incorporation of AI in the clinical decision-making process. Clinicians need to proactively communicate with patients regarding the role of AI in their medical care including responding to patients’ fears and ensuring that their choices are respected.
AI, artificial intelligence; LLMs, large language models.
Accuracy and reliability of AI-generated responses
It has been acknowledged that certain LLMs, such as ChatGPT, can produce logically coherent information that may be false or inaccurate. This occurrence, known as ‘AI hallucination’, refers to the phenomenon where an AI-powered algorithm generates fictional or unsubstantiated information in response to a query.36 Moreover, it is important to note that the datasets used to train LLMs may be outdated and incomplete. In the case of ChatGPT, its training data is based solely on information available up until September 2021.37 Hence, clinicians should exercise caution when using LLMs and avoid over-reliance on the advice these applications provide. Instead, they should use their professional judgement to select clinically relevant information and discard information that is not.
Given that current non-domain-specific LLMs like ChatGPT are not designed to serve as reliable sources of medical information, it would be more prudent for clinicians to use specialised medical domain-specific interactive interfaces, like Evidencehunt—an AI-powered, evidence-based search engine that consolidates clinical evidence on specific topics—to assist them in making well-informed clinical decisions. However, it is important to note that this tool does not differentiate between contextually relevant and irrelevant clinical evidence. It summarises available articles indexed on PubMed, so its results are meaningful only when interpreted in context, and that context may not align with the unique circumstances of the patient at hand.
AI regulation
Clinicians need to evaluate how AI governance frameworks impact the practical applications of such technology in their clinical settings. As public interest in LLMs continues to grow, attempts to regulate this technology are also on the rise. As AI applications become more prominent in healthcare, clinicians must recognise the importance of handling sensitive healthcare data in accordance with strict ethical and privacy standards. That is, clinicians must ensure that they do not misuse sensitive health data in any manner that compromises patient confidentiality or violates privacy regulations. To proactively uphold these principles, clinicians should allocate time to stay informed about the latest updates in data protection and privacy laws that govern their practice.
Development of AI competencies
To successfully adopt AI-powered technologies in primary care, clinicians at all career stages need the confidence and skills to use these emerging tools and keep pace with the rapid developments in this field. As these new technologies find their way into the hands of our patients, there is an urgent need to integrate education about AI technologies into the existing medical and primary care training curricula. These educational competencies should help primary care clinicians understand the fundamental principles and opportunities for AI use in clinical applications and cover the risks and challenges of AI use. This is particularly crucial in addressing concerns related to confidentiality, consent, and the limited clinical knowledge of LLMs in order to ensure the safety of the patient and clinician. Clinicians must also be aware of how they can effectively communicate with patients regarding their use of LLM-based tools, including supporting and training patients on how to critically assess the accuracy and relevance of AI-generated content. Moreover, since LLMs generate content based on user input, they are sensitive to how the input text (or prompt) is framed.38 Therefore, variations in the words or phrases used in the prompt can affect the quality and accuracy of LLM-generated information.38 To ensure the robustness and effective use of LLMs, there is also a need to conduct further research in prompt engineering. In particular, research would need to focus on the reproducibility and reliability of LLM-generated interpretations across different prompt variations for the same medical query. In doing so, the medical field can establish universal guidelines that provide clear guidance to clinicians and patients on how to construct prompts in a way that allows LLMs to perform a diverse array of medical tasks safely and effectively.
For now, it seems that the most appropriate approach for users to take in optimising their prompts would be to experiment with different prompt styles and compare the outputs with the desired results. This process assists users in identifying a suitable structure for future prompts related to a particular query.
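The comparison process described above can be sketched in code. The snippet below is a minimal illustration, not a clinical tool: `query_llm` is a hypothetical placeholder for a call to a chat model's API, and the similarity score is only a crude proxy for the reproducibility checks that formal prompt-engineering research would require.

```python
import difflib

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; it simply echoes
    the prompt here so the example runs without external services."""
    return f"Response to: {prompt}"

def compare_prompt_variants(variants: list[str]) -> list[tuple[str, str, float]]:
    """Query each phrasing of the same medical question and report the
    pairwise textual similarity of the outputs (0 = disjoint, 1 = identical)."""
    outputs = [query_llm(v) for v in variants]
    results = []
    for i in range(len(variants)):
        for j in range(i + 1, len(variants)):
            ratio = difflib.SequenceMatcher(None, outputs[i], outputs[j]).ratio()
            results.append((variants[i], variants[j], round(ratio, 2)))
    return results

# Two phrasings of the same underlying query (illustrative only).
variants = [
    "What is the first-line treatment for hypertension?",
    "List first-line antihypertensive drug options.",
]
for a, b, score in compare_prompt_variants(variants):
    print(f"similarity between prompt variants: {score}")
```

In practice, a user would replace `query_llm` with a real model call and judge the outputs against the desired result, using low similarity between variants as a cue that the query is sensitive to phrasing.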
Patients’ preferences
Finally, clinicians must respect their patients’ attitudes and preferences towards incorporating AI into healthcare decision-making. While AI chatbots promise to revolutionise clinical practice, patients’ trust in AI technologies remains low. In a study examining the role of AI chatbots in behavioural health, it was found that despite the demonstrated effectiveness of chatbots in promoting healthier lifestyles and offering a safe platform for discussing sensitive topics such as sexual health and drug and alcohol use, less than 50% of participants expressed acceptance of their potential future use.39 Additionally, a US survey found that 60% of American adults would be uncomfortable if their clinician relied on AI for diagnosis or treatment recommendations.40 Indeed, these findings highlight the challenges of navigating diverse patient preferences when it comes to using AI in healthcare. Integrating AI tools in clinical settings raises questions about balancing technological advancements with the human touch in healthcare, with concerns that AI could potentially depersonalise patient interactions. Transparent communication, ethical considerations, and respect for patient autonomy are crucial elements for fostering the widespread acceptance and effective integration of AI into existing healthcare systems. Certainly, there is a genuine need for more standardised research in this area to thoroughly understand both the advantages and risks, fostering a balanced and informed approach to incorporating AI into clinical practice.