Daily discussions about artificial intelligence seem to paint an image of both friend and foe. Given its tremendous power to reshape our world, it is important to understand this tool and its place within an overarching strategy for health and wellness. 

Artificial intelligence (AI) and large language models (LLMs) such as Claude, ChatGPT and Google’s Gemini can search, recognize, translate, summarize and generate text and images based on large data sets. Drawing on a huge amount of known data, they can synthesize and summarize information in response to a user’s prompt. This ability to quickly search vast amounts of data makes the technology a potentially useful tool for the health care system and for individuals seeking health-related information.

According to a statement from the American Medical Association (AMA), “physicians should be encouraged to educate their patients about the benefits and risks of using AI-based tools, such as LLMs, for information about health care conditions, treatment options or the type of health care professionals who have the education, training and qualifications to treat a particular condition. Patients and physicians should be aware that chatbots powered by LLMs/generative AI could provide inaccurate, misleading or unreliable information and recommendations.”1

First, let’s discuss some of the limitations of the current AI LLMs:2

  • LLMs capitalize on “known” (or available) information. This opens the potential for corruption based on the bias of the information used to create the data set, meaning the model could be flooded with misinformation or turn to unreliable sources. 
  • LLMs are prone to hallucinations. This means they can fabricate answers or cite sources that do not exist. These results are returned looking legitimate, leaving the user to fact-check the AI tool.2
  • LLMs lack discernment. Without a specific prompt, they will not, by default, rely on highly trusted resources. Even with well-designed research prompts, the LLM can return inaccurate information and provide different responses to the same repeated prompt.4
  • LLMs are prone to the same bias and prejudice inherent in human society.

According to the AMA, these technologies are largely unregulated and there is no current guidance to improve their accuracy or to strip their bias. The U.S. Federal Trade Commission (FTC) has some authority to regulate activities considered to be unfair, deceptive or abusive business practices and can enforce laws for consumer protection. However, these authorities are not specific to AI and the agency is generally under-resourced in this area.1 

While the FDA provides some regulation and guidance for AI tools used by physicians, LLMs used by the public are considered to be for “educational purposes” only and are not subject to FDA oversight. Without that oversight, and given the FTC’s limited scope, there is a recognized gap that poses a threat to the public.5 Preventing disinformation will require more research to improve the accuracy of LLM responses to prompts, along with guidance on industry design that steers users toward safer sources of information.

My research did not turn up precise guidance on how the public should use LLMs to search for medical information. Based on my own experience with these models and a review of the literature, here are some helpful guidelines to consider:

First, understand the power of the prompt. Your prompt to the LLM should be specific and provide relevant background information. You can also ask the model to use a “chain of thought” approach, reasoning step by step, which can enhance accuracy. Instruct the LLM to use peer-reviewed evidence or high-quality medical references such as PubMed. Always ask it to provide sources for its response and to verify them; remember that AI is prone to hallucination and may provide inaccurate or fabricated sources.
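To illustrate, a prompt built on these guidelines might read something like the following (the condition and personal details here are hypothetical placeholders; substitute your own):

“I am a 55-year-old recently diagnosed with high blood pressure and taking no other medications. Using peer-reviewed medical sources such as those indexed in PubMed, explain step by step which lifestyle changes have been shown to lower blood pressure. List the sources you relied on and confirm that each one actually exists.”

Note how the prompt supplies background, requests step-by-step reasoning, names the quality of evidence expected and asks for verifiable sources.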

Lastly, it is always in your best interest to consult a medical provider, especially in the event of serious medical concerns. Use the LLM to gather helpful information and to formulate questions to ask your physician or medical provider. A strong personal relationship with your medical provider gives you an expert who can help you safely steer clear of misinformation.

There is no doubt that AI has the power to reshape our world and to improve access to and understanding of our personal health care needs. By understanding its limitations and following informed guidance, the public can use this tool to delve into the vast body of knowledge accumulated over decades of medical research.

Edith Jones-Poland, MD, is an integrative primary care physician and lifestyle coach with Circe Healthcare Solutions. She can be reached at (760) 773.4948. www.circecares.com

References:
1) https://www.ama-assn.org/system/files/ama-ai-principles.pdf
2) https://www.scientificamerican.com/article/ai-chatbots-can-diagnose-medical-conditions-at-home-how-good-are-they/
3) https://pmc.ncbi.nlm.nih.gov/articles/PMC12665726/
4) https://pmc.ncbi.nlm.nih.gov/articles/PMC12149300/#Sec5
5) https://www.fda.gov/medical-devices/digital-health-center-excellence
