After exploring the fundamentals of AI in the first article and unraveling its benefits for patient care in the second, it is important to address the other side of the coin: the risks associated with using AI in health care and the essential measures needed to protect patients and their privacy. 

Much of the discussion about the risks of AI in health care centers on the algorithms used to identify a person’s health problems and how bias can enter those decisions. Remember, algorithms are like very smart computer programs that act as assistants to doctors, nurses and even insurance companies (some Medicare Advantage plans are already using them). These programs can sort through your lab work, X-rays and medical history in no time! They take your blood tests, heart rate and other information and assemble it into a picture that gives your doctor what is needed to decide what is best for your treatment.

However, what if all the information entered into this program (algorithm) is data from male patients and you are a female being assessed for heart disease? What if an AI-designed system is being used to detect skin cancer, but all the data entered into the program is based on lighter-skinned patients? Depression and pain tolerance vary greatly across different cultures and individual experiences; how adequately will your depression and pain symptoms be managed if the data in that treatment algorithm comes from only one or two ethnic populations and doesn’t represent yours? Something called confirmation bias can also occur if doctors or nurses hold pre-existing beliefs about a gender’s or culture’s pain tolerance, depression or heart disease, and those beliefs are reinforced by the bias that exists in the algorithm.

Some insurance companies, including Medicare Advantage plans, use a computer program to estimate how many days in the hospital are sufficient before you are ready to go home or go to rehab. But what if the data in that algorithm assumes you have met all your physical therapy goals when you have not, or that you live at home with a spouse when you actually live alone with no savings? Patients often get discharged before they are ready, based on what an AI algorithm has concluded rather than on what they can physically do at home or in rehab. AI programs in health care also require access to a vast amount of personal and sensitive patient data, so the risk of data breaches, unauthorized access or misuse of your data is a real concern.

Ensuring that a treatment algorithm in health care is unbiased can be challenging for a patient, as it involves understanding complex technical details often beyond the scope of public knowledge. However, you can take certain steps to advocate for yourself and seek unbiased treatment with privacy protection.

Ask questions. Patients should feel empowered to ask their health care providers about the tools and algorithms being used in their care. Questions can include how the algorithm works, what data it was trained on and whether it has been tested for accuracy and bias across different populations.

Seek second opinions. If a patient feels uncertain about a diagnosis or treatment plan suggested by an AI-augmented system, seeking a second opinion is a good practice. This can provide a broader perspective and confirm or question the initial recommendation.

Understand your rights. Patients should be aware of their rights regarding health care. This includes the right to informed consent, which means they should be given information about the treatments and any technologies used in their care.

Understand consent forms. When signing consent forms for treatment or data use, it’s crucial to understand what you’re agreeing to. If anything is unclear, ask for clarification. Be aware of how your data will be used, shared and protected.

Use secure channels for communication. When communicating electronically with health care providers, use secure channels. This might include using patient portals provided by the health care provider rather than email, which may not be secure.

Regularly check medical records. Regularly reviewing your medical records, especially After-Visit Summaries, can help you spot errors or inconsistencies. This includes checking the results of any AI-driven assessments or conclusions you become aware of.

Be wary of third-party apps. If using third-party apps or devices (like fitness trackers) that integrate with your health care providers’ systems, understand how these apps use and share your data. Always use apps from reputable sources and check their privacy policies.

By taking these steps, you can help protect your personal health information and navigate the AI-enhanced health care landscape more safely and confidently.

Dr. Porter is CEO and founder of MyHealth.MyAdvocate in Palm Desert. She is an experienced health care professional with over 30 years of nursing practice dedicated to unraveling the mysteries of health care processes and advocating for patients, families and caregivers. Immediate assistance is available by calling (760) 851.4116. www.myhealthmyadvocate.com

Sources: 1) https://www.thomsonreuters.com/en-us/posts/technology/ai-usage-healthcare/; 2) https://www.chartis.com/insights/ai-roundtable-building-trust-and-transparency-healthcare-ai; 3) https://www.statnews.com/2023/03/13/medicare-advantage-plans-denial-artificial-intelligence/
