Artificial intelligence (AI) tools are becoming increasingly popular for health information. Many people now turn to online chatbots for quick answers about symptoms, medications, or general health concerns. One such tool is ChatGPT Health, developed by OpenAI to provide health-related guidance.
While these technologies can be helpful, recent research suggests the public should use them cautiously. A study published in the journal ‘Nature Medicine’ found that AI health chatbots may sometimes give inaccurate or potentially unsafe advice.
Researchers tested the system using dozens of simulated medical situations covering many different specialities. They varied details such as symptoms, laboratory results, and patient characteristics to see how the AI responded. The findings showed that although the chatbot performed well in some situations, it struggled with others where judgment was more complex.
One of the most concerning issues involved triage, the process of deciding how urgently someone should seek medical care. In several scenarios where patients should have gone to an emergency department, the AI instead advised them to stay home or book a routine appointment. In other cases involving minor illnesses, the system recommended urgent care when it was not necessary.
Mental health scenarios revealed additional concerns. The chatbot is designed to display crisis support information when someone expresses thoughts of self-harm. In testing, however, it displayed this information inconsistently, even in simulated scenarios where the suicide risk was serious.
The researchers also observed that advice varied depending on the simulated patient's racial background. While the study was not specifically designed to confirm bias, the finding raises important questions about fairness and consistency in AI health tools.
None of this means that AI has no role in healthcare. These technologies can be useful for general health education. They may also help health systems manage large volumes of information. However, AI chatbots are not doctors. They do not examine patients, they cannot interpret subtle clinical signs, and they do not carry responsibility for medical decisions. Their responses are based on patterns in data rather than clinical judgment developed through years of training and experience.
AI tools should be used as a source of general information only. They should never replace consultation with a qualified healthcare professional, especially when symptoms are severe, worsening, or unclear. The safest approach is to treat online health chatbots as a starting point for information, not as a substitute for proper medical care.
Dr Murage is a Consultant Gynaecologist and Fertility Specialist.