Doctors think AI has a place in healthcare – but maybe not as a chatbot

Dr. Sina Bari, a practicing surgeon and AI healthcare leader, has seen firsthand how ChatGPT can lead patients astray with faulty medical advice. He recently saw a patient who came in with a printed ChatGPT conversation claiming that a recommended medication carried a 45% chance of causing pulmonary embolism. When Dr. Bari investigated, he found the statistic came from a paper on the medication’s effects in a small subgroup of tuberculosis patients, a population his patient did not belong to.

Yet when OpenAI announced its dedicated ChatGPT Health chatbot last week, Dr. Bari felt more excitement than concern. ChatGPT Health, which will roll out in the coming weeks, lets users discuss their health in a more private setting where their messages will not be used as training data. Dr. Bari sees this formalization as a good thing: protecting patient information and adding safeguards, he argues, makes the tool more powerful for patients to use.

Users can get more personalized guidance from ChatGPT Health by uploading medical records and syncing apps like Apple Health and MyFitnessPal. For the security-minded, this raises immediate red flags. Itai Schwartz, co-founder of data loss prevention firm MIND, notes that such transfers move medical data out of HIPAA-compliant organizations and into vendors not bound by HIPAA, raising questions about how regulators will respond.

But many industry professionals believe the cat is already out of the bag. More than 230 million people already talk to ChatGPT about their health each week, often instead of Googling their symptoms. Andrew Brackin, a partner at Gradient who invests in health tech, says this is already one of ChatGPT’s biggest use cases, so building a more private and secure version for healthcare questions makes sense.

AI chatbots have a persistent problem with hallucinations, an especially sensitive issue in healthcare. Research indicates OpenAI’s GPT-5 is more prone to hallucinations than many models from Google and Anthropic. Even so, AI companies see an opportunity to fix inefficiencies in healthcare, with Anthropic also announcing a health product this week.

For Dr. Nigam Shah, a professor at Stanford and chief data scientist for Stanford Health Care, the inability of American patients to access care is more urgent than the threat of poor AI advice. Wait times to see a primary care doctor can be three to six months. Faced with a long wait for a real doctor or an immediate conversation with an AI, many patients may choose the latter.

Dr. Shah believes a clearer route for AI in healthcare is on the provider side. Studies report that administrative tasks can consume about half of a primary care physician’s time, limiting how many patients they can see. Automating that work could free doctors to see more patients, reducing the pressure that pushes patients toward tools like ChatGPT Health in the first place.

Dr. Shah leads a team developing ChatEHR, software built into electronic health record systems that lets clinicians interact with patient records more efficiently. Dr. Sneha Jain, an early tester, says that making records easier to navigate means physicians spend less time hunting for information and more time talking to patients.

Anthropic is also working on AI for clinicians and insurers, not just its public chatbot. This week it announced Claude for Healthcare, highlighting how the product could cut time spent on administrative tasks like prior authorization requests. Anthropic’s chief product officer, Mike Krieger, said that shaving 20 to 30 minutes off each case amounts to dramatic time savings.

As AI and medicine intertwine, a tension runs between the two worlds: a doctor’s primary incentive is to help patients, while tech companies are ultimately accountable to shareholders. Dr. Bari says that tension matters, noting that patients rely on medical professionals to be cynical and conservative in order to protect them.