Friday, July 18, 2025

SubscriberWrites: Medical AI Chatbots

A med student’s unsettling brush with a beta medical chatbot raises urgent questions about AI in healthcare—who it’s for, how it works, and what happens when trust goes unchecked.

Thank you, dear subscribers — we are overwhelmed by your response.

Your Turn is a unique section from ThePrint featuring points of view from its subscribers. If you are a subscriber and have a point of view, please send it to us. If not, do subscribe here: https://theprint.in/subscribe/

Introduction

As I type this sentence, I have access to two medical chatbots on my mobile phone, both purportedly meant only for medical personnel. Yet only one actually restricts non-medical users upfront, and that only because of an eye-watering subscription cost, something common to most medical journals.

As a med student soon to enter a very competitive market, I hadn't really had the time to think about what easy access to these chatbots, via a variety of health websites and pharmaceutical apps, actually meant. That changed this week, when a family friend invited me to join a web meeting demonstrating a medical chatbot aimed at clinical researchers.

To the presenter's credit, they explicitly stated that the bot was trained exclusively on specific open-access journals and online repositories, to avoid "hallucinations" that may arise from errant data. They had also apparently applied certain guardrails to ensure the model would not overstep. They were, however, unable to adequately explain by what logic the bot arrived at its answers, or what these guardrails actually were, as they were not part of the software development team.

To be fair, even if the developer had been present to explain the process and the safeguards applied, I doubt the very few doctors in the audience would have understood. I have dealt with the tech illiteracy of my own learned professors: experts in their fields, yet sometimes unable even to present a simple .ppt file on a projector.


The meeting did not go well: the bot could not answer the questions posed by the doctor the presenter had invited, and the presenter could not answer mine.

But I was reassured that this was a product in beta, still being actively worked on. Our recommendations, for more accurate sourcing if the bot was aimed at diagnosis, or a more extensive reference collection if it was aimed at research, were received graciously and would, we were told, be actively considered.

The Trigger

While preparing for my exams this month, I was forwarded a message by the very same relative who had introduced this budding startup, stating: "All of you can use the product too to get explanation of what is written in your doctor prescriptions as well as check ailments being suffered from, based on symptoms.

Do give your feedback in direct msg to me so we can improve the product continuously with your suggestions. Thanks." This, in itself, is an innocuous offer, made in good faith by a family friend. Quite frankly, I disagree completely with my respected seniors in the medical community that the 'Google Patient' is a plague that we future medicos will have to deal with.

I find that most doctors in our nation, however effective their methods, do not actually practice the ethics they profess. The patient is not adequately informed of the details of their maladies, not provided holistic healthcare, and not forewarned of the litany of adverse reactions their prescribed remedies can cause.

This has been explained to me, sharply and repeatedly: the medical system is understaffed, underpaid, and too burnt out to deal with such "ideals". A frequent answer was that the Indian patient needs reassurance and trust in the doctor, with adequate results, regardless of the journey.

I find this a cynical response, given in a world where a doctor has no fixed working hours, too many responsibilities, and an empathy burnt out by experience. To cut this ramble short: this happens not because it should, but because the alternative is too exhausting to deal with.

As such, patients equipping themselves with knowledge, from trusted sources and with good intentions, is not necessarily a bad thing, nor something we can prevent. The problem with the aforementioned forwarded message is that this is precisely not what the future holds.

Most search engines now have this kind of data-handling model and learning algorithm built in (see Gemini and Bing-GPT), and patients will tend to trust these chatbots more, since they are supposedly built for medical purposes.

In the aforementioned meeting, the bot was unable to give sufficient information, with potentially dangerous outcomes had its advice been followed: one of its recommendations would have overdosed the patient. It is a product in beta, still to be localised to the Indian population and corrected in its responses.

But I have found that most search results, as well as chatbot answers, are similarly unsatisfactory and dangerous. Most people, including medical students, have neither the time nor the patience to comb through the references and source material to ensure that the advice given to them is accurate to the highest degree.

The Aftermath

Incensed by the message, I followed the WhatsApp trail to its source, as I had been assured by the presenter that the bot would not be made available to the public.

I found that the source was an informative article in a reputed newspaper, among various other articles on AI, in which the very same presenter had explicitly stated that this was a new product, still in development, aimed at aiding clinical research and medical diagnosis.

Nowhere was there any mention of it being used to inform patients. Which meant my rage was decidedly unfair: it was not the company's fault that their message was twisted by the WhatsApp grapevine.

As such, I will not link the article in question, though a dedicated internet sleuth could easily make several precise guesses. But this rage and overwhelming fear is what prompted me to type out this overly long article, with which I hope to inform, at the very least, the intern at ThePrint, if not the readers.

Now I retreat to my dry books and notes, soon to give out nervous answers to the same questions in a decidedly beta-product fashion.

These pieces are being published as they have been received – they have not been edited/fact-checked by ThePrint.
