Artificial Intelligence

Will AI and ChatGPT replace doctors like me on the other end of the stethoscope?

While the chatbots may sound compassionate, we need physicians to guide the voice of the AI, and elect officials who believe in public health.

Dr. Thomas K. Lew
Opinion contributor

“We’ve got the results back from your tests, and it seems that there are some cells that aren’t behaving as they should. What we're seeing is consistent with cancer. I want to reassure you that feeling shocked or scared is completely natural. But I also want you to know that you’re not alone in this.”

As a hospital-based physician, I have to break the terrible news of a cancer diagnosis to patients almost every day. It’s never an easy conversation, and there is no set script for delivering it gently and empathetically. A doctor with a good bedside manner might deliver the diagnosis much like the voice in the introduction; that compassionate message, however, was not spoken by an actual human. It was generated by ChatGPT-4, an “artificial intelligence.”

More and more, technology and AI are advancing into fields once dominated solely by humans. As AI encroaches on the nuances of human health, we must tread carefully and rely on human scientists, public health officials and doctors to guide our powerful new friend so that the technology is not misunderstood, misapplied or abused.

Will AI and robots take over for human doctors?

Workers in all fields are learning how to incorporate AI into their jobs, and doctors are no exception. I have seen intelligent physicians turn to AI to analyze complex patient symptoms to create a list of potential diagnoses, fine-tune treatments and find hidden tumors in imaging. 


As scary as this sounds, there is precedent: Studies are beginning to show that large language model AI programs are comparable to human physicians in medical knowledge. ChatGPT has been able to pass the multiple-choice U.S. medical licensing exams that medical students take before years of residency training. My colleagues at Stanford have also published a study demonstrating that ChatGPT does well on, and in fact outperforms students on, exams with complex open-ended questions.

Beyond an incredible knowledge base, AI has made great strides in softer skills such as bedside manner. A study in the Journal of the American Medical Association showed that ChatGPT created more empathetic responses to patient questions than human physicians.

Where does this lead us? Are robots and AI going to take the place of the human on the other end of the stethoscope?

These questions might not be as abstract as we think, and such issues have begun to reach even the ears of the White House. In October, President Joe Biden signed an executive order to regulate AI in a number of key areas, including public health.

We need physicians to guide the voice of the AI

Physicians have also taken notice of the encroachment of AI into the medical field. Several concerned doctors came together to create the Physicians' Charter on AI as a guide to ensure that use of the technology remains patient-centered, secure and equitable.

The latter is especially important to avoid exacerbating health disparities among different economic and racial groups. Unchecked, AI has at times produced responses that can only be described as racist. Ensuring that AI adheres to our Hippocratic oath will be a continuous effort.


It is incumbent upon those of us in the medical and public health community to guide the use of this emerging technology for the benefit, not the detriment, of our patients and the community at large. At the same time, policymakers and those in power need to take the advice of our medical experts.

Since the start of the COVID-19 pandemic, we have seen that this is not always the case, as misinformation and nonexperts have gained significant influence. As American citizens, we need to put our votes toward electing officials who “trust the science” and are not easily swayed by misinformation.

It is estimated that hundreds of thousands of American lives were lost to COVID-19 among those who did not take the vaccines. Many of those deaths can be tied directly to belief in misinformation and distrust of the medical community and of mRNA vaccine technology. We cannot make the same mistake as we roll out and integrate newer AI technology.

While the chatbots might sound compassionate, we need physicians to guide the voice of the AI – and elect officials who believe in public health. 


Dr. Thomas K. Lew is an assistant clinical professor of Medicine at the Stanford University School of Medicine and an attending physician of Hospital Medicine at Stanford Health Care Tri-Valley. All expressed opinions are his own. Follow him on X, formerly Twitter: @ThomasLewMD
