Artificial intelligence (AI) will undoubtedly change the practice of medicine. At the time of writing, PubMed (a repository of medical research literature) indexes 4,018 publications for the keyword “ChatGPT.” Researchers are already using AI and large language models (LLMs) for everything from reading pathology slides to replying to patient messages. However, a recent paper in the Journal of the American Medical Association suggests that AI could act as a surrogate in end-of-life discussions. This is going too far.
The authors propose creating an AI “chatbot” that speaks for incapacitated patients. To quote, “By combining individual-level behavioral data such as social media posts, church attendance, donations, travel records, and past medical decisions, the AI can learn what is important to the patient and predict what the patient would choose in a given situation.” The AI can then express in easy-to-understand terms what the patient “would have wanted,” to help guide end-of-life decisions.
We are both neurosurgeons who routinely have end-of-life discussions with families while treating patients with traumatic brain injury, stroke, and brain tumors, and these heartbreaking experiences are a common but rewarding part of our work.
Our experiences teach us how to connect and deepen bonds with families as we help them through life-changing challenges – in some cases, we shed tears alongside them as they navigate their emotional journey and consider what their loved ones would want us to do if they could speak.
AI is transforming healthcare and assisting doctors, but it is not human enough to handle end-of-life decisions. (iStock)
We have never thought it appropriate to ask a computer what to do, nor have we ever let a computer take on the role of doctor, patient, or family member in these moments.
The primacy and sanctity of the individual are at the heart of modern medicine. Philosophical individualism is the basis for the main “pillars” of medical ethics: beneficence (doing good), non-maleficence (doing no harm), justice (being fair), and (our emphasis) autonomy. Medical autonomy means that patients are free to make informed choices and are not coerced. Autonomy often takes precedence over other values, allowing patients to refuse offered treatment and physicians to refuse to perform requested procedures.
Such decisions are made by a competent individual, or by a designated surrogate if the patient is unable to speak for himself or herself. Importantly, the surrogate is not merely someone appointed to read out the patient's wishes, but a person entrusted with judgment and decision-making. That genuinely human decision-making, exercised amid unforeseen circumstances and incomplete information, should remain a sacred and inviolable standard in these most critical moments.
Even a technology fanatic must admit that AI has limitations that would make any rational observer think twice.
The computer science principle of “garbage in, garbage out” needs no explanation: a machine sees only what it is given and answers accordingly. Would you want a computer to make life-support decisions based on social media posts from years ago? But even if we demanded that the data fed into such an algorithm be completely reliable and accurate, we are more than our past selves, and more than hours of recorded conversations. We should not reduce our identities to such trivial “content.”
Now that we've covered incompetence, let's consider malice. First, and most simply, multiple hospital systems have fallen victim to cyberattacks by criminal hackers this year alone. Should algorithms that claim to speak and make decisions for real people live on those same vulnerable servers?
An even bigger concern is who will create and maintain these algorithms. Will they be funded or operated by large health systems, insurers, or other payers? Will doctors and families even be able to accept the idea that these algorithms might be weighted to “nudge” human decision-makers toward less expensive paths?
Opportunities for fraud are numerous: An algorithm programmed to prioritize the removal of life support could save Medicare money, while an algorithm programmed to prioritize expensive life-prolonging treatments could be a revenue stream for hospitals.

The mere suspicion of such manipulation is alarming in itself, to say nothing of language barriers, cultural barriers, and patient groups with a fundamental distrust of healthcare and other institutions. Consulting a mysterious computer program is unlikely to inspire greater trust in these situations.
The large and ever-growing role of computers in modern medicine has been a source of great frustration and dissatisfaction for both doctors and patients, perhaps most notably the replacement of face-to-face time between patients and doctors with tedious paperwork and “clicking.”
These and countless other computational catastrophes are exactly where AI should be aimed in healthcare: not at displacing humans from their most human roles, but at reducing electronic clutter, freeing doctors to look away from their screens, look patients in the eye, and offer wise counsel when it matters most.
The average person would be surprised at how little of a physician's day is dedicated to practicing medicine, and how much time is spent on billing, coding, quality assessment, and a host of other technical trivialities. With AI technology still in its infancy, it seems like a good idea to target these low-hanging fruit before handing over end-of-life decisions to mindless machines.
Fear can be paralyzing. Fear of death, fear of making decisions, fear of regret. We do not envy the surrogate decision-makers plagued by those possibilities. But abdicating that role is not the solution. The only way out is through.
As physicians, we help patients, families, and surrogates navigate this territory with eyes wide open. Like most fundamental human experiences, it is a painful but profoundly rewarding journey, and not one to be put on autopilot. To paraphrase an old line, “The answers, dear reader, lie not in a computer, but within ourselves.”
Dr. Anthony DiGiorgio, DO, MHA, is an assistant professor of neurosurgery at the University of California, San Francisco, and a senior research fellow at the Mercatus Center at George Mason University.