Originally published by our sister publication General Surgery News

Early in my time in the Air Force as a flight surgeon, I was piloting a two-seater T-33 jet trainer on a night flight when my instructor told me to fly without instrument guidance, make several rolls, stop on command and tell him what I saw looking out the cockpit canopy. I said, “There are stars above and city lights below.” He told me I was wrong, that what I thought were stars were the city lights, and vice versa. I was flying upside down.
I was instructed to look at my instrument panel, an artificial intelligence monitor, which told me that, indeed, I was upside down. I rolled the plane 180 degrees, right side up, following the AI needle, but for the next few minutes my vestibular system contradicted reality. The lesson I learned was that in the modern world, you may have to trust AI over your own perceptions. On another day that year, a fighter pilot in a supersonic aircraft failed to do so and crashed into a mountain.
A simplified definition of AI is the use of computers to expand human interventions. The term AI first came into common usage in 1956, when it was embodied in primitive robots built to perform simple tasks. Of interest, these early robots resembled the drawings of humanized automatons made by Leonardo da Vinci in 1495. Evolving computer sophistication is responsible for our contemporary world in almost every area: travel, industry, agriculture, conveniences, entertainment and, of course, warfare. Beyond sheer speed and multitasking, computers have been taught to analyze and to come to what we call rational conclusions—in essence, to reason. However, the growth of AI capability has terrified many, on the assumption that a mechanism that can not only outperform but also outthink its creators is something to be feared.
That fear of AI has arisen in the arts, provoked not only by the proliferation of plagiarism and forgery but also by the computer’s potential to outdo the painter, the composer and the writer in their own elements. In science and medicine, by contrast, except for the critical issues of plagiarism and the theft of another’s work, repetition of outcomes is actually required. Replicability is the basic tenet of the scientific method: outcomes must be reproducible by several investigators to be accepted as factual. However, scientists also fear that AI might introduce false, and possibly harmful, information, a real concern for science and medicine. This concern is not new; it existed long before AI.
When I first came to the University of Minnesota, Dr. Naip Tuna, an extraordinary cardiologist, was writing the computer code for today’s ECG machines. These universally used instruments are the product of interpretive machine analysis, an AI function. Advancing AI diagnostic sophistication has given us CT, PET and MRI. AI has further enabled the development of therapeutic interventions, such as targeted, image-guided radiation therapy; minimally invasive robotic-guided surgery; and virtual reality instruments for use in mental and physical rehabilitation.
AI has moved into more personal interactions with humans. Computer-operated machinery can anticipate human actions and, drawing on an array of stored memory, instantly make data available for review and assessment, as well as initiate a complementary response or a counterresponse. These capabilities form the basis of video games and chess-playing computers. If I had placed the plane on autopilot during my test flight, it would have righted itself without my intervention. If the fighter pilot who crashed had done the same, his plane would have automatically climbed to an altitude that cleared the mountain.
A popular science fiction concern today is that AI machines will “revolt” against their creators and destroy humankind, and that humans will become extinct in a world of self-sustaining robots and “thinking” consoles. This hypothesis credits AI with human attributes, the worst of them. Could AI, for example, create superbugs more efficient at eliminating our species than any mechanical army? That would happen only if a human operating AI, now or in the future, were to initiate such a program.
In contrast to AI, let us examine natural stupidity. Unfortunately, there is an abundance of it in our world, perhaps a preponderance. In medicine, we hope that every physician is intelligent, or at least competent. But that may not be the case. When I was still in active academic practice, conducting patient rounds, I asked a medical student for his thoughts on a patient’s differential diagnosis and how he would proceed to narrow the options. He whipped out his iPhone to consult an algorithm. I told him to put the device away and to draw on the knowledge base of his nearly four years of medical training to make his own analysis of the variables. He proved that he had little knowledge of established facts and, even when prompted, could not produce a reasonable thought sequence. After graduation, this student became someone’s doctor, treating afflictions and counseling fellow human beings. Fortunately, after a patient consultation, this doctor can postpone diagnosis and therapy while waiting for laboratory and imaging results, giving him time to go to his iPhone, consult the algorithms and then come to a conclusion for the patient. In essence, AI may be the patient’s ghost doctor. In some instances, medical AI might prevent the physician from crashing into an unappreciated mountain.
It has become common practice to place medical specialists into a service line dedicated to a disease entity—for example, a colon cancer service line consisting of a gastroenterologist, an oncologist, a surgeon, a radiotherapist, one or more nurse practitioners, and a stoma therapist. The surgeon would, for the most part, be kept in the OR to make money for the group. The group head, to whom the surgeon is responsible, is therefore usually not a surgeon; in turn, the group head is responsible to a CEO or to a dean, who probably also comes from another discipline. Thus, service line physicians have lost most of the clinical unity, shared knowledge, experience and directional leadership traditionally concentrated in a clinical department. In surgery, the days of the charismatic departments of Halsted, Cushing, Wangensteen, Lillehei and Varco are gone. To compensate for that loss, the service line often relies on AI systems.
Medical and scientific research has long profited from AI, which markedly speeds processes of analysis and answers questions of correlation almost instantaneously. Some of this AI is the product of “big data,” with all of its advantages as well as its fallacies. The AI machine that can think alongside, or beyond, the human innovator will propel research into the future at a faster rate than was achievable in the past.
Animal advocates have long protested the experimental use of various animal species in academia and industry. By modeling experimental conditions and projecting outcomes, AI has helped limit animal experimentation in research, at a substantial monetary saving. The fact that, in nature, confounding variables influence outcomes should not be used as an argument that computer analysis and projection are artificial and misleading. On the contrary, it is an argument for AI analysis, which can provide an accurate and precise answer to the specific scientific question asked, isolated from confounding, unaccounted-for variables.
Will AI-aided research also produce false outcomes? Of course! Quantitatively, however, I believe those negative outcomes will be fewer than what has been, is and will be produced by natural stupidity. We have also encountered research papers written by an AI system, such as ChatGPT. If produced under the same statistical and ethical supervision required of human-written research, such papers may offer a major advantage, rather than a detriment, to the dissemination of knowledge. Currently, 60% or more of research outcomes never see publication because the investigators do not have the knowledge or the time to write an acceptable journal submission.
Finally, the detractors of AI offer the specter of unemployment. In fact, reemployment may be the more accurate term. When the covered wagon gave way to the railroad, the railroad to the car, and the car to the plane, there were massive shifts in employment; there was also, however, a massive increase in employment. In medicine, AI may well lead to fewer general physicians. There are internet companies today that offer a paying caller medical advice from an AI robot doctor. The patient of the future may actually establish a relationship with his or her robodoc. Today there are fewer practicing physicians per 1,000 population in the United States (2.6) than in comparable nations (3.8). I have always thought this trend was negative, but AI may lower the cost of U.S. healthcare and perhaps even improve the medical advice being offered. The smaller number of practicing physicians may also benefit from AI guidance in solving difficult diagnostic and therapeutic problems. Such a change in physician availability could mitigate the American doctor shortage, caused, at least in part, by our expensive medical schools’ inability to graduate a supply sufficient to meet the postulated national demand.
Any addition of intelligence to a profession is desirable. Protecting the status quo and obstructing the evolution of AI are antithetical to the medical profession’s credo of striving for advancement in diagnostics and therapeutics. Let us, therefore, welcome AI, because it is inextricably entwined with medical progress.
Dr. Buchwald is a professor emeritus of surgery and biomedical engineering, and the Owen H. and Sarah Davidson Wangensteen Chair in Experimental Surgery, at the University of Minnesota, in Minneapolis. His articles appear every other month.