
The impact of AI on the medical profession has begun

Artificial intelligence has already touched many areas of the medical profession, from administrative functions such as staffing and the creation of discharge instructions and care plans, to patient-facing customer service chatbots, translation services and medical record documentation, and even clinical functions that assist with diagnosis in fields such as radiology.

Although AI is still in the early stages of adoption, an American Medical Association (AMA) study finds physicians have already formed opinions about the use of artificial intelligence in health care, termed augmented intelligence by the AMA, with some excited and others concerned.

The August 2023 survey drew 1,081 respondents, including 420 primary care physicians and 661 specialists, 776 employed doctors and 305 physician practice owners. In addition, 525 were categorized as tech adopters and 556 as tech-averse.

While only 38% said they currently use AI in their practices, 65% saw an advantage to AI, with the greatest enthusiasm surrounding tools that can help reduce administrative burdens such as documentation (54%) and prior authorization (48%).

Forty-one percent said they were equally excited and concerned, while 30% were more excited than worried and 29% more concerned than excited.

Whether or not physicians choose to embrace the new tools, there’s no turning back the clock, with AI set to revolutionize the health care industry as other technologies have done in the past.

“While it’s not the standard of care right now, that will likely be the case in the future,” said Brant Poling, founder of the national trial firm Poling Law in Columbus, Ohio. “A lot of practices have already begun incorporating it into their operations.”

And though no AI-related medical malpractice lawsuits have been filed yet, Poling said it’s only a matter of time before both adopters and resisters face claims.

“Plaintiff attorneys will seek to make a case that its use or absence contributed to a client’s negative outcome,” Poling said. “So much of medicine relies on clinical judgment, and provider error is one of the main sources of claims.”

One example: If a physician relies on an AI algorithm to diagnose a myocardial infarction, or heart attack, and orders a procedure that results in complications, and it later turns out the patient did not have a heart attack, the physician is exposed to a lawsuit, Poling added.

“On the other hand, if the patient did suffer a myocardial infarction and was not treated appropriately because an AI algorithm was not employed, the doctor is also exposed,” he said.

On Sept. 27, Poling, together with Paul Greve Jr., senior director of healthcare risk solutions at Markel Specialty, and Lisa Richardson, senior area claims manager at Trinity Health, gave a presentation on the issue at the fall conference of the Ohio Society of Health Care Risk Managers (OSHRM) and the Society for Ohio Health Care Attorneys (SOHA) in Columbus.

The presentation, “The MPL Risks and Benefits of Using Artificial Intelligence in Healthcare,” addressed a host of topics, including AI’s potential to promote workflow efficiencies and reduce diagnostic errors, along with steps that can be taken proactively to mitigate risk.

“Although the tools continue to advance, the most important thing a physician needs to remember is that AI alone cannot be relied upon,” Greve said. “The physician must double-check everything to flag errors.”

An example: One of AI’s more common uses now is drafting medical notes for patient records, because it can capture the conversation between doctor and patient and summarize the visit, Greve added.

“While this can help the physician focus more on what the patient is saying, these notes must be reviewed before they are entered into the chart,” he said.

AI also has the potential to help eliminate diagnostic errors, Greve said. For example, a physician could input a patient’s medical history and symptoms and conduct a search for all possible conditions.

“It could spur ideas that the physician had not considered, but again it must be double-checked for accuracy,” Greve said.

According to a March 15 press release from Hackensack Meridian Health, the network has various AI-enabled capabilities in production as pilots and under development, including solutions to help radiologists prioritize the review of critical cases.

The health system is also engaged in a pilot to help physicians detect advanced kidney disease earlier and another that helps optimize operating room scheduling so care is not delayed.

Students at Hackensack Meridian School of Medicine are already learning how to integrate AI into their training, while prioritizing ethics and patient safeguards.

Robert Garrett, CEO of Hackensack Meridian Health, stated the health system “must commit to deploying AI safely through effective governance,” adding that “it will never replace human intervention and oversight.”

What are the biggest concerns that physicians have in choosing to incorporate AI into their practices?

Thirty-nine percent of those who participated in the AMA study were worried about the impact on the patient-physician relationship, and 41% expressed concerns about patient privacy.

In all, 87% of respondents said data privacy assurances and not being held liable for AI model errors were very or somewhat important to advancing adoption. In addition, 86% want to see AI use covered by standard malpractice insurance, and 84% want its safety and efficacy validated by a trusted entity.

Annie Matincheck, chief strategy officer at Positive Physicians Insurance Company, agrees, explaining that there could be increased liability and corresponding coverage needs related to AI in the practice of medicine, as well as in the areas of cybersecurity and data privacy.

“It is important to note that the use of AI is not currently considered the standard of care, as it is still too new and not yet widely adopted,” Matincheck said. “However, as its role in medicine continues to expand, this could change, further emphasizing the need for physicians to actively review, analyze and validate output to ensure safe and effective patient care while recognizing these are tools to complement, not replace, clinical judgment.”

Matincheck said AI has various applications, including the potential to enhance administrative efficiency and prioritize patient cases, but its true value hinges on successful implementation.

“Similar to electronic medical records, effective AI adoption necessitates selecting a vendor that provides a high-quality, adaptable product along with comprehensive training, support and ongoing maintenance,” she said. “It also requires fostering a culture of innovation to ensure smooth integration and high adoption rates.”

Matincheck said AI introduces additional complexities in patient care, necessitating rigorous validation and security measures to build trust in its outputs among both physicians and patients.

Regardless of how AI is deployed, Matincheck cautioned that it heightens cybersecurity risks in health care, already a prime target due to its sensitive data and need for uninterrupted patient care.

“Cyberattacks on patient records or medical devices, including AI systems, can jeopardize patient health and even be life-threatening, allowing hackers to demand high ransoms by exploiting the critical nature of health care services,” she said. “Our company has transitioned from including small limits of cyber coverage in our medical professional policies to partnering with specialized carriers that offer a separate and more robust coverage solution, including proactive tools for monitoring and preventing breaches. This change reflects the increasing significance of cybersecurity in health care and the need for tailored policies that address these evolving risks more effectively.”

What steps can be taken to mitigate the risk?

First and foremost, Poling said, medical professionals must be aware of the technology’s flaws, including the phenomenon known as AI hallucination, in which some systems generate inaccurate results and frame them as fact.

Speech recognition technology is another area of potential concern since it’s not 100% accurate, Poling said.

Greve said health care professionals should review all available studies and conduct onsite tests of AI tools to determine the benefits and risks before implementation.

He said facilities should also secure indemnification agreements from manufacturers to cover potential flaws that result in claims.

“I would recommend that hospitals establish committees with defense attorneys, insurance brokers and other risk management players to establish guidelines and courses of action to mitigate loss should injuries occur,” Greve said. “With specific federal law and regulations likely a long way from being enacted, it will be up to physicians and hospital administrators to lay the initial groundwork to ensure the safety and efficacy of these new tools.”

Written by Sherry Karabin
