Artificial intelligence is already at work in Wichita hospitals, but without much regulation

Wesley Medical Center uses the technology to speed treatment for stroke patients, scan X-rays for incidental tumors and provide a second set of eyes remotely monitoring patients.

by Suzanne King

Takeaways: 

  • AI is regularly used in health care, from helping doctors take notes during patient appointments to monitoring the availability of hospital beds.
  • People in the health care industry say the technology has potential to cut costs, reduce staff burnout and improve patient care.
  • Regulators are starting to look at how to ensure the technology doesn’t cause more harm than good, but the industry wants a hand in writing rules so they don’t stifle innovation.

Patients at Wichita’s Wesley Medical Center are cared for by nurses at their bedside — with help from a virtual care center in Denver.

Hundreds of miles away, computers monitor vital signs and lab results and send out alarms when a patient is in trouble.

Like a growing number of hospitals, Wesley leans on artificial intelligence to cut costs, offset shrinking reimbursements from insurers and compensate for a growing shortage of nurses and doctors.

At hospitals around Kansas, AI also charts staffing levels, reads CT scans, takes notes for physicians after patient exams and offers a shortcut to respond to patients’ follow-up questions.

But as AI’s tentacles reach deeper into medical practices every day, the fast-evolving technology faces only minimal regulation. And the medical industry has arguably little incentive to push for more oversight of a tool it says could transform the business of health care.

Right now, nothing stands in its way. Hospitals may not even tell patients when they’re using AI.

“Because, unfortunately,” said Lindsey Jarrett, vice president of ethical AI at the Center for Practical Bioethics in Kansas City, “no one’s really telling them they have to.”

Patient advocates warn that regulators need to step up, because biases baked into AI could cause patients real harm. But even as the industry gives a nod to supporting regulations, and even to helping develop them, it is busy trying to exploit all the cost-saving possibilities the technology has to offer. Meanwhile, patients remain largely skeptical about AI getting involved in their health care.

Why AI?

Hospitals see many reasons to roll out technology that has the potential to save time and, as a result, boost profits. Proponents also say AI brings the possibility of better care and could supercharge research into diseases and treatments.

Put simply, AI is a computer system imitating human behavior, only with vastly greater capacity for taking in and making sense of information. Potential uses, said Tony Jenkins, assistant director of IT initiatives at the University of Kansas Health System, stretch from billing offices to human resource departments to patient care.

“It looks through and parses all that data, and helps us find patterns that are actually beyond human recognition,” Jenkins said. 

Computer processing power only recently became robust enough to sift through years’ worth of hospital data or patient records and come up with something useful. Now that it can, the possibilities could be vast, Jenkins said.

Hospitals and medical clinics constantly feel pressure to do more with less, he said: to cope with the problems of an aging population, and to make better use of the roughly 17% of the economy devoured by health care costs.

“There are a million problems that exist in the industry,” Jenkins said. “Using technology to augment (staff) is always going to be at the forefront of any sort of transformation.”

AI at Wesley

Since before the COVID pandemic, Wesley Medical Center has been using AI technology known as Guardian through its virtual care center, said Andy Draper, chief information officer for HCA Healthcare’s Continental division, which includes Wichita. 

Experienced nurses staff the Denver center and watch over the computers constantly monitoring patient data coming from Wesley and other HCA hospitals. When they get an alert that they believe is legitimate, they notify nurses on the patient’s floor to step in.

Draper said the system provides another set of eyes to help nurses in the field. In addition to improving care, he said, the system saves money.

“We’re able to look at 3,000 patients at a time and we’re able to identify which patients are not doing well,” he said. “We’re able to contact the patient’s nurse or doctor to say, ‘This is what is going wrong.’ Without this tool, you would have potential misses.”

Draper said the technology also provides vital support for young nurses who may be just starting their careers.

“There’s a shortage of nurses in this country and there will be for many years,” he said. “All these tools help nurses do their jobs.”

Wesley has also deployed an AI-powered tool that cuts how long it takes the hospital to deliver time-critical care to stroke patients.

Viz.ai is an FDA-approved platform that can quickly analyze a brain scan made up of 1,200 images, pinpoint where a stroke is occurring and immediately send the images to everyone on the patient’s stroke team. That could include 20 doctors, nurses and other providers.

“The picture of the stroke is securely sent to the cellphones of all those people,” Draper said. “It’s like, a picture is worth a thousand words. But in this case it’s actually worth 20,000 words, because you’re communicating to 20 different people who all need that information.”

Draper said the technology has helped reduce “door to needle” time — how long it takes to start treatment — to fewer than six minutes from more than 30.

Wesley is also using an AI tool to monitor every CT scan for “incidental findings.” Someone who comes to the hospital for an injury and gets a scan will have that scan reviewed later by an AI system trained to spot health concerns the patient may not be aware of. For one of his coworkers, Draper said, a lung scan done for a case of COVID uncovered a lung tumor.

“It saves lives,” he said.

“Twenty-five years ago we didn’t have cellphones and we had really small (computer processing) chips,” Draper added. “They were doing everything they could with the computing power they had. Fast forward to today, and we’re able to do more things with bigger amounts of data. … It’s an improvement in care. All of these things have accelerated care quality.”

Words of warning

But not everyone thinks AI will automatically improve care. 

National Nurses United, representing nurses at two other Wichita hospitals — Ascension Via Christi’s St. Francis and St. Joseph hospitals — sees AI as a corporate attempt to cut nursing jobs.

“Nurses know that AI technology and algorithms are owned by corporations that are driven by profit — not a desire to improve patient care conditions or advance the nursing profession,” the union wrote on its website. “The hospital industry, in cooperation with Silicon Valley and Wall Street, will use AI to further its dangerous effort to displace RNs from the physical care of their patients prioritizing low-cost or free labor over patient needs.”

St. Francis and St. Joseph nurses went on strike twice last year to protest Ascension’s staffing practices. The union representing them argues that nurses, not algorithms, should be deciding how much care a patient needs and what care should be given.

Patients are also generally skeptical of AI in health care. A Pew Research Center survey last November found that 60% of Americans said they would feel uncomfortable if their health care provider relied on AI when providing care. Meanwhile, 33% said AI would lead to worse health outcomes, 38% said it could lead to better outcomes and 27% doubted it would make any difference.

Some aspects of AI used in health care are obvious to patients. For example, at KU, several hundred doctors are piloting technology known as Abridge, which records doctors’ visits with patients and then transcribes notes based on the recording.

“That’s allowing our clinicians to remain more focused on the interaction with the patients in the room,” Jenkins said. “They’re no longer having to be heads down, fingers on a keyboard.”

The hospital said the system saves doctors hours because rather than writing chart notes from scratch, they can review what the AI system came up with and edit as necessary. With Abridge, patients are informed the AI system is being used and must give verbal consent before it is turned on.

Potential for harm

But other AI uses in health care aren’t so transparent. 

Sometimes medical technology companies embed AI so seamlessly, even the doctors using it may not realize AI is involved. Neither would patients.

“They don’t have full awareness of how AI is actually embedded already so deeply into our decision-making,” said Jarrett of the Center for Practical Bioethics.

That’s a major problem when it comes to AI in health care, because so many factors that could affect care are in play, she said. Patients, and importantly doctors, should know when AI is being used so they can dig deeper and find out how the AI was developed and if there are potential biases that could affect care.

If AI-enabled products are “trained” with information that is biased in some way, they have the potential to do real harm in a health care setting. If a tool was built with data gathered from white patients, for example, it might not accurately inform doctors how to treat Black or Asian patients.

Jarrett said the pandemic, which became so entwined with discussions about embedded discrimination in the country, has broadened the conversations people are having about AI in health care. It’s made people think more about how the populations involved in training models could affect care.

“Had the pandemic not happened,” Jarrett said, “we would have continued to have conversations about AI in regards to privacy and security … but we wouldn’t have started to have conversations around bias and patient impact.”

Industry stepping in to set standards

In the absence of governmental regulation, health care providers have no reason to assume that technology was developed with patient safety or medical ethics in mind. Increasingly, the industry is taking on the role of establishing standards for how hospitals and clinicians can safely and ethically use AI.

Since 2021, the Center for Practical Bioethics, through its Ethical AI Advisory Council, has been mapping out AI standards it hopes all hospitals in the Kansas City region will adopt. Similar efforts are occurring around the country.

The group also wants health care providers to know what questions to ask when they consider using AI technology. For example, doctors need to ask: What has the technology developer done to mitigate bias? What algorithms were used to create it? At the same time, Jarrett said, patients need to ask who the technology was developed for and whether it would work on people of their age, race or gender.

“Developers creating these things in health care should be able to answer those questions,” Jarrett said. “But they’re not required to answer them today.”

Eventually, Jarrett wants a rating system that would easily tell patients whether a hospital has put the right guardrails around its use of AI.

“We have to be able to have these organizational policies where providers can say, ‘Oh, yeah, that is an AI product. And I know how to explain that to you and why I’m using it,’” Jarrett said. 

Beginnings of regulations

As recognition grows about the potential dangers of AI, government agencies, Congress and state legislators are starting to look at how to regulate it. 

Late last year, the White House issued an executive order about AI, which began by saying that the technology “holds extraordinary potential for both promise and peril.” The order specifically called out health care as an area “where mistakes could harm patients.”

Around the same time, the U.S. Food and Drug Administration updated its list of approved AI medical devices, bringing the total to 692, an increase of 170 over the previous year. The list includes machine learning devices, which have the potential to gain insight from vast amounts of data accumulated in hospitals and doctors’ offices every day.

The vast majority of approved devices, 87%, involve radiology; 7% involve cardiology; and a tiny percentage are used in neurology, hematology, gastroenterology/urology, ophthalmology, clinical chemistry and ear, nose and throat medicine.

The agency has not yet approved any devices that rely on generative AI, like the technology behind ChatGPT. But that doesn’t mean the technology isn’t in use in health care. Many AI trials remain unapproved because regulations are still nascent.

The U.S. Department of Health and Human Services established an AI office in 2021, and the agency has both encouraged the adoption of AI and called for establishing the HHS AI Council to work on governing it.

Meanwhile, a branch of HHS that regulates health IT published a rule last year that would require electronic health record vendors to display basic information about how a model was trained or developed.

“What were the patients that the model was trained on?” said Brian Anderson, a founder of the Coalition for Health AI (CHAI). “That would inform its accuracy on how it might perform on patients once it’s deployed.”

But while the rule calls for transparency, it does not specify standards; it defers to the industry on what those standards should be. And while many in the industry see a need for regulations, they also want a part in creating them.

That’s something CHAI is working on. The group, which includes industry leaders and regulators, last year issued a Blueprint for Trustworthy AI in Healthcare and is working on developing industry standards for the responsible use of AI. 

“It’s gonna be a hard needle to thread,” Anderson said. “Part of the hope is that bringing the regulators and the innovators together, that whatever approach the regulators take, it’s not going to stifle the innovation that’s already happening.”

Right now, industry is making progress establishing guardrails. But ultimately that won’t be enough, Anderson said.

“Guardrails by their very nature are voluntary,” he said. “You could crash into the guardrails or jump over them if you wanted to.”