WASHINGTON — Is it possible to create an algorithm that not only accurately predicts which COVID-19 patients will be hit hardest by the virus but is also accepted and trusted by clinicians? Tim Strebel and his colleagues at VA and the National Artificial Intelligence Institute think so, and they are putting their work to the test in a pilot project that could help VA physicians make quicker, more accurate decisions on how to prioritize COVID patients.

After a nine-year stint in the Army, which included two tours in Iraq and Afghanistan, Strebel began his VA career in an entry-level position purchasing prosthetics and other medical devices. While in that job, he took a side interest in computer programming and soon recognized that software could automate tasks that, at the time, ate up hours of a VA staff member’s day.

He developed software that expedited the billing process for home oxygen users as well as the ordering of eyeglasses. That work earned him two consecutive VA “Shark Tank” Awards, which honor VA employees doing innovative work, and the software is now being developed for an enterprise-wide rollout.

Today, Strebel is an associate group practice manager for informatics at the Washington, DC, VAMC.

“My job is to use data to improve care and access,” Strebel explained. “For example, we have a heart failure clinic, and there are certain points of care we provide to make sure patients aren’t readmitted. I create a report to show how well we’re doing that. I find areas where we’re not meeting care needs, try to trend those over time, and [that data] informs leadership how to provide better care.”

When COVID-19 patients began arriving at VA hospitals, Strebel immediately had an eye on the numbers. 

“My part was to try and capture all of our data that I could—COVID positive tests, hospitalizations, lengths of stay. Very quickly, I had a pretty robust data set.” 

A few years earlier, Strebel had taken an interest in machine learning, and he began wondering whether he could use the data coming into the DCVAMC to build a prognostic algorithm, one that could predict mortality risk for COVID patients.

“Eventually I ran out of computing power, so I went to the VA Innovation Hub and asked about computing resources,” Strebel said. “I got my cloud computing power, but they also connected me with the National AI Institute. They took me under their wing and helped take what was just emerging for me as a skill and pushed me over the edge of being pretty good at it.”

The AI Strebel has developed produces a report with a 120-day mortality risk score for COVID patients, and it does so using two models. The first looks at conditions known before the patient arrives at the hospital: risk factors such as BMI, age and comorbidities already present in the patient’s electronic health record (EHR). The second adds lab work and vital signs taken at admission, and it has proven to be the more accurate of the two.
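The article doesn’t describe the modeling stack, so the following is only a minimal sketch of what a two-model setup like this might look like. The feature names, the gradient-boosting choice, the CSV extract and the outcome column are all hypothetical.

```python
# Hypothetical sketch of a two-model, 120-day mortality risk setup.
# Feature names, model choice and data source are assumptions,
# not details from the VA project.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Model 1: risk factors known before the patient arrives (from the EHR).
PRE_ADMISSION = ["age", "bmi", "n_comorbidities"]
# Model 2: the same factors plus labs and vitals taken at admission.
AT_ADMISSION = PRE_ADMISSION + ["spo2", "respiratory_rate", "creatinine"]

df = pd.read_csv("covid_admissions.csv")  # hypothetical data extract
X_train, X_test, y_train, y_test = train_test_split(
    df, df["died_within_120d"], test_size=0.2, random_state=0
)

models = {}
for name, cols in [("pre-admission", PRE_ADMISSION), ("at-admission", AT_ADMISSION)]:
    models[name] = GradientBoostingClassifier().fit(X_train[cols], y_train)
    risk = models[name].predict_proba(X_test[cols])[:, 1]  # mortality risk score
    print(f"{name} model AUC: {roc_auc_score(y_test, risk):.3f}")
```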

The AI is being piloted at a number of VA sites, and Strebel expects to know within a few months whether the algorithm is accurate and clinically useful. 

However, even if the algorithm proves accurate, getting clinician buy-in can be difficult, especially for a computer program that appears to do their thinking for them.

“Clinicians in general don’t trust AI,” Strebel declared. “There have been papers written about that. There have been huge blunders [with previous AI projects], and clinicians hear about that.” 

However, even if doctors don’t trust the program’s prognostication, the AI could be very helpful in other ways. The program automatically sifts through the EHR, pulls out all known COVID-19 risk factors and presents them in an easy-to-read format for busy clinicians. It also weights those factors, showing how much each one contributed, positively or negatively, to the AI’s prediction.

“We provide those values for the clinician to look at, and they can disagree or argue,” Strebel explained. “At the end of the day, humans have a ton of biases. And an AI can have biases, too. But if it’s performing well, we ought to trust the AI. Maybe not at face value, but give it some credence.”
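The article doesn’t name the attribution method, but signed per-feature contributions of this kind are commonly computed with SHAP values. A minimal sketch, continuing the hypothetical models above:

```python
# Hypothetical continuation of the earlier sketch: signed per-patient
# feature contributions via SHAP (the attribution method is an
# assumption; the article only says the factors are weighted).
import shap

model = models["at-admission"]          # from the sketch above
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X_test[AT_ADMISSION])

# For one patient, list each risk factor with its signed contribution,
# largest magnitude first: the kind of ranked display a clinician can
# scan, then agree or disagree with.
patient = 0
ranked = sorted(
    zip(AT_ADMISSION, contributions[patient]),
    key=lambda kv: abs(kv[1]),
    reverse=True,
)
for feature, weight in ranked:
    print(f"{feature:>18}: {weight:+.3f}")
```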