BETHESDA, MD—“So you see a diagnostic error. That error occurs at the sharp end of medicine,” declared Dr Mark Graber, chief of medical services at the Northport VA Medical Center. “You look at where the diagnostic error occurred, you can always find a clinician, or sometimes two, to point the finger of blame at. But you don’t know what went into that clinician’s decision. What are the facility’s communication, coordination, training policies, and procedures? What are the things that led to the diagnostic error?”
While there is considerable research examining diagnostic errors, little has been done to create concrete interventions to reduce them, Graber explained to a room full of physicians at the Agency for Healthcare Research and Quality (AHRQ) national meeting last month.
He and colleagues undertook a search of the available literature, pulling in articles from the fields of business, health, psychology, military systems, and engineering, in the hope of finding tested interventions to reduce diagnostic errors whose benefits they could weigh and then test in an ambulatory care setting.
“We were hoping the articles would include an outcome measure [for the tested interventions]. We insisted on that,” Graber explained. “We were not looking for improvements in [tests and technology]. That’s a given: that as technology improves, diagnostic error will fall.”
A breakdown of recorded diagnostic errors shows that 28% were cognitive errors (wrong action or thinking on the provider’s part), 19% were system errors (problems with facility systems), and 46% were a combination of the two, with the rest considered no-blame errors. Of the 1,000 articles that Graber and his colleagues sorted through, only 37 had tested interventions—32 focusing on cognitive errors and five focusing on system-related errors. About 120 studies had suggested interventions, but did not test them.
“It was surprising to us that there were only five system-related interventions,” said Dr Hardeep Singh, program chief at the Houston VA HSR&D Center of Excellence and co-investigator of the study.
Also surprising was that the interventions focused on only three of the five dimensions of the diagnostic process, those five being: patient-provider encounter, diagnostic tests, follow-up and tracking, referrals, and patient-related factors, such as patients not adhering to recommendations or not coming to appointments.
A lot of attention has been paid to follow-up and tracking, with two tested interventions targeting delivery of test results through electronic means. “There were a lot of suggested interventions,” Singh noted. “There were recommendations to establish criteria for communication of abnormal test results and to standardize the steps involved in the flow of information. For example, if a test result comes back after 5 o’clock in the evening, what are the procedures to get those results to the right person?”
“This is low-hanging fruit,” Singh declared. “There’s a lot of literature that talks about it, but only two tested interventions.”
There were no tested interventions in the patient-related area of the diagnostic process, Singh said. “We’re left with one big question: How do we better engage patients? What’s the efficacy of engaging patients in the decision-making process? How do we design an intervention that can actually test that question?”
Singh pointed out that while there’s a federal mandate stating the results from any mammogram must be given to a patient within 30 days, that mandate does not exist for other tests. “You could have a chest x-ray with an abnormality. There is no mandate for the radiologist to send it to the patient. They send it to the physician. And there are definitely communication breakdowns that happen.”
Diagnostic errors that have a cognitive cause are frequently due to doctors seeing a symptom they have seen many times before, diagnosing it based on that experience, and making a quick decision without considering other possibilities.
“Most cognitive errors involve breakdowns in synthesizing the available data due to faulty context assumptions and premature closure,” Graber explained. “This is very common. Whenever we’re faced with a puzzle, when we come up with a solution that explains what we need to explain, we’re done. We’re happy. That problem is solved.”
Errors can also be caused by affective biases and environmental factors, such as stress and sleep deprivation, he added.
There have been several tested interventions designed to increase medical knowledge, such as extending training time and diagnostic events to build experience, using simulations to provide compacted experience, and increasing feedback to improve calibration and reduce overconfidence.
There were no tested interventions focused on improving clinical reasoning, Graber declared. There were many suggestions in the literature, from using debiasing techniques to improving metacognition. “There were a lot of ideas, but not many that were tested, and not many that apply to an emergency department, which was where we wanted to focus,” Graber said.
The intervention the team has decided to test is physician checklists. “Checklists are ideal for dealing with so much complexity and time pressure. Checklists can combine system-based, patient-based, and cognitive interventions,” Graber explained.
However, they have not decided what type of checklist to use, and were using their colleagues at the AHRQ conference as a sounding board for opinions. The first candidate is a general checklist. That checklist would include the following steps: obtain your own history from the patient (instead of relying on the one taken by a nurse or physician assistant); perform a focused physical exam; generate a hypothesis; pause to reflect; engage with the patient; embark on a plan, but acknowledge uncertainty.
The researchers are also looking at a syndrome-specific checklist. “Suppose a patient comes in with chest pain,” Graber said. “The physician thinks it’s pneumonia. But he has a checklist of other possible things it could be. And, with the patient, he goes through to cross everything off the list. It helps to slow the physician down and makes the patient aware that [the physician is not] 100% [sure that] it’s pneumonia, but 95% [sure that it’s] pneumonia.”
Physicians at the conference were appreciative of both checklists, but critical of how realistic it would be to incorporate them into real-world settings, where clinic doctors are hard-pressed for time and emergency rooms are described as barely controlled chaos.
“The challenge will be how to build it into the system,” Graber admitted.
Another possible drawback is increased cost. “If you shake people out of their intuitive thinking, and they do come up with more possibilities and order more tests, that will cost more money and could lead to false positives,” he said.