Lloyd W. Klein
Chicago, Illinois, United States
Making good judgments that are patient-centered and evidence-based seems straightforward when viewed from the executive perch, but the practitioner in the trenches knows that, despite an extensive knowledge base and vast experience, the myriad decisions, large and small, that constitute daily practice pose abundant opportunities for error and misjudgment. As a departmental quality officer, I am often asked to assess the results and decisions of others. I am often troubled that the current “swing” of the pendulum is toward decision-making grounded in rigid, prescribed guideline adherence, which has substantial policy benefits, and away from individual patient choice, which does not. Most importantly, I worry that miscommunication between those who make the decisions and those who appraise them may create a serious predicament: incorrect physician quality assessments.1
How doctors don’t make decisions
A distinction must be drawn between flawed decisions and expected or unpredictable undesirable results; the latter cannot realistically be anticipated or obviated. It is the first of these that we seek to eliminate. Was an incorrect analysis of the patient’s condition made, was a study result inaccurately applied, or were the patient’s apprehensions not given sufficient credence? To untangle these possibilities, there must be a complete understanding of how physicians actually make therapeutic decisions.
A quixotic notion of the way doctors make clinical judgments emanates from the conception of scientific inquiry. The idea is this: after a correct diagnosis is definitively reached, all of the treatment options are considered, the risk:benefit ratio for each is evaluated, the patient’s goals and fears are elicited, and a treatment plan is tailored individually to optimize outcomes and comfort while simultaneously limiting peril. But the experienced physician perusing this depiction will acknowledge that every step in this summary has inherent imprecision and, further, does not reflect how choices are made in reality.2-4 There is wide variability in how physicians evaluate symptom severity and interpret test results. There are frequently differences of opinion as to whether the results of a particular clinical trial pertain to an individual patient. Consequently, the application of existing knowledge to patient management is often ambiguous. Often, a poor judgment is blamed on fatigue or on a failure to consider simple but easily overlooked issues, such as a recent medication or a question posed to the patient using a word that is not understood. If an unexpected or untoward event then occurs and a root cause analysis is undertaken, comparing what actually happened to an idyllic fable creates an unsatisfying challenge: since the ideal is rarely achieved even when things go well, we cannot truly understand (excluding an overt error) why a bad outcome was experienced.
The impression that benefits and risks can be quantified and compared objectively is not usually apposite and does not reflect the method physicians use.3,4 The rhythm of medical decision-making is not that of a rational process in which every possibility is overtly considered and its likelihood painstakingly calculated. Developing a probabilistic model for every case would be time-consuming, would require precise data about the relative incidence of outcomes in various clinical circumstances, and would assume that all of the possibilities can be quantified. In fact, in most complex medical situations, both good and bad outcomes have uncertain likelihoods and depend, to some extent, on factors that are not typically captured. For example, local expertise may be an important consideration at a particular institution. Patient frailty may influence decisions, yet it is never directly evaluated in clinical trials.5 Decision trees might be advantageous in theory, but an accurate probabilistic approach to diagnosis has never been successfully created6,7 and has only been outlined for treatment strategy.6,8-11 It is therefore unrealistic to presume that a computer program could calculate the best strategy, because a) there are too many variables to consider, and b) therapeutic options change too fast to develop sufficiently accurate probabilities.
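To see why, consider a minimal sketch of the expected-utility calculation such a probabilistic model would demand. Every number below is entirely hypothetical; the point is that even a toy two-option comparison requires branch-by-branch probability and utility estimates that no trial reports at the level of the individual patient.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one strategy."""
    return sum(p * u for p, u in outcomes)

# Hypothetical numbers -- real cases rarely come with estimates this precise.
surgery = [(0.85, 0.9),   # success, good quality of life
           (0.10, 0.4),   # complication, impaired recovery
           (0.05, 0.0)]   # peri-procedural death
medical = [(0.60, 0.7),   # symptoms controlled
           (0.35, 0.4),   # symptoms persist
           (0.05, 0.1)]   # disease progression

strategies = {"surgery": surgery, "medical therapy": medical}
best = max(strategies, key=lambda s: expected_utility(strategies[s]))
```

Multiplied across the branching of real clinical scenarios, and with therapeutic options changing faster than such probabilities can be measured, the data requirement quickly becomes prohibitive.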
Most clinical trial results depend on case selection and reflect the optimum expected results in that patient population, which may or may not apply to a different group. Often some outcomes favor one strategy while other outcomes favor another. At times, a particular strategy may improve symptoms but entail higher mortality risk. Trial results that run counter to prevailing opinion are typically discounted.12 How can the resulting value judgments ever be incorporated accurately into a patient-centered model?
Practice guidelines are valuable tools, highly useful in most patients, but they rest on the underlying premise that the experts writing the document have assessed the pros and cons probabilistically and rendered a conclusive judgment. Most physicians believe that guidelines should be just that: a guide and a starting point, not the final arbiter in an individual patient. If the physician feels that the patient poses an unusual circumstance, a more individually tailored approach should be chosen. Additionally, an analysis of one guidelines document13 found that only 275 (10%) of 2711 recommendations had the level of evidence (Ia or IIIa) that would enable the use of deductive logic. Further, most physicians recognize that the uncritical application of guideline-mandated therapies can lead to bad decisions, because patient preference14,15 and exceptions always exist.
How doctors actually decide
Despite the aspiration for a scientific, rational decision-making process, treatment decisions are practically determined by three principles: 1) reliance on pattern recognition, intuition, and experience in previous cases that fit that pattern; 2) avoidance of a catastrophic outcome; and 3) the iterative generation of possible courses of action, with rapid mental simulations to compare outcomes, and the selection of the first course of action that is not rejected. Intuition is used to recognize situations and help decide how to respond, and analysis is used to verify that the intuitions are appropriate to the situation. This model16 is precisely the one used by and taught to battlefield commanders and firefighters, who must respond rapidly and do not have the luxury of weighing the pros and cons of every decision; the reasoning must be intuitive yet accurate. This method may be most obviously applicable in emergencies and during surgical procedures, but it is likely employed by all physicians in a modified form.
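For illustration only, the three principles above can be sketched as a satisficing loop. The strategies, projected risks, and threshold below are hypothetical; the essential feature is that no exhaustive comparison of all options ever occurs.

```python
def recognition_primed_decision(candidates, simulate, acceptable):
    """Return the FIRST candidate whose mental simulation is not rejected."""
    for action in candidates:              # ordered by pattern-match strength
        if acceptable(simulate(action)):   # rapid mental simulation, then verify
            return action                  # satisfice and stop
    return None                            # nothing workable: gather more data

# Hypothetical projected risk of each strategy, and a risk threshold.
projected_risk = {"strategy A": 0.50, "strategy B": 0.10, "strategy C": 0.05}
choice = recognition_primed_decision(
    ["strategy A", "strategy B", "strategy C"],  # intuition ranks A first
    simulate=projected_risk.get,
    acceptable=lambda risk: risk < 0.20,
)
```

Note that “strategy C” simulates as safer still, yet it is never examined: the loop stops at the first course of action that is not rejected, which is exactly the trade of optimality for speed described above.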
Step One – Intuition
Physicians use instinct and a few basic facts to determine a diagnosis, evaluate whether the individual patient is typical for that pattern, recall prior experience in such cases and the guidelines, and then implement a course of action. Intuition is used to generate a workable course of action, which is then considered logically to confirm it as appropriate. In other words, our mental process holds pre-conceived patterns of diagnosis and optimal treatment strategies: when a patient presents with a certain symptom and laboratory pattern, we treat in a certain manner. If the patient does not fit a typical, pre-set pattern, we ask for more data to further evaluate the situation. In this circumstance, with every new data point, we re-evaluate the pattern and mentally test whether a certain strategy is likely to work; if not, we consider another strategy or get more data. I believe that most physicians rely almost solely on their intuition, guided by their experience and little else. These recollections may be flawed: we tend to remember the best outcomes and suppress bad ones, and rarely recollect accurately the features that determined the result. Often, scant attention is paid to studies or guidelines, particularly when a study’s details and conclusions do not fit the physician’s preconception.
We often obtain information in an order that does not correspond with its relative importance, yet we must process it in relation to its value in making the judgment. Cognitive errors, in which insufficient adjustment is made when clinical data arrive in random order, are common. A physician’s initial assessment may need to be readjusted, with sufficient weight given to subsequent data or events. In that sense, doctors make decisions analytically, in a process that might resemble a meandering road. Bits of data that are unexpected and unlikely, once discovered, can reasonably alter the apparent trajectory: e.g., a patient who refuses surgery, or one who consistently has adverse effects from medications and is apprehensive of that treatment method.
Step Two – Risk aversion
Uncertainty is an indelible part of medical decision-making: every decision entails risk. Risk aversion is a reaction to uncertainty in decision-making1 and results in choices that minimize risk: the worst possible outcome is identified, and the option that minimizes its possibility is chosen. Unfortunately, that choice frequently sacrifices likely benefit. Doctors are risk averse and tend to choose the option we think limits risk, not the one that optimizes benefit; above all, we avoid a catastrophic outcome, regardless of its likelihood. “Rational” decisions aim to maximize the best outcome, which, when it works out, leads to the best results, but often requires accepting a higher-than-usual chance of a disastrous consequence.
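The contrast between these two decision rules can be made concrete with a sketch using hypothetical outcome distributions; maximizing expected value and protecting the worst case select different strategies.

```python
options = {
    # Hypothetical (probability, utility) distributions; 0.0 = catastrophe.
    "aggressive":   [(0.90, 1.0), (0.10, 0.0)],  # best upside, real chance of disaster
    "conservative": [(0.90, 0.6), (0.10, 0.5)],  # modest benefit, no catastrophe
}

def expected_value(dist):
    return sum(p * u for p, u in dist)

def worst_case(dist):
    return min(u for _, u in dist)

rational_pick = max(options, key=lambda o: expected_value(options[o]))
risk_averse_pick = max(options, key=lambda o: worst_case(options[o]))
```

With these numbers, the expected-value rule prefers the aggressive option despite its 10% chance of catastrophe, while the risk-averse (maximin) rule prefers the conservative option because its worst case is tolerable.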
In part, this behavior is strongly influenced by the medical malpractice system, in which a bad short-term result is a basis for legal action. Further, it is emotionally powerful for the physician to select a treatment strategy only to witness a complication consequent to that choice. The advent of national registries and of states with public reporting accentuates this apprehension by measuring only the short-term downside. The same quality assurance methods that physicians implement for self-assessment, though well-intentioned, have the unintended consequence of rewarding risk aversion by equating complications with lack of skill, which is often inaccurate.
It might seem that risk aversion would lead to fewer tests or operations that pose hazard to the patient, decreasing overutilization of unnecessary procedures. However, in many cases, physicians perceive that not performing these procedures may lead to increased medical morbidity or increased malpractice risk; hence, risk-averse behavior paradoxically can lead to riskier health decisions. Many studies document international and US regional disparities in how patients are treated, with no discernible effect on outcome despite marked differences in cost; these differences may find their origins in this concern. Thus, least-risk choices do not necessarily lead to optimal benefit for patients, doctors, or the medical system.
Step Three – Iterative mental simulation
Expert physicians generate three to five hypotheses early in the evaluation of a complicated patient. Early hypothesis generation improves the speed and accuracy of medical evaluations by leading to targeted questioning and testing.2-4 A good doctor recognizes the pattern better and faster, much as a musician immediately comprehends the prosody of a complex piece. An experienced clinician modifies decision-making to enhance speed and simplicity: rather than performing an exhaustive search for the best answer to a problem, thinking stops when a sufficiently adequate solution is reached.

This method of decision-making has the positive attributes of being quick and effective in complex situations. The downsides are that extensive experience is required to correctly recognize the salient features and model solutions, and that patterns can be missed in easily misidentified circumstances. An unusual variant of the pattern might go unrecognized by even an excellent doctor. An inexperienced physician faced with an unusual case may not realize it and make the wrong choice. With more experience, the ability to recognize patterns is enhanced, and more frequently the first option considered will work.
While doctors tend to be risk averse, patients are loss averse. Patients simplify their choices, focusing on and exaggerating the differences between their options and ignoring their similarities (prospect theory). Generally, people frame a decision from a reference point (memory of past health vs. the current state); they tend to prefer less risk when they perceive they are seeking a gain and accept more uncertainty when trying to avoid a loss. Whether a non-physician can be taught to assess costs, risks, and benefits objectively with no clinical experience, particularly in a matter involving their own health, is a matter of conjecture.
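As an illustration, the asymmetric value function from prospect theory can be sketched as follows. The parameter values are the commonly cited empirical estimates from Tversky and Kahneman’s work and are used here only to show the asymmetry, not to model any real patient.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Subjective value of an outcome x relative to the reference point."""
    if x >= 0:
        return x ** alpha            # diminishing sensitivity to gains
    return -lam * (-x) ** alpha      # losses weighted about twice as heavily

gain = prospect_value(10)    # subjective value of a gain of 10
loss = prospect_value(-10)   # subjective value of an equal-sized loss
```

With these parameters, a loss carries roughly 2.25 times the subjective weight of an equal-sized gain, which is why framing the same option relative to past health (a loss to avoid) rather than current health (a gain to seek) changes the risk a patient will accept.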
That the physician must listen to the patient is a cliché of medical school teaching, yet practice guidelines and appropriate use criteria expressly exclude such considerations. Rather, physicians select the proper treatment strategy based on the evidence, and when the data clearly show that a particular option is superior, that approach should be presented to the patient as the preferred one. Sometimes there is more than one reasonable choice, and the various alternatives present relative advantages and disadvantages that should be discussed in detail. Since every patient is unique, and nearly all have preconceived opinions concerning treatment choices, these premises oversimplify decision-making. Another aspect of this problem is that many physicians believe they haven’t the time to explain the pros and cons of each decision to every patient and then ask for the patient’s thoughts. Some physicians find it easier and less time-consuming simply to dictate to less medically knowledgeable patients than to explain nuances and options rationally.1,17
Common errors in evaluating physicians’ decisions
In judging the correctness of a medical decision that resulted in an adverse event, it should be recognized that hindsight is perspicacious: bad outcomes always appear predictable after the fact. Decisions are often based on an estimate of probability, a particular concern when recommending high-risk surgery or invasive procedures, and may result in incorrect advice.
Decisions that do not adhere to existing guidelines are not necessarily proof of poor judgment, although nonadherence should raise questions. “Guideline-mandated therapy” does not exist in reality, as all decisions must be made in the context of the patient. An outside agency cannot rationally determine whether a decision is appropriate without considering all of the factors that influenced that choice.
Having described the mechanism by which doctors make decisions, what does that tell us about assessing the judgments of others?
• Guidelines are a good starting point for rationally evaluating the evidence base, but adherence is not a sufficient evaluation of physician quality. A physician must explain and document why the patient was suitable for the approved approach, or why a different approach was necessary. A failure to adhere to a guideline in a particular circumstance does not establish that the decision was flawed.
• Exceptions must always be anticipated. Guidelines should be written with recommendations and conclusions that allow for exceptions and reasonable alternative therapeutic choices, when they exist. In particular, patient preference may well lead to decisions that vary.
• Clinical decisions depend on a myriad of factors, and criteria based on one particular test result may oversimplify decision-making.
• Risk and uncertainty can never be fully eliminated from medical decisions. Both must be described in detail to the patient, and the physician must weigh them in the process.
• A rigid approach (i.e., algorithms that seem to eliminate uncertainty) merely camouflages the ambiguity in clinical decision-making and cannot be relied on to produce the best decisions.
The dreaded consequence of our misperceptions as to how doctors make therapeutic decisions is that others will attempt to take the decisions out of our hands. If we are unable to explain the process to patients and third party payors, they may prefer a formulaic answer that eliminates ambiguity and doubt. If physician report cards, third party assessments of hospital and physician quality, ties between outcomes and reimbursement, and the public reporting of outcomes become a reality, serious repercussions may result.
1. Klein LW. How do interventional cardiologists make decisions? Implications for practice and reimbursement. Journal of the American College of Cardiology – Cardiovascular Interventions 2013; 6: 989-991.
2. Brush JE. How cardiologists think. http://www.cardioexchange.org/voices/how-cardiologists-think/
3. Brush JE Jr., Radford MJ, Krumholz HM. Integrating clinical practice guidelines into the routine of everyday practice. Critical Pathways in Cardiology 2005; 4: 161-167.
4. Brush JE. Does intuition lead to bad medical decisions? http://www.cardioexchange.org/voices/does-intuition-lead-to-bad-medical-decisions/
5. Klein LW, Arrieta-Garcia C. Is patient frailty the unmeasured confounder which connects subacute stent thrombosis with increased peri-procedural bleeding and increased mortality? Journal of the American College of Cardiology 2012; 59: 1760-1762.
6. Szolovits P, Patil RS, Schwartz WB. Artificial intelligence in medical diagnosis. Annals of Internal Medicine 1988; 108: 80-87.
7. Cooper GF. A diagnostic method that uses causal knowledge and linear programming in the application of Bayes’ formula. Comput Methods Programs Biomed 1986; 22: 223-237.
8. Brause RW. Medical analysis and diagnosis by neural networks.
9. Pauker SG, Kassirer JP. The threshold approach to clinical decision making. New England Journal of Medicine 1980; 302: 1109-1117.
10. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974; 185: 1124-1131.
11. Pauker SG, Kassirer JP. Therapeutic decision making: a cost benefit analysis. New England Journal of Medicine 1975; 293: 229-234.
12. Lamas GA. Changing minds. Clinical trials and the inertia of opinion. Cardiosource WorldNews 2013; October: 6.
13. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 2009; 301(8): 831-841.
14. Moussa ID. The practice of interventional cardiovascular medicine: “evidence-based” or “judgment-based”? Catheterization and Cardiovascular Interventions 2008; 72: 134-136.
15. Ofri D. Uncertainty is hard for doctors. http://well.blogs.nytimes.com/2013/06/06/uncertainty-is-hard-for-doctors/
16. Ross KG, Klein GA, Thunholm P, Schmitt JF, Baxter HC. The recognition-primed decision model.
17. Fischhoff B, Brewer NT, Downs JS. Communicating Risks and Benefits: An Evidence-Based User’s Guide. http://www.fda.gov/downloads/AboutFDA/ReportsManualsForms/Reports/UCM268069.pdf
LLOYD W. KLEIN, MD is Professor of Medicine at Rush Medical College and Director of Quality, Cardiology Section, Advocate Illinois Medical Center, in Chicago, Illinois.