Law 3

Law 3: Mostly Wrong

Law 3: For every perfect medical experiment, there is a perfect human bias.

The brevity of The Laws of Medicine makes it all the more powerful. The book can be devoured in a single morning, yet it leaves the reader profoundly impacted.

Dr. Siddhartha Mukherjee challenges the reader not only to contemplate the three laws that make up the book, but to conceive of, and open one's eyes to, other laws. Today, in an expanding data-driven world, we discount our humanity. From the author’s note:

“It’s easy to make perfect decisions with perfect information. Medicine asks you to make perfect decisions with imperfect information.” 

This imperfect information is then processed through the imperfect spectacles of the human mind.  

In 1954, Dr. Richard Asher wrote that we are men of action, but every action is, and ought to be, preceded by a certain amount of thought. We should carefully examine these thought processes, as they can be faulty. He says that crooked or dishonest thinking can occur in medicine, and he classifies crooked thinking in medicine under therapeutics, statistics, causes, words, and so on. He calls the notion that a statistical relationship between two things automatically implies a causal relationship between them “perverse.” He points out, for example, that statistics would show the incidence of erythroblastosis foetalis to be twenty times less common in children of opium smokers; this obviously should not lead to prescribing opium in the maternity ward. Such spurious causal inferences are among the human biases that existed well before Dr. Asher’s publication over sixty years ago, and they remain present in today’s medical practice, as stated in Law 3.

Law 1 (“a strong intuition is much more powerful than a weak test”) and Law 3 are closely related. Law 1 can be paraphrased as “every diagnostic challenge in medicine can be imagined as a probability game.” Probability is based on objective factors such as prior organ damage and testing, but also on the physician’s history taking, instincts, and interpretation of the data. One of my favorite passages in the book is when Dr. Mukherjee quotes Dr. Bernie Fisher: “In God we trust. All others must bring data.” However, being human, our biases can change the lens through which all this data is examined. If the data collector is flawed, does that not mean his accrual, analysis, and interpretation of the information are flawed? How do we account for this? Dr. Mukherjee says that reading a study inherently involves human perception, arbitration, and interpretation, and hence introduces bias. Bias is not limited to research and those reading and analyzing the study. What bias can the subjects of the study introduce? One should consider the possibility of the Hawthorne effect: when people know they are being observed, their behavior changes. How do we account for these biases?
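The “probability game” of Law 1 can be made concrete with Bayes’ theorem, which is not spelled out in the book but underlies the point: a weak test barely moves a strong prior. A minimal sketch (the sensitivity and specificity numbers are hypothetical):

```python
def post_test_probability(prior, sensitivity, specificity, positive=True):
    """Update a pre-test probability with a test result via Bayes' theorem."""
    if positive:
        # P(disease | +) = sens*prior / (sens*prior + (1-spec)*(1-prior))
        num = sensitivity * prior
        den = num + (1 - specificity) * (1 - prior)
    else:
        # P(disease | -) = (1-sens)*prior / ((1-sens)*prior + spec*(1-prior))
        num = (1 - sensitivity) * prior
        den = num + specificity * (1 - prior)
    return num / den

# A strong intuition (prior 0.90) versus a weak test (sens 0.60, spec 0.60):
# even a negative result leaves the disease more likely than not.
print(round(post_test_probability(0.90, 0.60, 0.60, positive=False), 2))  # → 0.86
```

With a weak test, the post-test probability barely budges from the clinician’s prior, which is exactly why a strong intuition outweighs a weak test.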

What about making medical decisions? Complex decisions are often made using mental heuristics, shortcuts that reduce complexity. These shortcuts develop over time with experience and increasing knowledge in the field. Biases, however, particularly cognitive biases, can easily creep in. While in medical school, I would hear of a symptom, or learn of a treatment, and try to apply it to all similar clinical situations (anchoring bias). Even experienced physicians are susceptible to similar biases. How often do we ask what the data are for established treatments? Sodium polystyrene sulfonate (Kayexalate) was approved on the basis of two studies that barely deserve the name (no controls; confounders such as a low-potassium diet, diuretics, and other drugs that lower potassium). In 2011, the FDA issued a warning that Kayexalate was associated with colonic necrosis. For over fifty years Kayexalate was used without much consideration, until patients suffered significant complications. Did the fact that Kayexalate was approved, and was an old drug, allow this complication to be overlooked for so long?

This brings me to the practice of evidence-based medicine (EBM) and guidelines. In 1996, Dr. Sackett defined EBM as “the conscientious, explicit, and judicious use of current best evidence in making decisions involving the care of an individual patient. It integrates individual clinical expertise with the best available clinical evidence from systematic research.” How does this evidence come about? One thinks of a clinical question and the important outcomes to measure. The results are analyzed and, if found to be “good,” used in evidence-based care. This evidence is often used to deliver a doctor-defined patient agenda: a medical encounter takes place, and the health care provider focuses on what choices need to be made and carried out for the patient.

Have the research and the medical decisions incorporated the patient? Did the research take into account the experience of its subjects? These questions are at the core of the movement to expand Patient Reported Outcome Measures (PROMs) and Patient Reported Experience Measures (PREMs) in our research agendas. We must also remember that EBM looks at the care of populations as opposed to individual patients. Dr. Shyaan Goh, an orthopedic surgeon from Australia, wrote to the British Medical Journal that the clinicians are the problem when EBM adversely affects clinical judgment. The clinician might not account for the quality or applicability of the evidence, or might fail to properly observe a guideline whose rationale they do not understand.

The question asked throughout this commentary is: how do we account for these biases? How do we guard against our own human instincts that may lead us astray? I think it is imperative to focus on the central figure of the story, the patient, not the health care provider. We have to take into account the individual who is experiencing the illness: what are their goals and expectations? A patient is not just a series of questions and tests.

As for the clinicians themselves, what other advice might be helpful? Dr. Asher implored us to keep to the path of straight thinking, regardless of the destination, to avoid crooked thinking. He quoted Rudyard Kipling’s “The Elephant’s Child” to provide us with “helpers” to achieve this end:

“I keep six honest serving-men,
Their names are What and Why and When
and How and Where and Who.”

Maybe we should consider having these six helpers at our side aiding us to “hunt” and bring our biases to the forefront. Then, as Dr. Mukherjee says, we can confront bias head-on and incorporate it into the very definition of medicine. 

Commentary by Beje Thomas, Nephrologist  

NSMC intern, class of 2018  

Additional Reading:  

  1. Asher R. Straight and crooked thinking in medicine. BMJ 1954;2:460.
  2. Lehman R. Siddhartha Mukherjee’s three laws of medicine. BMJ 2015;351:h6708.
  3. Accad M, Francis D. Does evidence based medicine adversely affect clinical judgment? BMJ 2018;362.
  4. Greenhalgh T, et al. Six ‘biases’ against patients and carers in evidence-based medicine. BMC Med 2015;13:200.
  5. Goh S. The problem with evidence based medicine is really the clinicians (letter to the editor). BMJ. 2018 July 18.
  6. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ 1996;312:71–72. doi: 10.1136/bmj.312.7023.71.

Law 3: For every perfect medical experiment, there is a perfect human bias.

It is indeed nice to have three laws - recall Newton and, perhaps more famously, Asimov’s Three Laws of Robotics. A set of three is pithy and epigrammatic, and it attempts to ground us. The laws teach us humility even as research and innovation reach ever upwards. So what does Mukherjee mean by this third law?

He starts off with his fellowship in oncology. The Human Genome Project had had its great moment, and though the term ‘precision medicine’ had not even made it past a synapse, the oncology world was dealing with precisely engineered monoclonal antibodies with fabulously unheard-of outcomes. Following on the success of the tyrosine kinase inhibitor imatinib (Gleevec), there was another ‘cousin’ drug that Mukherjee and his cohort of fellows were seeing used. The fellows saw dramatic positive results in their patients - but somehow, paradoxically, and in stark contrast, the actual clinical trial showed little benefit. How did this happen? Selection bias had struck the fellows. They were handed patients from graduating fellows who had the most ‘educational value’, i.e., the patients doing well. The patients who did not do well, on the other hand, were handed back to the attending physician. Like the parable of the broken window, one has to be careful about what is not seen: patients who are lost to follow-up may be lost because they are too sick to come back. Though the experiment Mukherjee describes was not a perfect one, it does illustrate a common enough bias - bring it up the next time an experienced colleague starts off with an ‘in my experience…’ to counter your meticulously gathered data. Vinay Prasad explains responder bias, all too common in oncology, in a nice tweetorial here.
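The fellows’ experience can be reproduced in a toy simulation (entirely hypothetical numbers, not from the book): even when a drug’s effect is identical for every patient, counting only the patients who are ‘doing well’ at handover inflates the apparent response rate.

```python
import random

def trial_vs_handover(n=10_000, seed=1):
    """Toy model of selection bias: a patient 'responds' if drug effect plus
    baseline health exceeds 1. The drug effect is the same for everyone, but
    the handed-over cohort is restricted to patients with good baseline
    health (the ones with 'educational value')."""
    rng = random.Random(seed)
    patients = [rng.random() for _ in range(n)]   # baseline health, 0..1
    drug_effect = 0.4                             # identical for all patients
    responded = [h + drug_effect > 1 for h in patients]

    all_rate = sum(responded) / n                 # what the trial sees (~40%)
    # Handover filter: only patients doing well (baseline health > 0.7) are kept
    kept = [r for h, r in zip(patients, responded) if h > 0.7]
    handover_rate = sum(kept) / len(kept)         # what the fellows see
    return all_rate, handover_rate
```

In this sketch the trial sees a response rate of roughly 40%, while the filtered handover cohort responds essentially every time - the fellows’ ‘dramatic results’ reflect who was handed over, not what the drug did.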

Another example cited by Mukherjee is the radical mastectomy - Halsted’s procedure, championed by the famous Hopkins surgeon in 1882. In a clever piece of naming, the word ‘radical’ makes one imagine that the roots of the cancer have been eradicated, and it took nearly a century before the futility of the approach was revealed by a randomized controlled trial. A clever study from Giovannucci showed an example of recall bias: in women with breast cancer, a diet history taken after the cancer diagnosis seemed to suggest that high fat intake was associated with cancer. However, a dietary history taken from the same women a decade before the diagnosis showed no such association. The cancer diagnosis creates false memories. Food questionnaires, forgive the pun, should be taken with a pinch of salt.

But these are all epidemiological studies. Surely randomized controlled trials are not biased? Their entire rationale is to prevent these kinds of confounding, selection, or information biases from creeping in. However, the trial methods do count. Though Mukherjee doesn’t go into those aspects, blinding, allocation concealment, and proper randomization are some additional features of trial quality whose absence can bias even the best-laid plans. Check out the Cochrane risk of bias tool, which explains a few of these in detail. There is more to this, of course. Should one change practice on the basis of a single small trial? Enter publication bias - or the file drawer (full of unpublished negative trials) bias. How about the more important issue of generalizability, or external validity? Does a psychology study of WEIRD (Western, educated, industrialized, rich, democratic) individuals apply to all humanity? Surely not. Men and women are biologically different - but not for all conditions, and surely not in response to all therapies. The need to do trials in all subpopulations is sometimes carried too far, however, in denying effective therapies to dialysis patients. Just because a trial was done only in the general population doesn’t mean the therapy will not work in dialysis patients. Generalizability should not be an excuse to practice renalism.
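The file drawer effect can also be sketched with a toy simulation (all numbers hypothetical): run many small trials of a drug with zero true effect, ‘publish’ only those whose observed effect clears some positive threshold, and the published literature shows a spurious benefit.

```python
import random
import statistics

def file_drawer(n_trials=500, n_per_arm=20, seed=7):
    """Toy file-drawer simulation: the drug has zero true effect, but only
    trials whose observed effect clears a crude positive-only threshold get
    'published'; the negative trials stay in the drawer."""
    rng = random.Random(seed)
    effects = []
    for _ in range(n_trials):
        treat = [rng.gauss(0, 1) for _ in range(n_per_arm)]
        control = [rng.gauss(0, 1) for _ in range(n_per_arm)]
        effects.append(statistics.mean(treat) - statistics.mean(control))
    published = [e for e in effects if e > 0.3]   # crude 'positive result' filter
    return statistics.mean(effects), statistics.mean(published)
```

Across all the trials the average effect hovers near zero, but the mean of the ‘published’ subset is comfortably positive - a reader who never sees the drawer concludes the drug works.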

So Mukherjee wants us to be bias hunters, on the lookout for biases in every study. Eternal vigilance is necessary.

Summary by Swapnil Hiremath, Ottawa