What a survey of placebo use really tells us


One of the key concepts essential to science-based medicine is the placebo: what it is, what it isn’t, and how it complicates our evaluation of the scientific evidence. One of my earliest lessons after I started following the Science-Based Medicine blog was that I didn’t understand placebos well enough to even describe them correctly. Importantly, there is no single “placebo effect”. They are “placebo effects”: a range of variables that can include natural variation in the condition being studied, psychological factors and subjective effects reported by patients, as well as observer bias by researchers studying a condition. All of these, when evaluated in clinical trials, produce non-specific background noise that needs to be removed from the analysis. Consequently, we compare the active treatment against the placebo to determine if there is an incremental benefit, applying statistical tests to estimate the likelihood that any difference between the intervention and placebo groups is distinguishable from random chance. Outside the observational setting of the clinical trial, we can’t expect the observed “placebo effects” to persist, as they’re partially a consequence of the trial itself. A more detailed review of placebos is a post in and of itself, so I’ll refer you to resources that describe why placebo effects are plural, why placebo effects are subjective rather than objective, and why there is no persuasive evidence to suggest that placebo effects offer any health benefits. What’s most important is the understanding that placebo effects are a measurement artifact, not a therapeutic effect.
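The comparison described above, asking whether the treatment-versus-placebo difference is distinguishable from chance, can be sketched with a simple permutation test. This is a minimal illustration, not a description of any particular trial's analysis, and the symptom-score numbers are invented:

```python
import random
import statistics

def permutation_test(treatment, placebo, n_permutations=10_000, seed=0):
    """Estimate how often a difference in group means at least as large as
    the observed one would arise by chance alone (two-sided)."""
    rng = random.Random(seed)
    observed = statistics.mean(treatment) - statistics.mean(placebo)
    pooled = list(treatment) + list(placebo)
    n = len(treatment)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # relabel patients at random
        diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_permutations

# Invented symptom-score improvements; note the placebo arm also
# "improves" -- that background noise is what the comparison removes.
treatment = [5, 7, 6, 8, 9, 6, 7, 8]
placebo = [4, 5, 3, 6, 5, 4, 5, 4]

diff, p = permutation_test(treatment, placebo)
```

Only the increment over placebo (`diff`), judged against its chance probability (`p`), counts as evidence of a specific treatment effect.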

Placebo effects are regular topics at this blog, because an understanding of placebo effects is essential to evaluating the evidence supporting (so-called) complementary and alternative medicine, or CAM. As better quality research increasingly confirms that the effects from CAM are largely, if not completely, attributable to placebo effects, we’ve seen the promoters of CAM shifting tactics. No longer able to honestly claim that CAM has therapeutic effects, “treatments” such as acupuncture or homeopathy are increasingly promoted as strategies that “harness the power of placebo” without all the pesky costs or side effects of real medical interventions. But this is simply special pleading from purveyors and promoters. Unable to wish away the well-conducted trials that show them to be indistinguishable from placebos, they instead are spinning placebo effects as meaningful and worthy of pursuit – ideally with your favourite CAM therapy. Again, I’ll refer you to posts by David Gorski and Steven Novella who offer a more detailed description of how negative results can be spun to look positive. Because CAM’s effects are indistinguishable from placebo, we should not invest time and resources into pursuing them – we should instead focus on finding treatments that are demonstrably superior to placebo.

But what if physicians are already using placebos widely in practice? Setting aside the ethical issues for now, widespread placebo usage might suggest that physicians believe that placebos are effective treatments. And that’s the impression you may have had if you skimmed the medical headlines last week:


It’s time for transparency in Canadian clinical trial data

Science-based health professionals hold the scientific method in pretty high regard. We advocate for evaluations of treatments, and treatment decisions, based on the best research. We compile evidence based on fair tests that minimize the risks of bias. And we consider this evidence in the context of the plausibility of the treatment. The fact is, it’s actually not that hard to get a positive result in a trial, especially when it’s sloppily done or biased. And even when a trial is well done, there remains the risk of error simply due to chance alone. So to sort out true treatment effects from spurious ones, two key steps are helpful in reviewing the evidence.

1. Take prior probability into account when assessing data. While a detailed explanation of Bayes’ theorem could take several posts, consider prior probability this way: any test has flaws and limitations. Tests give probabilities based on the test method itself, not on what is being tested. Consequently, in order to evaluate the probability of “x” given a test result, we must incorporate the pre-test probability of “x”. Bayesian analysis uses any existing data, plus the data collected in the test, to give a prediction that factors in prior probabilities. Neglecting prior probability is part of the reason why most published research findings are false.

2. Use systematic reviews to evaluate all the evidence. The best way to answer a specific clinical question is to collect all the potentially relevant information in a structured way, consider its quality, analyze it according to predetermined criteria, and then draw conclusions. A systematic review reduces the risk of cherry picking and author bias, compared to non-systematic data-collection or general literature reviews of evidence. A well-conducted systematic review will give us an answer based on the totality of evidence available, and is the best possible answer for a given question.
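The Bayesian update described in point 1 can be sketched in a few lines. The sensitivity, false-positive rate, and prior below are invented for illustration, not drawn from any real study:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' theorem: P(true effect | positive test result)."""
    true_positives = prior * sensitivity
    false_positives = (1 - prior) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# A testing process that detects 80% of true effects, with a 5%
# false-positive rate, applied to a hypothesis with only a 1% prior
# probability of being true (all numbers invented for illustration):
p = posterior(prior=0.01, sensitivity=0.80, false_positive_rate=0.05)
# p is roughly 0.14: most "positive" results are still false positives.
```

This is why a "statistically significant" result for an implausible treatment, such as homeopathy, should move our belief far less than the same result for a plausible one.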
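The analysis step in point 2 often comes down to pooling study results. A common approach, shown here as a minimal sketch with invented effect sizes and variances, is fixed-effect inverse-variance weighting:

```python
import math

def pooled_estimate(effects, variances):
    """Fixed-effect (inverse-variance) pooling: weight each study by the
    precision of its estimate; the pooled standard error shrinks as
    evidence accumulates."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    standard_error = math.sqrt(1.0 / sum(weights))
    return pooled, standard_error

# Three hypothetical trials of one treatment: (effect size, variance).
effects = [0.30, 0.10, 0.25]
variances = [0.04, 0.01, 0.09]

estimate, se = pooled_estimate(effects, variances)
# The pooled estimate sits closest to the most precise study, and its
# standard error is smaller than any single study's.
```

The pooled answer is only as good as the set of studies fed into it, which is exactly why missing studies matter.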

In order for our evaluation to factor in prior probability, and to be systematic, we need all the evidence. Unfortunately, that’s not always possible if evidence remains unpublished or is otherwise inaccessible. There is good evidence to show that negative studies are less likely to be published than positive studies. Sometimes called the “file drawer” effect, it’s not solely the fault of investigators, as journals seeking positive results may decline to publish negative studies. But unless these studies are found, systematic reviews are more likely to miss negative data, which means there’s the risk of bias in favor of an intervention. How bad is the problem? We really have no complete way to know, for any particular clinical question, just how much is missing or buried. This is a problem that has confounded researchers and authors of systematic reviews for decades.
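The file-drawer effect is easy to demonstrate with a toy simulation. Here many trials of a treatment with no true effect are generated, and only the apparently favourable ones are "published"; the threshold and sample sizes are arbitrary choices for illustration:

```python
import random
import statistics

def simulate_publication_bias(n_studies=1000, n_per_arm=30, seed=1):
    """Simulate trials of a treatment with NO true effect, then 'publish'
    only those with an apparently favourable result."""
    rng = random.Random(seed)
    all_effects, published = [], []
    for _ in range(n_studies):
        treat = [rng.gauss(0, 1) for _ in range(n_per_arm)]
        ctrl = [rng.gauss(0, 1) for _ in range(n_per_arm)]
        effect = statistics.mean(treat) - statistics.mean(ctrl)
        all_effects.append(effect)
        if effect > 0.3:  # crude 'positive result' publication filter
            published.append(effect)
    return statistics.mean(all_effects), statistics.mean(published)

true_mean, published_mean = simulate_publication_bias()
# true_mean hovers near zero, but the published literature alone
# suggests a substantial benefit.
```

A systematic review that sees only the published subset would conclude the worthless treatment works, which is why trial registration and access to unpublished data matter.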