The portrait above might be that of Thomas Bayes... but probably not.
Perfect! I wanted to talk about evidence, after all.
When I was in high school, my history teacher would always ask: “What evidence do you have of that?” That may be the single most useful lesson I picked up in school. He didn’t care how “true” your statement was, and would simply move on to the next person if you had no evidence. You could say the most ridiculous thing, and still get a hearing if you could provide evidence. Of course, the next step was to assess the evidence to see if it was credible, and any evidence which did not pass muster would be discarded.
Take, as a quick illustration, an anti-vaxxer stating that there is a causal link between vaccines and autism. As I have previously discussed, this is (to be clear) bullshit, but bear with me. Most of the time, no evidence is provided, so a good response may be “Ignore. No evidence.”
When pressed, many people will mention Andrew Wakefield. Ok. This can be considered evidence, at least. A published study of... twelve children? Which could not be replicated? Which was later repudiated by 10 of the 12 co-authors over conflicts of interest, and then fully retracted? And he was struck from the UK medical register? (All sources in the original post, noted above)
Uh, yah. I think we can safely pass on Wakefield as a credible source.
On The Skeptics Guide to the Universe (https://www.theskepticsguide.org/podcasts), Dr Steven Novella frequently talks about approaches to dealing with discussions of this sort. People will sometimes start raising a bunch of “evidence” (usually vague anecdotes or even vaguer references to “research” done by “experts”), and essentially try to Gish gallop you. A good approach is to ask for their “best” example, discard anything which is vague, discard anything without evidence, and then evaluate that example. If that “strongest” example is demonstrated to be weak, their entire argument is suspect. (And no – a large volume of “eyewitness” accounts, without corroborative evidence, cannot be taken as evidence of a specific hypothesis, particularly in the absence of a reasonable explanation for why no corroborative evidence is available.)
I recently encountered a post describing some anti-vax “research”, and decided to dig a bit. The anti-vax community has learned over the years, and have developed strategies to try and “defend” their positions. Sadly, it’s not a coincidence that these strategies are almost always dishonest. If they were honest, they would be providing credible evidence for their position. It’s very sad to think of the many people who are being misled by a few charlatans and cranks, and sadder still to think of those who will be hurt by this disinformation, particularly if they are given influential positions in government...
At any rate, the post described a new study claiming some harm from vaccines. Ok. Evidence to review! But I’m not an epidemiologist, or a doctor, so this is where Bayes comes in.
Thomas Bayes was a statistician, philosopher, and Presbyterian minister, who is famous for (surprise, surprise) Bayes’ theorem. This provides a rule for conditional probabilities and led to the development of Bayesian inference, a technique for statistical analysis. Math aside, the concept is wonderfully applicable to critical thinking and general reasoning. The idea is that the probability of a hypothesis can be estimated, then refined as more information becomes available. Each new piece of information is assessed not only on its own merits, but also in light of our “priors”.
As more information is gathered, you can then “nudge” the confidence level up or down. Think of an old-fashioned balance scale as an analogy, where each fact is a grain of sand. As you accumulate information, the scale tips one way or the other, leading us to change the way we assess new information.
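For readers who want the math behind the analogy, here is a minimal sketch of a single Bayesian update. The numbers are invented for illustration: we start at 50%, then see a piece of evidence we judge three times as likely if the hypothesis is true than if it is false.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    marginal = numerator + p_evidence_if_false * (1 - prior)
    return numerator / marginal

# Hypothetical numbers: a 50% prior, evidence that is 3x as
# likely under the hypothesis (0.6) as under its negation (0.2).
p = bayes_update(0.5, 0.6, 0.2)
print(f"{p:.0%}")  # the posterior rises to 75%
```

One modestly favourable observation nudges us from 50% to 75% – a grain of sand on the scale, not a verdict.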
As an example, consider the OPERA experiment’s 2011 announcement of neutrinos which appeared to be travelling faster than light.
Looking at this in Bayesian terms, our balance scale has about fourteen quadzillion (a huge number I just made up) grains of sand on one side, and we have just added one grain of sand to the other side. Our first thought should probably be to suspect an issue with the experiment or the measurement. Or, put another way, is it really likely that one number on one experiment just disproved the last hundred years of physics?
No.
To no one’s surprise, it turned out that the results were incorrect due to equipment failures, and the grain of sand can be removed from the scale.
That’s really the difference between scientists and pseudo-scientists. Every pseudo-scientist wants to be Galileo, while every real scientist wants to find the truth.
So, back to my review of the article claiming to be about vaccine harm. I won’t use actual names, as this is an exercise in Bayesian thinking, rather than a “debunk”.
As I’ve looked into vaccine denial previously, my initial reaction was unprintable, but I want to be open-minded, so let’s assign a “prior probability” of 50%, which I consider extremely (even ridiculously) generous.
First, I checked the source of the study. The publication claims to be peer-reviewed, but appears to be new, small, and have a minimal “impact factor” (a measure of how frequently a journal’s papers are cited by other scientists). Not encouraging, but let’s give it a pass: +0%.
None of the authors appear to be epidemiologists... One is a physician, one appears to have a Master’s degree in public health, and the third is an author. Not a good sign for a paper on epidemiology: -5%.
The study was funded by a foundation named after (and run by) one of the authors, and I could find no reference to it aside from the foundation’s own website. Not good, but not necessarily bad, except that anti-vax groups sometimes create “foundations” to generate misleading papers: -10%.
The lead author (the physician who runs the foundation) has a whole Wikipedia page, which describes him as promoting COVID-19 and other disinformation, provides links to sources demonstrating this, and notes that a medical board has recommended that his board certification be revoked due to his promotion of COVID-19 disinformation. Very bad sign: -25%.
Looking at the study itself, it claims to be a review study (i.e., not original research), but seems to simply be combining quotes and information from other studies, without providing clear context, all while making vague judgmental statements with no clear connections. This sounds like nothing more than anomaly-hunting, which attempts to chip away at the credibility of a position by raising issues which are irrelevant or explainable in other ways. Bad sign: -10%.
So, continuing.... Erm, wait. I seem to have run out of numbers... My initial (very generous) 50% appears to now be at 0%.
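The tally above can be sketched as a running score. The deltas are the ones assigned in the text, and the score is clamped so it never drops below zero:

```python
# Each red flag from the review, with the (admittedly informal)
# penalty assigned to it in the text above.
adjustments = [
    ("new journal with minimal impact factor", 0),
    ("no epidemiologists among the authors", -5),
    ("funded by the lead author's own foundation", -10),
    ("lead author known for COVID-19 disinformation", -25),
    ("anomaly-hunting 'review' with no clear analysis", -10),
]

confidence = 50  # the (very generous) prior, in percent
for reason, delta in adjustments:
    confidence = max(0, confidence + delta)
    print(f"{confidence:3d}%  after: {reason}")
```

To be clear, real Bayesian inference multiplies likelihoods rather than adding percentages; this additive version is just the grains-of-sand analogy made concrete.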
Obviously, this is an oversimplification for illustrative purposes, and nothing is certain, so read 100% as (to paraphrase Stephen Jay Gould) ‘confident to such a degree that it would be perverse to withhold provisional assent’, and 0% as a level at which the evidence can safely be discarded as meriting no further attention.
It’s important to note that this approach can and should be applied to sources as well, as far down as necessary. What degree of confidence can we put in our research sources? We should never trust anything 100%, but some organizations have earned a degree of trust – they should still be cross-checked whenever possible, but can be “trusted” for a first approximation. Over time, you can refine your list and adjust your confidence levels. It’s not a coincidence that credible sources usually show some degree of objectivity, issue corrections when they make mistakes, and generally agree. In contrast, non-credible sources usually use biased language, rarely issue corrections, and frequently contradict themselves.
Funny... it’s almost as if being honest and consistent are correlated in some way... Hm...
Cheers!