Welcome to the Rage Machine!

Updated: Oct 23



One of my all-time favourite songs is Welcome to the Machine, by Pink Floyd. While the song appeared on the 1975 album Wish You Were Here, I first heard it in the mid-1980s. I love synthesizers, I love Pink Floyd, and I love science fiction, so it’s not hard to guess what I would want to play every time I started my computer.


Unfortunately, when I first got a computer in 1987, calling the sound-generating part of a PC a “speaker” was not technically a lie, but it was certainly “lie-adjacent”. While amusing, an 8-bit version of Welcome to the Machine would also have been rather painful, assuming I had even had access to such a thing at the time. Eventually, though, I bought a Sound Blaster (an audio expansion card), immediately recorded a clip, and set up a script to play the iconic words “Welcome to the Machine” every time the computer started.


It had been a while since I had listened to the song, but I was reminded of it when I listened to the Cyberwire podcast special episode Election propaganda part 1: How does election propaganda work?, and heard Rick Howard refer to social media as the “rage machine”. The Pink Floyd song is particularly appropriate when you consider that both the album and the song were a tribute to former Pink Floyd member Syd Barrett and a critique of the music industry: they can be read as a story of the impact of the music industry “machine” on Barrett’s mental and physical health.


Fast forward almost fifty years, and the entire music industry can be seen as a part of a vastly larger (and even more literal) machine, of which social media is a major component.


There’s enough material in that idea to support the careers of academics, artists, critics, and endless others for decades to come, but I’m most interested in the information ecosystem in which we currently live.


Over those decades, the quantity of data and information available has been increasing exponentially, but the overall quality of that data has been decreasing. In the 1970s, we had access to a relatively small amount of relatively high-quality information. Publishing was expensive, which led to significant vetting by publishers and libraries – and it was comparatively easy to judge the relative trustworthiness of sources.


Please note that this does NOT imply that the 1970s was a golden age of quality information – the comparatives in the previous paragraph are doing a lot of work.


Nowadays, through the supercomputers in our pockets (for comparison, the fastest computer in the world in 1985 was the Cray-2, whose performance was comparable to that of a 2011 iPad 2), we have access to vastly more information than was ever previously possible. Wikipedia alone has on the order of 100 times as many words as the online Encyclopedia Britannica, and that doesn’t even scratch the surface of the quantity of information available. The challenge is that the cost to post information is near-zero, and quality-checking is normally an afterthought (at best).


While there are a lot of excellent sources of high-quality information available, vast amounts of content consist of entertainment, satire, spoofs, and opinion, or are poorly researched, out of date, misleading, or deliberately false. The critical skills (both now and in the future) are learning how to vet our sources and applying critical thinking to the content we consume.


I’ve commented before on the need for critical thinking and how to identify and avoid logical fallacies, but I think it’s useful to consider information from the perspective of information security. According to the Canadian Centre for Cyber Security (CCCS), information can be broken into several types:

  • Valid: factually correct, based on data that can be confirmed, and not misleading

  • Inaccurate: incomplete or manipulated in a way that portrays a false narrative

  • False: incorrect, and with data to disprove it

  • Unsubstantiated: cannot be confirmed or disproved based on available data


Since most statements include multiple points, it’s important to break them down as far as possible, to avoid cases where valid information is used as part of a larger statement which is ultimately inaccurate, false, or unsustainable.


We can break activities which can cause harm into three main categories:

  • Misinformation: false information not intended to cause harm

  • Disinformation: false information intended to manipulate

  • Malinformation: information that is based on truth, but used to mislead


Take, as an example, the “fact” that cholesterol is bad. Our understanding of cholesterol and its relation to heart disease has evolved over the past century, and the truth is nuanced. Dr. Christopher Labos gives a summary of the “Cholesterol Controversy” on Science-Based Medicine. In the 1970s, we knew that there was a relationship between cholesterol and heart disease, and the guidance at that time was to lower your cholesterol (through diet and/or drugs). We understand more now, and realize that total cholesterol is less significant than its makeup, so the guidance now is generally to lower LDL (low-density lipoproteins, aka “bad cholesterol”) and to pay attention to the ratio between LDL and HDL (high-density lipoproteins, aka “good cholesterol”).


People who “know” that cholesterol is bad are usually just remembering that earlier guidance, and this misinformation may simply be the result of assuming that they are remembering correctly, that what they remember is still correct (i.e., has not been expanded upon or subsequently disproven), and of not double-checking to be sure. It should be noted that misinformation is not inherently malicious or harmful, but it can be exploited by bad actors to cause significant harm.


An example of disinformation might be a bad actor claiming that some remedy is a guaranteed “cure” for heart disease. Just making stuff up is entry-level disinformation.


Malinformation is where things get complicated. If our bad actor (call them Malice) were to say that it’s been “disproven” that cholesterol is bad, that’s arguably a true statement, even though it’s wildly misleading. Malice would obviously not say “we now understand more about how we process cholesterol, and saying that cholesterol is bad is overly simplistic. In fact, there are several types of cholesterol and it’s important to focus on the overall balance...” Et cetera.


No. Malice will say that it’s been “disproven”, complain about how “scientists keep changing their minds”, and then start talking about whatever they’re selling (literally or figuratively).


There is a debate technique known as the “Gish gallop”, in which a person tries to overwhelm their opponent by presenting a mass of arguments, with no regard for accuracy or strength. The point is that making an unfounded statement is quick and easy, while demonstrating that a statement is false takes time. This has been codified as Brandolini’s law, which states: “The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.”


And THAT is why “fact-checking” is so unpopular in some circles. People who know they are lying generally don’t like having attention drawn to that fact.


Getting back to social media, which thrives on engagement, the formula is simple: content which generates strong emotions leads to engagement, which leads to money and influence. And, since accuracy requires time and nuance, generating anger (engagement) through “rage bait” is very effective. But, because at least a few people care about the truth, mixing in a bit of truth makes it harder to distinguish between truth, lies, and opinion. It also erodes society’s trust in expertise and institutions, which keeps the whole vicious circle turning.


That said, there are a few techniques which can be effective. One great tactic for addressing the more extreme cases is mockery, such as a comment that came up during the recent presidential debate.


It is vitally important to learn how to confirm sources and to double-check assertions – ESPECIALLY when they support your own preferences. It’s also important to understand that there are no 100% definitive sources, merely degrees of confidence. As one example, Media Bias Fact Check provides information on media sites, including political leaning, level of factual reporting, ownership, and so on. I find it quite useful as a first pass on a site with which I am not familiar.


For instance, the above-mentioned Science-Based Medicine is listed there as having a “PRO-SCIENCE” bias rating, with a “VERY HIGH” rating for the level of factual reporting.


By vetting and cross-checking your sources, you can build an understanding of the relative credibility of different sites. It’s also important to consider the tone of the source – some organizations work hard to be as objective as possible, while others show very clear bias in their wording.


Welcome to the Machine!


Cheers!

