Lockdown Learning: 10 Logical Fallacies

Luke Vassor
8 min read · Mar 28, 2021

Beware of faulty reasoning.

According to Wikipedia and the Oxford Advanced Learner’s Dictionary of Current English:

“A fallacy is reasoning that is logically incorrect, undermines the logical validity of an argument, or is recognised as unsound. The use of fallacies is common when the speaker’s goal of achieving common agreement is more important to them than utilising sound reasoning.”

At some point we have all knowingly, or unknowingly, committed a logical fallacy in an effort to argue for or against a position. Indeed, some people use them intentionally, something you will often see in emotionally-charged debates be they on TV, YouTube or podcasts.

Here are 10 examples of logical fallacies.

1. Strawman: Misrepresenting someone’s argument to make it easier to attack. This is done by exaggerating the argument to an extreme length and then attacking the extreme version, or taking a single aspect of the argument, wrenching it from context, and attacking that aspect alone rather than the original argument.

Example: “Hey Person X, did you know that doctors in Scotland have actually prescribed time in nature to patients? It has been shown to ease symptoms of anxiety and depression. It would likely benefit all of us to spend more time outdoors.”

Person X: “Oh, so we should all move into teepees and live in the woods forever? That would work well, wouldn’t it?”

This quite obviously ignores the original premise, augmenting it beyond intention.

2. Gambler’s fallacy: This is the false belief that ‘runs’ occur across statistically independent events such as spins on a roulette wheel or rolling dice.

Often, this stems from a confusion between empirical (or experimental) probabilities and theoretical probabilities. Empirical probabilities are collected via experiment, e.g. if I wanted to calculate the probability of rolling a 6 on a fair 6-sided die, I could roll it multiple times and record my observations. For demonstration’s sake, suppose I rolled 6, 1, 2, 6, 4, 3. This would suggest my empirical probability of rolling a 6 is 2 rolls/6 rolls = 1/3 (or 33.333…%).

Obviously, this differs from theoretical probabilities, which are calculated without experiment, and simply use information about the situation at hand. We know there are six outcomes, each equally likely since the die is fair. So the theoretical probability of rolling a 6 is the favourable outcome/total number of outcomes = 1 side/6 sides = 1/6 (or 16.666…%).

The problem here is that empirical probabilities only converge on (move towards) theoretical probabilities as the number of trials becomes larger and larger. This is an example of the law of large numbers, first proved by the Swiss mathematician Jakob Bernoulli. Essentially, our experiment of 6 rolls is nowhere near enough data; any good experiment and statistical test necessitates a large sample size. If we rolled 10,000 times, we would expect the empirical probabilities to approximately match the theoretical ones, i.e. each number would be rolled around 1,666 times (16.666…% of 10,000).
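This convergence is easy to demonstrate with a short simulation. A minimal Python sketch (the function name and roll counts are illustrative choices, not from the article):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def empirical_prob_of_six(n_rolls):
    """Roll a fair six-sided die n_rolls times; return the observed frequency of sixes."""
    sixes = sum(1 for _ in range(n_rolls) if random.randint(1, 6) == 6)
    return sixes / n_rolls

theoretical = 1 / 6  # 16.666...%

# The empirical frequency drifts towards 1/6 as the sample grows (law of large numbers).
for n in (6, 100, 10_000, 1_000_000):
    print(f"{n:>9} rolls: empirical = {empirical_prob_of_six(n):.4f} "
          f"vs theoretical = {theoretical:.4f}")
```

With only 6 rolls the estimate can be wildly off (as in the 1/3 example above); by a million rolls it sits within a fraction of a percent of 1/6.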

Example: With gambler’s fallacy, this often plays out in the belief that “just one more roll/spin” will get you the number/colour you need, or that because the winning number has not shown up for a long time, it is likely to soon. This mistake considers rolls to be dependent events when they are in fact independent. One roll does not affect the outcome of the next roll.

Crucially for the gambler, the law of large numbers guarantees stability in the long run for the averages of random events: whilst a player in a casino may win money in the short term from a “lucky streak”, the long-term averages of the various games are tilted in the casino’s favour, guaranteeing it a long-term profit. This is the part gamblers miss in games of pure chance, like roulette. Casinos wouldn’t stay open for long if it were truly possible for customers to keep beating the house.
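The independence of spins can also be checked empirically. A hedged sketch (European-wheel odds of 18 red, 18 black and 1 green are assumed; the streak length of five is an arbitrary illustration):

```python
import random

random.seed(0)  # reproducible run

def spin():
    """One spin of a European roulette wheel: 18 red, 18 black, 1 green pocket."""
    return random.choice(["red"] * 18 + ["black"] * 18 + ["green"])

n = 200_000
spins = [spin() for _ in range(n)]

# Overall frequency of red.
p_red = sum(s == "red" for s in spins) / n

# Frequency of red immediately after a run of five blacks.
after_streak = [spins[i] for i in range(5, n)
                if all(s == "black" for s in spins[i - 5:i])]
p_red_after_streak = sum(s == "red" for s in after_streak) / len(after_streak)

print(f"P(red) overall:        {p_red:.3f}")
print(f"P(red) after 5 blacks: {p_red_after_streak:.3f}")
```

Both frequencies hover around 18/37 ≈ 0.486: a streak of blacks tells you nothing about the next spin.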

3. Black-and-white thinking, or false dichotomy: When the actor presents two alternative states as the only possibilities.

Example: This is very relatable for anyone in a state with polarised politics. Picture a political debate. A conservative or right-wing supporter might accuse anyone with slightly liberal viewpoints of being a socialist (or some derogatory term), blindly dismissing any credible liberal policy ideas. Conversely, a liberal or left-wing supporter might accuse anyone with conservative viewpoints of being something equally derogatory but right-wing, dismissing any credible conservative policy ideas. This polarises politics and perpetuates a “with us or against us” mentality, dispensing with any semblance of a spectrum or mosaic of political perspectives, in which it is possible to hold centrist, left-leaning, and right-leaning views on different topics. Essentially, black-and-white thinking puts people and ideas in a box and does not consider halfway solutions or compromises.

4. Perfect solution fallacy, related to black-and-white thinking: The idea that a solution should be rejected because some part of the problem would still exist after the solution is implemented.

Example: I heard an example of this in a debate on cryptocurrencies and the rising popularity of Bitcoin. One side argued that because cryptocurrencies don’t solve every one of the problems inherent to fiat currencies or centralised finance, they won’t work, missing the point that progress in any domain is gradual and iterative and doesn’t need a silver-bullet solution.

A less nuanced example: “anti-drink-driving campaigns are pointless because people will still drink and drive.” This is fallacious because it ignores the premise of the proposed solution, overlooking the fact that 100% eradication of drink driving is not the goal.

5. False cause fallacy: This relates to the well-known adage drilled into anyone who has taken a statistics class: correlation does not imply causation. Simply because two phenomena correlate does not mean one caused the other. One can see the danger of reaching such conclusions by looking at amusing plots (see more here).

Example:

An example of spurious correlation (source: https://www.tylervigen.com/spurious-correlations)

6. Begging the question (Petitio principii): A circular argument in which the question answers itself. This occurs when the argument’s premise assumes the truth of the conclusion rather than acting as evidence for it, i.e. assuming without proof the very position in question.

Example: Lucy: “Why didn’t you include John’s poetry in the student publication?” Anne: “Because it was judged as not sufficiently worthy of publication.”

Evidently, the answer itself begs the question.

7. No true Scotsman, or an appeal to purity: Used as a way to dismiss relevant criticisms of an argument. In response to an unwanted counterexample, the original arguer offers a modified assertion that conveniently excludes the counterexample.

Example: Person X declares that Scotsmen do not put sugar on their porridge. Person Y points out that he is Scottish and puts sugar on his porridge. X retorts that “no TRUE Scotsman would”.

8. Appeal to authority: A fallacy in which the opinion of an authority is used as evidence for a claim in place of empirical evidence itself.

Example: A famous case is that of leading zoologist Theophilus Painter. In 1923, he declared that humans had 24 pairs of chromosomes, a conclusion drawn from poor data containing contradictions. Scientists then propagated this “fact” on Painter’s authority until 1956, despite later studies showing the correct number to be 23 pairs. Even textbooks containing photos of 23 pairs falsely reported the true number to be 24.

9. Appeal to probability: taking something for granted because it would probably be the case (or might possibly be the case).

Example: Something can go wrong, therefore something will go wrong. Simply because something is possible doesn’t mean it is likely.

This is a good example of confusing possibility and probability — a million dollars in cash may or may not be on my bed when I get home, but that doesn’t mean the probability either way is 50%.

10. Conjunction fallacy: Another example of misunderstanding probabilities. This is the belief that two events occurring in conjunction with one another is more likely than either event occurring on its own.

Example: The classic example cited in the literature is the 1983 “Linda problem”, an experiment run by the psychologists Amos Tversky and Daniel Kahneman. Participants were given the following description of a fictitious person named Linda:

“Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.”

Which is more probable?

  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.

The majority of participants chose option 2.

This highlighted an apparent shortcut in human reasoning. We know from probability theory that the probability of two events occurring together (in “conjunction”) is always less than or equal to the probability of either one occurring alone, i.e. it is mathematically more likely for Linda to have either one of those attributes than both at the same time.

E.g. If we choose a very low probability that Linda is a bank teller (5%) and a high probability that Linda is a feminist (95%):

Pr(Linda is a bank teller) = 0.05, Pr(Linda is a feminist) = 0.95

then assuming they are independent events, the probability that Linda is a bank teller and Linda is a feminist

Pr(Linda is a bank teller and a feminist) = 0.05 × 0.95 = 0.0475 (4.75%)

The probability of two independent events occurring together is the product of their individual probabilities: P(A∩B) = P(A) × P(B) (Source: Wikipedia)

This is simple arithmetic: multiplying two numbers that are both between zero and one cannot produce an answer greater than either number. The probability of having both these traits (4.75%) is lower than even the low probability of being a bank teller (5%).
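The arithmetic can be written out directly (probabilities taken from the worked example above; note that the independence assumption is only needed for the multiplication, while the final inequality holds for any two events):

```python
p_teller = 0.05    # chosen low, as in the example
p_feminist = 0.95  # chosen high, as in the example

# Under independence, the joint probability is the product of the two.
p_both = p_teller * p_feminist  # 0.0475, i.e. 4.75%

# The conjunction can never be more probable than either event alone.
assert p_both <= min(p_teller, p_feminist)

print(f"P(teller and feminist) = {p_both:.4f}")
```

However the individual probabilities are chosen, option 2 in the Linda problem can never beat option 1.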

The authors suggest that people commit this fallacy because they use a shortcut, or heuristic, they call “representativeness”: option 2 simply seems more “representative” of Linda, despite contradicting the mathematical certainties. Perhaps this is an evolved social mechanism for quickly sizing up someone’s general personality. This work contributed toward Kahneman’s 2002 Nobel Memorial Prize in Economic Sciences (Tversky had died before it was awarded).

For a deeper dive on logical fallacies check out the following resources:
