In this post we’ll learn one of the most elegant logical weapons that mathematicians can wield. Every mathematician develops a toolbox for proving theorems, and this tool is one of the most powerful. Its Latin name is *reductio ad absurdum*, and its English name is proof by contradiction.

**How it works**

Suppose we’re trying to prove that a statement (call it A) is true. Also suppose that there are other statements that have already been proven true. It doesn’t really matter what these other statements are, except for the fact that **they have already been proven true**.

What we do is suppose that the statement A is false. There is nothing wrong with **supposing** something, so long as we never confuse what we have **supposed** with what we have **proven**. From this **supposition** we can start to derive other “valid” statements (scare quotes here to remind us that the subsequent statements are valid **only if the supposition is correct**, i.e., only if A is **actually false**). Using logical deductions from our supposition, we want to arrive at a conclusion that contradicts something else that we know is true.

For example, call one of our already-proven-true statements B, and then **suppose** that A is false. If we can use this supposition to show that B is false, then we have a contradiction, because B is both true (already proven true) and false (just derived to be false)! This is clearly not okay, and it tells us that the initial **assumption** was wrong. What was the initial assumption? That A was false. Thus, A must be true! Think about this one; it’s important. In the meantime, here is a very trivial example (a better example is the proof that there are infinitely many primes).
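To see the method at work on something less trivial, here is a small computational sketch of Euclid’s proof that there are infinitely many primes (the better example mentioned above). The supposition is that some finite list contains **all** primes; multiplying them together and adding 1 produces a number that none of them divides, yet that number must have some prime factor, which is therefore missing from the list. Contradiction! (The function name `witness_outside` is just an illustrative choice, not standard terminology.)

```python
# Euclid's proof by contradiction, made computational.
# Supposition: the finite list `primes` contains ALL primes.
# Then n = (product of the list) + 1 leaves remainder 1 when divided
# by every prime in the list, so none of them divides n. But n > 1
# must have SOME prime factor -- one missing from the list.
from math import prod

def witness_outside(primes):
    """Given a finite list of primes, return a prime not in the list."""
    n = prod(primes) + 1
    # None of the listed primes divides n (each leaves remainder 1).
    assert all(n % p == 1 for p in primes)
    # Find a prime factor of n by trial division; the smallest
    # divisor greater than 1 is always prime.
    d = 2
    while n % d != 0:
        d += 1
    return d  # d is prime, and d cannot be in the original list

print(witness_outside([2, 3, 5]))              # 31 = 2*3*5 + 1, itself prime
print(witness_outside([2, 3, 5, 7, 11, 13]))   # 59, a prime factor of 30031
```

Note that the supposition fails for *every* finite list, which is exactly what “there are infinitely many primes” means.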

**Example**

Suppose it is already proven that 0 = 0, and that 0 doesn’t equal anything else, i.e., **zero only equals zero**. (This is obvious, but remember that we’re just looking at the logic here, not the math.) Suppose it is also known that we can add and subtract numbers in the normal way. We can now ask the following trivial question: does 1 = 2? Clearly the answer is no, but how can we **prove** it? (Obviousness is not a mathematical proof.)

First, we assume the opposite: **suppose** that 1 = 2. If that’s the case, then let’s see what happens when we subtract 1 from both sides (there’s nothing wrong with doing that). We get 0 = 1. But we **already know** that zero equals zero and nothing else! Therefore the conclusion that we’ve drawn here, namely that 0 = 1, is a contradiction! This means our supposition was impossible. What was our supposition? That 1 = 2. Thus it is the case that 1 does not equal 2.
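The toy argument above can even be written as a machine-checked proof. Here is a minimal sketch in Lean 4 (assuming a recent toolchain; `Nat.succ.inj` is the core library’s way of “subtracting 1 from both sides” of an equation between successors):

```lean
-- Proof by contradiction that 1 ≠ 2, mirroring the argument above.
theorem one_ne_two : 1 ≠ 2 := by
  intro h                              -- supposition: 1 = 2
  have h' : 0 = 1 := Nat.succ.inj h    -- "subtract 1 from both sides"
  exact absurd h' (by decide)          -- contradicts "zero equals only zero"
```

In Lean, `1 ≠ 2` is literally defined as “`1 = 2` implies `False`,” so the proof *is* the contradiction: assume the supposition, derive the absurdity.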

Yes, this is trivial, but we use this method all the time for proving cooler stuff. For example, it’s used in lesson 13 for showing that there are, in fact, **at least** two fundamentally different kinds of infinity, and again in lesson 14 to show that there are **infinitely many** different kinds of infinity!