Picture a mathematician sitting on a jury, determined to find the right verdict. A witness stands up and solemnly says: “I swear to tell the truth, the whole truth and nothing but the truth.” The mathematician wonders – how do we know we can trust this guy? Is there a way we can always figure out whether someone is telling the truth or not? What if I asked for a formal proof of every statement they made? Would that work?

Sadly, the answer is no, not even in principle. Mathematics will never revolutionise the judicial system. We’ve known this since 1931, when Gödel published his famous Incompleteness Theorems, which show that there can never be a formal proof system that proves the truth, the whole truth, and nothing but the truth.

The theorems show that you cannot have an axiomatic formal deduction system that can decide the truth of *any* sentence by proving it (or disproving it) from the axioms, without ever proving both a statement and its negation. Even worse, one of these unprovable sentences is the sentence “I will never prove both a statement and its negation”. If a formal system stands up in court to defend its honour and says this, it’s lying.

(A statement is a logical sentence that has a well-defined truth value – it is meaningfully either true or false; for example, “For every natural number, there is a prime number bigger than that number”.)

Gödel’s insight was that you could make up self-referential sentences that assert their own unprovability, by defining a system that allows numbers to encode logical sentences that talk about numbers. The second part was the complicated part, and the reason the theorems are seen as 80 pages’ worth of esoteric mathematical logic. Once you have it, the first part is very easy.

Fortunately, unlike Gödel, we in 2021 can look back on decades of people messing around with computers, which means that the following statements (which Gödel had to prove from scratch) are things we take for granted:

- Computer programs can be represented as numbers (e.g. binary source) or strings, and there are logical sentences we can say about them which are either true or false.
- A formal axiomatic system can be defined by a computer program that examines strings of symbols and checks for syntactic validity according to the rules of the system.
- Code can be data and data can be code.
- You can write programs that run on their own source code as input, and vice versa.
- Given a deterministic program that produces a certain output from a certain input, the stack trace and a log of each variable update can be converted into a formal proof that the program, on that specific input, produces that specific output.
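To make the first and third bullets concrete, here is a toy Python sketch (my own illustration, not part of the original argument): a program’s source text is just bytes, and bytes are just one big number in base 256, so code really is data.

```python
# Toy illustration: a program is a string, a string is bytes,
# and bytes are just one (very large) natural number.
source = "print('hello')"

# Encode the program as a single natural number.
n = int.from_bytes(source.encode("utf-8"), "big")

# Decode the number back into the program text and run it.
decoded = n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")
assert decoded == source
exec(decoded)  # prints: hello
```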

*This is all we need.*

**Theorem 1 (The First Incompleteness Theorem)** No formal system strong enough to talk about Turing-complete programming languages can be both consistent (never proving both a statement and its negation) and complete (always proving either a statement or its negation).

**Proof** Let F be any formal system that can talk about programs. We will prove this theorem by explicitly constructing a sentence defined in such a way that if F can prove the sentence, it can also prove its negation, and vice versa. This means that either F is inconsistent, or that sentence is something F cannot prove the truth or falsity of.

Consider the following deliberately contrarian program, which we call R (for J. Barkley Rosser, who came up with it, strengthening Gödel’s original theorem, which was slightly weaker than our version):

> *Given an input A, search through all strings in shortlex order:*
>
> *if you find a formal proof in F of the sentence “A running on the input A never halts”, halt.*
>
> *if you find a formal proof in F of the sentence “A running on the input A halts”, go into an infinite loop.*
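The quoted description translates almost line for line into Python. This is only a sketch: `is_proof_in_F` stands in for F’s actual proof checker (mechanical, but not specified here), and the two-symbol alphabet is a placeholder for F’s real symbol set.

```python
from itertools import count, product

ALPHABET = "01"  # placeholder for F's actual (finite) symbol alphabet

def shortlex_strings():
    """Yield every string over ALPHABET: shorter strings first,
    and strings of equal length in dictionary order."""
    for length in count(0):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

def is_proof_in_F(candidate: str, sentence: str) -> bool:
    """Stand-in for F's proof checker: a purely mechanical procedure
    that checks whether `candidate` is a valid F-proof of `sentence`."""
    raise NotImplementedError

def R(A: str) -> None:
    """Rosser's contrarian program R."""
    for candidate in shortlex_strings():
        if is_proof_in_F(candidate, f"{A} running on the input {A} never halts"):
            return  # halt
        if is_proof_in_F(candidate, f"{A} running on the input {A} halts"):
            while True:  # go into an infinite loop
                pass
```

Note that R takes a program’s source text A as input, so “R running on the input R” means calling R on its own source – exactly the code-as-data trick from the list above.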

Our sentence will be S = “R running on the input R never halts”.

Now suppose a proof of S in F exists. Then if we run the program R on input R, it will find this proof and halt. But then we can use this very program trace (which ends by halting) as a proof that “R running on the input R halts” – which is exactly the negation of S!

Similarly, if a proof of not(S) = “R running on the input R halts” existed, R running on R would find this proof and go into an infinite loop. But again we could use the program trace (up to the point where it enters the loop) as a proof that “R running on the input R never halts” – which is S!

Thus the only way to avoid the inconsistency of proving both S and not(S) is for neither of these proofs to exist, which means that R running on input R will keep searching through strings forever. Thus (if F is consistent) R actually never halts, and S is a true statement – but one that F can never prove is true.

**Theorem 2 (The Second Incompleteness Theorem)**: No consistent formal system F (strong enough to talk about Turing-complete programming languages*) can prove its own consistency.

**Proof.** The proof of Theorem 1 showed that if F is consistent, then R running on R actually never halts. In other words, we proved the statement “F is consistent implies S”. We simply formalize the proof of Theorem 1 to get a proof (in F) of “F is consistent implies S”.

But then if F can prove “F is consistent”, putting the two together means that F can prove S!

But we also know from Theorem 1 that if F is consistent, it *can’t* prove S. The only way out of this potential contradiction is that if F is actually consistent, it cannot prove that fact.

Whew. That was intricate, but short. So why did Gödel need 80 pages? He was also proving the following amazing theorem:

**Theorem 3**: Elementary arithmetic (addition and multiplication with integers) can be made into a Turing-complete programming language.

**Proof**: No way I’m touching this one. The essential idea is to use unique prime factorization to encode numbers and logical symbols as other (very, very long) numbers and find fixed points. Quanta has an explainer.
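To get a flavour of the prime-factorization trick, here is a toy encoder (my own simplification, nowhere near Gödel’s actual construction): a sequence of symbol codes becomes the single number 2^c1 · 3^c2 · 5^c3 · …, and unique factorization lets us decode it again. It assumes every symbol code is at least 1, so no trailing exponents are lost.

```python
def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for a toy)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(symbol_codes):
    """Encode [c1, c2, ...] as 2**c1 * 3**c2 * 5**c3 * ..."""
    g, gen = 1, primes()
    for code in symbol_codes:
        g *= next(gen) ** code
    return g

def decode(g):
    """Recover the exponents by repeatedly dividing out each prime."""
    codes, gen = [], primes()
    while g > 1:
        p, e = next(gen), 0
        while g % p == 0:
            g //= p
            e += 1
        codes.append(e)
    return codes

assert encode([3, 1, 2]) == 2**3 * 3**1 * 5**2 == 600
assert decode(600) == [3, 1, 2]
```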

*There’s an interesting question hiding in the statement of Theorem 2 – what if a formal system were so weak that Gödel’s First Theorem didn’t apply to it, but strong enough to talk about itself in some way – could it then prove its own consistency?

Surprisingly, the answer appears to be yes! This MathOverflow link discusses a paper on Self-Justifying Axiom Systems, which can kind of prove their own consistency by using weaker versions of standard addition and multiplication.

*Credits: This post is basically a much shorter version of Sebastian Oberhoff’s wonderful article Incompleteness Ex Machina, which inspired my lecture on Incompleteness at Monsoon Math this year.*