# Gödel's Incompleteness Theorem explains consciousness

## What are the philosophical implications of Gödel's first incompleteness theorem?

### Gödel's sentence

I would like to pause here to first state Gödel's sentence correctly and to give the proof. The original presentation dates from before the invention of the computer and is complicated. But we live 80 years later, when computers are familiar.

First, Gödel's theorem always speaks of an axiomatic system S. This is a logical system in which theorems can be proved, and proofs checked, by a computer program. You should think of Peano Arithmetic or ZFC or some other first-order theory with a computable axiom scheme (axioms that can be enumerated by a computer program).

It is assumed that such an axiom system can describe the state of a computer at any finite time. The memory of a computer is a sequence of bits, i.e. a large integer M, and performing one step is a very simple function that executes an instruction from the instruction set and changes the memory. Iterating this step finitely often yields a new memory state M'. The basic assumption on S is:

S can follow a computer program: given a fixed general-purpose computer with initial memory M, S proves, for any finite time t, that the memory at time t is the correct M'.

In Peano Arithmetic this is trivial, because you can code the memory of a computer in a very simple way, and the axioms without induction (just the axioms for computing with data) already give the result of any finite computation. So this holds for Peano Arithmetic without induction and without quantifiers. It is asking very little of an axiom system that it can follow a computation and prove that the result at each finite time is correct.
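To make this assumption concrete, here is a minimal sketch; the step function `f` and the memory encoding are invented for illustration (any instruction set would do), and the names are not from any particular formal system:

```python
def f(M):
    """One clock cycle of a toy machine: memory is one big integer M,
    and a step applies a very simple update (a Collatz-style rule here,
    standing in for an arbitrary instruction set)."""
    return M // 2 if M % 2 == 0 else 3 * M + 1

def run(M, t):
    """Iterate f for t steps, returning the memory state M' at time t.
    A system like PA proves each such finite fact by pure calculation."""
    for _ in range(t):
        M = f(M)
    return M

print(run(27, 10))  # a concrete theorem of this kind: "at time 10, memory is 214"
```

Each individual output of `run` is a quantifier-free arithmetic fact; the quantifier only enters with statements like "for all t, the memory is never in a halted state".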

Next, let us assume that S can state things like "program P does not stop". This requires a quantifier over all times t.

I assume that S is consistent. The statement "S is consistent" means here that if S proves "P does not stop", then P really does not stop. For if P stops, S would follow it until it stops, and then prove that it stops, too. Note that a consistent S can still prove "P stops" even for a P that does not actually stop; it just cannot prove "P does not stop" when that is false.

Theorem: Consider such an axiom system S. There is a program P that does not stop, for which S cannot prove "P does not stop".

Proof: Write the computer program SPITE that does the following:

1. Print its own code into a variable R.
2. Derive all the consequences of the axioms of S, looking for the sentence "R does not stop".
3. If it finds this sentence, it stops.

If you think about it for half a second: SPITE stops at the exact moment S proves "SPITE does not stop" (by construction), which makes S a liar. So if S is consistent, it never proves this sentence, SPITE never stops, and the true sentence "SPITE does not stop" is unprovable in S. The self-reference is in the first line, printing your own code into a variable, and being able to do so is an exercise for undergraduate programmers.

That is the complete construction and the complete proof.
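Step 1, printing your own code into a variable, is the quine trick; here is a minimal Python sketch of it. The proof search of step 2 is only stubbed out with a hypothetical `theorems_of_S` (it is not a real proof enumerator), since the self-reference is the whole point:

```python
# The quine trick: after these two lines, R holds their own source text.
src = 'src = %r\nR = src %% src'
R = src % src

def theorems_of_S(bound):
    """Hypothetical stand-in for deriving the first `bound` theorems of S.
    A real version would enumerate proofs and check them mechanically."""
    return []

def SPITE(bound=10_000):
    # Step 2: search the theorems of S for the sentence "R does not stop".
    for sentence in theorems_of_S(bound):
        if sentence == f'"{R}" does not stop':
            return "stopped"      # step 3: stop, making S a liar
    return "still running"        # if S is consistent, this never ends

print(SPITE())
```

Extending the two-line quine so that R holds the whole program, proof search included, is the undergraduate exercise mentioned above.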

Gödel 2: S cannot prove that S is consistent.

The proof is as follows: if S is consistent, then SPITE does not stop, because we saw that SPITE stopping implies that S is inconsistent. If S can formalize this argument and proves that it is consistent, then it proves that SPITE does not stop (which is impossible).

That is the full proof. It takes familiarity with logic to see that, for a suitable S, the informal deduction "S is consistent implies that SPITE does not stop" can be formalized inside S; but if it were not possible to formalize intuitive logical deductions like this one, we would not use S at all.

Rosser: The problem with Gödel 1 and Gödel 2 (which, up to a trivial choice of the canonical example, are essentially the same theorem) is that S could still be complete, at the price of being wrong. In other words, Gödel's proof does not show that there is a program P whose halting behavior S fails to decide at all. Perhaps S decides of every program that it either stops or does not stop, but sometimes falsely: it tells you that some non-stopping programs stop. If it told you that a stopping program does not stop, that would produce a contradiction in S (after enough time), but it can tell you that a non-stopping program stops without any contradiction (this is obvious, but subtle).

So write ROSSER:

1. Prints its own code into a variable R.
2. Derives consequences of S, searching for either a) "R prints on the screen" or b) "R does not print on the screen".
3. If it finds a), it stops without printing. If it finds b), it prints "Hello" and stops.

Now S can prove neither "ROSSER prints" nor the negation "ROSSER does not print": assuming only that S is consistent, either proof would make S assert a falsehood that can be checked in finite time. So if a second program simulates ROSSER and stops exactly when ROSSER prints, S can neither prove that this program stops nor prove that it does not stop.
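ROSSER differs from SPITE only in searching for two dual sentences; a sketch under the same assumptions (again with a hypothetical, stubbed `theorems_of_S`, and R assumed to hold the program's own code via the quine trick):

```python
def theorems_of_S(bound):
    """Hypothetical stand-in for a proof enumerator for S (returns the
    first `bound` theorems); a real one would check proofs mechanically."""
    return []

def ROSSER(R="<ROSSER's own code>", bound=10_000):
    # Search the theorems of S for either of the two dual predictions.
    for sentence in theorems_of_S(bound):
        if sentence == f'"{R}" prints on the screen':
            return None             # case a): stop WITHOUT printing
        if sentence == f'"{R}" does not print on the screen':
            print("Hello")          # case b): print, then stop
            return "printed"
    return "no prediction found"    # a consistent S proves neither

print(ROSSER())
```

Either prediction, once proved by S, is immediately refuted by ROSSER's behavior in finite time, which is why mere consistency (not omega-consistency) suffices here.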

### Philosophical Implications

The main philosophical implication is the most important in the history of philosophy (this is not an exaggeration):

There is a universal concept of computation that is independent of the axiom system.

This is the main result of Gödel's work, although it was not fully understood until Turing's version a few years later. But Gödel groped toward it, understood that it was true very soon after proving the theorem, and, realizing that Turing had provided the right formulation, abandoned his own approach to computation in Turing's favor. The reason I can get away with a proof like the one above is that we are all familiar with the notions of "computer" and "computer program", and we all know that any precise algorithm can be programmed on a computer, and that any one computer is as good as any other.

Immediate philosophical implications:

There is an agreed-upon understanding of what it means to have a well-defined set of rules.

You can actually build a machine that can simulate any other well-defined set of rules.

Any two such machines are equivalent: A can simulate B and B can simulate A.

If physics exists (if there is a precise set of rules, even probabilistic rules, that models nature), then the universal machine (equipped with a random number generator) can simulate anything in nature.

It follows:

The complete behavior of humans can be simulated by a universal computer.

With the plausible logical-positivist assumption, this means that

A computer is a machine that can think.

We owe this insight to Turing. Von Neumann added the following insight:

A computer is a machine that can model the behavior of biological systems. Gödel's self-reference construction is analogous to self-replication.

These are by far the most important philosophical insights of all time. The forerunner is Leibniz's attempt to produce a philosophical machine that could settle arguments mechanically. Leibniz understood some of the implications of a computer.

### Philosophical implications of Gödel's theorem itself

When you take the philosophical position that the statement "program P does not stop" is absolutely meaningful, Gödel's theorem shows that there is no single axiomatic system that can prove all of these meaningful truths. This is obvious in hindsight: a program deriving consequences of the axioms cannot prove too much about itself without contradiction.

More importantly, however, it shows you how to create stronger systems. If you adjoin the axiom "S is consistent", you get a new system which is stronger. This means that every axiom system has a stronger one: the Gödel reflection

S + "S is consistent"

is strictly stronger than S.

You can repeat this process by iterating the Gödel reflection. There is no barrier at infinity: you can consider the system representing infinitely many Gödel reflections as the union of all theorems proved at the n-th stage (there is a specific program that prints out exactly these theorems). You don't need set theory to make the iteration precise, at least not for sufficiently small infinite ordinals. The process of iterating the Gödel reflection reproduces Cantor's ordinal numbers.
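The finite stages of the iteration can be made vivid symbolically. This toy sketch just builds the *names* of the first few systems in the reflection tower (pure string manipulation, no proof theory; the notation is invented for illustration):

```python
def reflect(S):
    """Name the Gödel reflection of the system named S."""
    return f'({S}) + Con({S})'

S = 'PA'
for stage in range(3):
    S = reflect(S)
    print(f'stage {stage + 1}: {S}')
```

Already at the finite stages the names grow exponentially, which is one reason a notation system for ordinals, rather than raw iteration, is needed to continue into the transfinite.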

This answers the question of mathematical philosophy

Why are ordinal numbers necessary?

It fully justifies Cantor's set theory: Cantor's ordinals are required to give the Gödel reflections of omega-consistent theories. However, it does not justify the entire metaphysics, only the ordinal numbers beyond the whole numbers.

As you climb the list of ordinals, the ordinals correspond to increasingly complex computer programs (each ordinal is, in Paul Cohen's words, a "process"). Within a set theory, you can define the limit of all the ordinals that can be represented on a computer. This is known as the Church-Kleene ordinal. You can only approach the Church-Kleene ordinal in complexity, never reach it.

### Further development

After Gödel, Gentzen proved the consistency of Peano Arithmetic within a very limited axiomatic system (PRA, a weak fragment of PA) with the single additional assumption

The ordinal epsilon-naught is well-founded

From this point on it was clear that consistency proofs for complicated theories can be given within simple theories, the only complex assumption being that a certain countable computable ordinal is well-founded. For PA the ordinal is not even that complicated, and there are questions (like the Paris-Harrington problem, the Hydra game, or Goodstein's theorem) that correspond to the well-foundedness of epsilon-naught: they cannot be proved within PA, but they imply the consistency of PA, and they are provable in any theory that can prove the consistency of PA.
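Goodstein's theorem, mentioned above, is easy to state computationally: write n in hereditary base-b notation, replace every b by b+1, subtract one, and repeat with the base increased. The theorem says every such sequence reaches 0, and PA cannot prove this. A minimal sketch:

```python
def hereditary(n, base, new_base):
    """Write n in hereditary base-`base` notation (exponents rewritten
    recursively) and evaluate the result of replacing the base by
    `new_base` everywhere."""
    total, exp = 0, 0
    while n > 0:
        digit = n % base
        if digit:
            total += digit * new_base ** hereditary(exp, base, new_base)
        n //= base
        exp += 1
    return total

def goodstein(n, terms):
    """First `terms` values of the Goodstein sequence starting at n."""
    seq, base = [n], 2
    while len(seq) < terms and n > 0:
        n = hereditary(n, base, base + 1) - 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3, 6))   # reaches 0 quickly: [3, 3, 3, 2, 1, 0]
print(goodstein(4, 5))   # grows for an enormously long time before dying
```

The sequence for 4 already takes more steps to terminate than can be bounded by any PA-provably total function, which is exactly why the termination statement escapes PA.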

The subject of ordinal proof theory tries to assign to every mathematical theory an ordinal that can be described as explicitly as possible. This program has been successful for certain set theories, and there is no barrier in principle to proving the consistency of ever more infinitary theories this way. So there is another implication:

It is possible to complete Hilbert's program and prove the consistency of many infinitary mathematical systems using only countable computable ordinals that can be represented on a computer.

This program is active today, but it has not yet proved the consistency of ZF. This is partly because many people keep saying that it is impossible because of Gödel's theorem.

The main assumption in these ideas is that the process of naming ordinals approaching the Church-Kleene ordinal can somehow be understood, although it cannot be formalized as a single computer program. This process is a gain in complexity analogous to biological evolution, and how far we can go within our human limits is not known.

### Wrong implications

There are many false implications drawn from Gödel's theorem:

We are more than computers

This interpretation arises from the fact that we can see that a program P does not stop (namely SPITE, for a given formal system) although the formal system cannot prove it. To see that this is a wrong conclusion, all you have to do is look at what SPITE does: it simulates the system, looks for the prediction about itself, and defies it! There is no reason you can't do the same to a person:

If you can simulate a person, you can predict their choice and act to spite their decision, so that you put a million dollars in box A only if the person chooses box B.

This is the exact analog of Gödel's theorem in the human realm, and it is a famous philosophical problem. There are also statements like

John Searle cannot consistently believe this statement

which is exactly analogous: John Searle cannot consistently believe it, although it is true. While Gödel's version is a first-order logical statement about arithmetic, here you would need a computational model of Searle's beliefs, which may not be possible exactly, because of randomness. But the idea is the same: the things the axiom system cannot know, but we can know, are things about the system itself.

There are other constructions, due to Chaitin, which reformulate the incompleteness theorem as follows:

No formal system can prove that a string has a Kolmogorov complexity greater than the length of the program enumerating its theorems, plus a fixed constant

However, on a computational view of humans, this just means that no human can prove that a string has a Kolmogorov complexity greater than the complexity of the program that simulates that human. Since proving lower bounds on Kolmogorov complexity is hard for us even at small complexities, this is a safe prediction.
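The asymmetry in Chaitin's formulation can be illustrated: upper bounds on Kolmogorov complexity are easy to exhibit mechanically (any compressed encoding gives one; zlib is used below as a crude stand-in for a real description language), while the theorem says that meaningful mechanical lower bounds are out of reach:

```python
import zlib

def K_upper_bound(s: bytes) -> int:
    """An UPPER bound on the Kolmogorov complexity of s: the length of a
    zlib-compressed encoding (ignoring the fixed-size decompressor).
    No corresponding mechanical lower bound exists, by Chaitin's theorem."""
    return len(zlib.compress(s, 9))

regular = b"ab" * 5000
print(K_upper_bound(regular), "<<", len(regular))  # highly compressible
```

Exhibiting a short program for a string shows its complexity is small; proving that *no* short program exists is the direction incompleteness forbids.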

So there are no consequences of Gödel's theorem that imply that the computational theory of mind is wrong. There are other claims:

Godel's theorem states that semantics cannot be formalized

This is also not exactly correct: Gödel's theorem shows that the semantics of halting computer programs can only be approximated axiomatically, by strengthening systems, and that the process of strengthening is not, in the end, algorithmic. But the exact nature of the non-algorithmic part could simply be the naming of ever larger computable countable ordinals approaching the Church-Kleene ordinal, and this could plausibly be within the reach of an evolutionary process.

Gödel's theorem kills Hilbert's program

Insofar as Hilbert's program developed into ordinal analysis in response to Gödel's theorem, this is not true. The theorem makes a naive implementation of Hilbert's program impossible, but the ordinal-analysis version is not naive at all, and it is exactly the kind of foundation one might expect for a mathematics that is infinitely rich and infinitely complex.

### Nikolaj-K

How are S and R related? Point 2 in the SPITE list is not very precise to me. I don't know why the program's halting property would be encoded, or even encodable, in the framework generated by the rules of S. I don't know how to translate the assumptions on S into computational terms. What does the assumption mean, and how is it justified? Proofs and strings are clearer to me than computation. Also, the line after "I assume S is consistent" is confusing to me. This probably has something to do with what is formulated in Gödel's original proof as the coding of proofs in the language of arithmetic.

### Ron Maimon

@NickKidman: I assume you are given a general-purpose computer with memory M and a step function f, so that after n clock cycles the state is f(f(f(...f(M)...))), f iterated n times. The main assumptions on S are: 1. for every fixed M, it proves that the n-fold iterate of f on M is M' (where M' is the true answer; this is a finite computation); 2. it can form the sentence "(for all n) f iterated n times on M has not stopped". Both hold in any reasonable model for mathematics.

### Ron Maimon

The statement after "S is consistent" is just a reformulation of consistency. If S is consistent and proves "P does not stop", then P cannot stop after n steps, since S knows the result of f iterated n times on P's initial memory state, and so it would prove that P stops, a contradiction. Conversely, if S is inconsistent, it proves everything, including that some stopping program does not stop. So S is consistent exactly when (S proves "P does not stop") implies that P does not stop. On the other hand, S can prove "P stops" for non-stopping programs without contradiction; that is an "omega inconsistency".