Are all numbers interesting?

Yes.

And we can prove it using a “proof by contradiction.”

1. Assume there are positive numbers that are not interesting.
2. Then there must be a smallest number that is not interesting.
3. Hey! That’s interesting!
4. Therefore, the assumption in step 1 is violated. We have a contradiction and have proved that all positive numbers are interesting.

This proof is a bit whimsical. Here’s a similar, more serious puzzle called Berry’s paradox.[1] Define a positive integer X as follows.

“The smallest positive integer not definable in under eleven words.”

This sentence has ten words, and it defines X. So the sentence “The smallest positive integer not definable in under eleven words” defines X in ten words. Think about it. The statement contradicts itself: we have defined, in ten words, the smallest positive integer that can’t be defined in under eleven words. This is a contradiction. A paradox.

**Meta-Statements and Self-Contradiction**

Paradoxes such as Berry’s are *self-referential*, or *meta-statements*: they refer to themselves. The meta-statement “This sentence has five words” is true, whereas the meta-statement “This sentence has two words” is false. Meta-statements can be funny, like “Dr. Whacko guaranteed he would either cure Frank’s loss of short-term memory, or refund the large fee he charged.” (Humor is never funny when analyzed, but we are being serious about humor from meta-statements, and so we are compelled to analyze them.) The Dr. Whacko statement is funny because, when information about the statement is applied to itself, we see that Dr. Whacko’s guarantee is worthless. If Frank doesn’t regain his short-term memory, he will forget the guarantee, and the dishonest Dr. Whacko can keep his fee regardless.

Meta-statements can be contradictory and therefore wrong. The statement “Only claims provable by science can be believed” is itself a claim that cannot be proved by science. The statement is *self-refuting*, or *self-contradictory*, and must therefore be wrong. Likewise, an ideology claiming “There is no absolute right or wrong” assumes the statement “There is no absolute right or wrong” is absolutely right, thereby revealing a self-contradiction. The statement is wrong. Even though the statement “There is no absolute right or wrong” can’t be absolutely right, it can be absolutely wrong.

**Turing’s Halting Problem**

Meta-analysis is a useful tool in mathematics and computer science. Alan Turing,[2] the father of computer science, used self-contradiction to show that there are many things a computer can’t do. One is writing a computer program to analyze what another computer program will do. Some computer programs, such as a never-ending loop, can run forever. Other programs stop. Can we write a *halting computer program* to analyze another computer program and find out whether or not it will halt? We’ll call the halting program H. Using self-contradiction, Turing showed that H cannot exist.

There are obvious programs that will halt. Any program that starts with a STOP command will halt. A program whose sole purpose is to print “HELLO WORLD” will also stop. But we are looking for a halting program that can analyze whether or not *any* program will halt. Turing showed that writing such a program is not possible. Like Berry’s paradox, he proved this by showing that the existence of a halting program would be self-contradictory.

How?

If a halting program, H, exists, it will be computer code in, say, C++. Let P be the program that is analyzed by the halting program. The output of the halting program is either YES, the program P will HALT, or NO, the program P will run forever.

Let’s add a little more C++ code to the halting program H and call the result H+. If the output of the halting program is “NO, the program P doesn’t halt,” we leave the output as is and the augmented program H+ then STOPs. If the output is “YES, the program P halts,” we add some code that goes to a LOOP. LOOP is a simple program that repeats itself forever and NEVER STOPS. This completes the H+ program.

Here it gets a bit tricky (self-referential arguments typically do). What happens if we ask H+ to analyze itself to see whether the program P=H+ halts or not? Here are the possibilities.

- If the input program P = H+ halts, then the halting program will say YES and the augmented analyzing program H+ will LOOP forever. So if the input P = H+ halts, then the analyzing H+ program will run forever. We have a self-contradiction.
- Likewise, if the input program P = H+ doesn’t halt, then the halting program will say NO and the H+ that is analyzing P = H+ as the input will STOP. So when the input P = H+ doesn’t halt, the analyzing H+ program halts. This is another contradiction.

The existence of a halting program would therefore refute itself. It can’t exist.
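The whole construction fits in a few lines of code. Here is a minimal sketch in Python (standing in for the C++ of the discussion above; the function names are ours, for illustration only). `make_h_plus` builds H+ out of any claimed halting decider, and the decider’s prediction about H+ run on itself always comes out wrong.

```python
# Sketch of Turing's diagonal construction. No real halting decider
# exists; make_h_plus shows why any candidate refutes itself.

def make_h_plus(halts):
    """Given a claimed decider halts(program, input), build H+:
    a program that does the opposite of whatever the decider
    predicts about its input run on itself."""
    def h_plus(program):
        if halts(program, program):   # decider says P(P) halts...
            while True:               # ...so H+ loops forever
                pass
        else:                         # decider says P(P) loops...
            return "STOP"             # ...so H+ halts at once
    return h_plus

# Try any candidate decider; its answer about H+ defeats itself.
def never_halts(p, x):
    return False                      # claims every program loops

h_plus = make_h_plus(never_halts)
print(h_plus(h_plus))                 # prints STOP, though the decider
                                      # said it would loop forever
```

A candidate that always answers YES fares no better: H+ would then loop forever, again contradicting the prediction.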

**Things we could do if we had a Halting Program**

If a halting program existed, it could be used to solve many of the open problems in mathematics.

One of these problems is Goldbach’s conjecture, which claims “Every even number greater than two can be written as the sum of two prime numbers.” For example, 12 = 7 + 5, 22 = 19 + 3, 102 = 97 + 5, 7000 = 6997 + 3 and 15826 = 7907 + 7919. Goldbach’s conjecture checks out for big numbers, and a counterexample has never been found. To date, there exists no mathematical proof of Goldbach’s conjecture.

Anyone with elementary programming skills can write a looping program that looks for a counterexample to Goldbach’s conjecture. We check a given even number and see whether there are two prime numbers that add up to it. If there are, we move to the next even number and check it. If not, we have found a counterexample, so the program stops and announces “Goldbach’s conjecture is wrong!”
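The search loop really is elementary. Here is a sketch in Python; the language choice and the search limit are ours (the real hunt would drop the limit and run until a counterexample, if any, turns up).

```python
# Bounded version of the Goldbach counterexample hunt. The genuine
# program would loop forever unless it found a counterexample.

def is_prime(n):
    """Trial division: fine for small numbers."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_ok(n):
    """True if the even number n is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p)
               for p in range(2, n // 2 + 1))

def hunt_counterexample(limit):
    for n in range(4, limit + 1, 2):
        if not goldbach_ok(n):
            return n        # "Goldbach's conjecture is wrong!"
    return None             # no counterexample below the limit

print(hunt_counterexample(10_000))   # -> None
```

Remove the limit and the program halts if and only if Goldbach’s conjecture is false, which is exactly what makes it useful as an input to a hypothetical halting program.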

If there were a halting program, we could feed it P = the Goldbach program and ask whether or not the Goldbach program halts. If the halting program said NO, the program never stops, we would have proved Goldbach’s conjecture. If the halting program said YES, the program stops, then Goldbach’s conjecture is wrong: a counterexample exists.

There are many other open problems in mathematics that could be settled by a halting program. In the movie *A Beautiful Mind*, mathematician John Nash, played by Russell Crowe, talks of the holy grail of all open problems: the Riemann hypothesis. Like Goldbach’s conjecture, the Riemann hypothesis can be disproved by a single counterexample. It is one of the seven Millennium Prize Problems, for which a million dollars is offered for a solution.[3] If we had a halting program, the Riemann hypothesis could be proved or disproved. So could many other open problems in math that a single counterexample would disprove. These include strange-sounding but not yet proven propositions such as the Collatz conjecture, the Pollock octahedral numbers conjecture, Catalan’s Mersenne conjecture, and Brocard’s problem.[4][5]

But, sadly, the halting program does not exist and these problems are still unresolved.

**Determinism, Knowability and the Halting Program**

If there are no random components, a computer program is deterministic. It sits in computer memory, ready to be run. Will it halt? Maybe, if we run the program, it will stop in a day or so. Maybe it will run for thirty years. After thirty years, we might conclude the program won’t halt. But maybe it will halt in the thirty-first year. We don’t know. If the program never halts, we might never know in our lifetime whether or not it will halt.

So even if a computer program is deterministic, there may be details about it that are unknowable without running it. A generalization of Turing’s result, dubbed Rice’s theorem,[6] says there is not much that can be determined about a computer program unless we run it. We can’t write a meta-program, for example, that can look at any computer program and decide whether or not it will print the number three!
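To see why the “prints three” detector is hopeless, note that it would hand us a halting program for free. The Python sketch below (names ours, for illustration) wraps any program P so that the wrapper prints 3 exactly when P halts; a prints-3 decider applied to the wrapper would therefore decide halting, which Turing showed is impossible.

```python
# Reduction: a decider for "does this program ever print 3?"
# would solve the halting problem, so it cannot exist.

def halting_decider_from(prints_three):
    """Build a halting decider out of a claimed prints-3 decider."""
    def halts(program, data):
        def wrapper():
            program(data)   # run P on its input (may never return)
            print(3)        # reached only if P halted
        # wrapper prints 3 if and only if program(data) halts
        return prints_three(wrapper)
    return halts
```

Since no halting decider exists, no prints-3 decider can exist either. Rice’s theorem generalizes the same trick to every nontrivial question about what a program does.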

The interesting conclusion is this. Determinism does not always imply knowability. A computer program can be deterministic, but we might not be able to know what it will do before we run it.

**Chaitin’s Number**

Chaitin’s number takes Turing’s halting problem to astonishing heights. A single number, Chaitin’s number, would allow us to solve Goldbach’s conjecture, the Riemann hypothesis, and every problem of this type where a single counterexample disproves a proposition. A single number can do all this mathematical heavy lifting! We know this number exists. It lies between zero and one.

Chaitin’s number exists and is precisely mathematically defined. But its value, even to a reasonable finite accuracy, is unknowable.

Chaitin’s number varies from computer language to computer language. A specified version of C++ has a Chaitin number, as does the latest version of Matlab.

Here’s a way to think about Chaitin’s number. All computer programs can be reduced to a bunch of ones and zeros. We’ll choose a program at random by flipping a coin: heads is a one and tails is a zero. We assume the programs are *prefix free*, which means that no program is the start of another, bigger program. This lets us know when a program is complete and thus when we can stop flipping the coin. Chaitin’s number is the probability (therefore a number between zero and one) that by randomly flipping a coin we’ll end up with a program that halts. This sounds simple but has profound implications.
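In symbols (a standard formulation, not spelled out above): writing |p| for the length of program p in bits, a run of coin flips produces p with probability 2^(-|p|), so Chaitin’s number, usually written Ω, is the total halting probability:

```latex
\Omega \;=\; \sum_{p\ \text{halts}} 2^{-|p|}
```

The prefix-free requirement matters here: by the Kraft inequality, the sum of 2^(-|p|) over *all* prefix-free programs is at most one, so Ω really is a probability between zero and one.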

Remember the program we wrote to analyze Goldbach’s conjecture? It is one of the programs we might find by flipping the coin. Suppose, then, we run all of the programs that are as long as the Goldbach program or shorter (i.e., that many bits long or less).[7] There are lots and lots of programs like this, but we are assuming we have lots and lots of time and lots and lots of computing power.

One by one, the programs begin to halt. Each program that halts contributes its probability to a running total, and that total creeps up toward Chaitin’s number. Suppose we knew Chaitin’s number to an accuracy matching the length of the Goldbach program. Then we would eventually hit a point where the running total reaches Chaitin’s number to that accuracy. At that point, every program of that length or shorter that can stop has stopped. Any such program that has not stopped will never stop. If the Goldbach program has not yet stopped, it will never stop, and we will have proved the Goldbach conjecture!
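The procedure can be made concrete with a toy model. In the Python sketch below we invent a tiny prefix-free “universe” of four programs with hand-picked halting behavior (the bit strings and step counts are entirely made up), compute its exact Ω, and then run the argument: once the halted probability mass reaches Ω, every still-running program is known to run forever.

```python
# Toy model of the omega argument over an invented prefix-free
# universe, so that the run is finite and exact.

from fractions import Fraction

# program (bit string) -> steps before it halts (None = never halts)
UNIVERSE = {
    "0":   3,      # halts after 3 steps
    "10":  7,      # halts after 7 steps
    "110": None,   # never halts (our stand-in for "Goldbach")
    "111": 2,      # halts after 2 steps
}

# Chaitin's number for this universe: total halting probability
omega = sum(Fraction(1, 2 ** len(p))
            for p, steps in UNIVERSE.items() if steps is not None)

def survivors(omega):
    """Run all programs in parallel; once the halted mass reaches
    omega, every program still running must run forever."""
    mass, halted = Fraction(0), set()
    for t in range(1, 1000):                 # simulated time steps
        for p, steps in UNIVERSE.items():
            if steps == t:
                halted.add(p)
                mass += Fraction(1, 2 ** len(p))
        if mass >= omega:                    # all halters have halted
            return set(UNIVERSE) - halted    # provably non-halting
    return None

print(survivors(omega))   # -> {'110'}: the toy "conjecture" is proved
```

For the real Ω the loop would be infinitely broad and we could only ever learn Ω’s leading bits, which is exactly why knowing them would be so powerful.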

And there is more.

If the program for the Riemann hypothesis is no longer than the Goldbach program and has also not stopped by this point, then it, too, will never stop, and the Riemann hypothesis will also be proved! And we can include the programs for the Collatz conjecture, the Pollock octahedral numbers conjecture, Catalan’s Mersenne conjecture and Brocard’s problem.

All of these and other open math problems are solved with one number: Chaitin’s number. And we only need to know Chaitin’s number to finite precision: basically to a number of binary digits equal to the length, in bits, of the longest program we want to consider.

This is an astonishing property for a single number. Here is a number that essentially contains within it the solution to every difficult problem in mathematics. The time and computing power needed to exploit Chaitin’s number would be enormous, but possibly more astonishing is that it exists. When interpreted using a prefix-free code, the C++ on your computer has a Chaitin’s number. The version of Matlab on your computer has a Chaitin’s number. Chaitin’s number is real and deterministic. It exists. But Chaitin’s number is unknowable, except to God.

**The Most Interesting Number**

We began with a whimsical proof that all positive numbers are interesting. Yet some numbers are more interesting than others. Perhaps a single number that contains the secrets of numerous open mathematical problems that have stumped the world’s best mathematicians, in some cases for centuries, takes the award for the most interesting number of all.

Too bad we’ll never know its value.

**To Learn More**

Gregory J. Chaitin’s 2005 lecture at Mälardalen University and 2000 lecture at Carnegie Mellon are available on YouTube. Chaitin has written several accessible books about his famous number, including *Thinking about Gödel and Turing: Essays on Complexity*[8] and *Conversations with a Mathematician: Math, Art, Science and the Limits of Reason*.[9]

**References and Notes**

[1] Gregory J. Chaitin, “The Berry Paradox,” *Complexity* **1** (1995): 26-30.

[2] Charles Petzold, *The Annotated Turing: A Guided Tour Through Alan Turing’s Historic Paper on Computability and the Turing Machine* (New York: Wiley, 2008).

[3] Keith J. Devlin, *The Millennium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time* (New York: Basic Books, 2002).

[4] Richard K. Guy, *Unsolved Problems in Number Theory* (New York: Springer, 2004).

[5] Victor Klee and Stan Wagon, *Old and New Unsolved Problems in Plane Geometry and Number Theory* (Washington, D.C.: The Mathematical Association of America, 1996).

[6] H. G. Rice, “Classes of Recursively Enumerable Sets and Their Decision Problems,” *Transactions of the American Mathematical Society* **74** (1953): 358-366.

[7] For technical reasons, we need to look at all programs up to one bit longer than the Goldbach program.

[8] Gregory J. Chaitin, *Thinking about Gödel and Turing: Essays on Complexity, 1970-2007* (Singapore: World Scientific Publishing Company, 2007).

[9] Gregory J. Chaitin, *Conversations with a Mathematician: Math, Art, Science and the Limits of Reason* (New York: Springer, 2002).
