The Halting Problem of Alan Turing
A Most Merry and Illustrated - well, Illustration
In 1931, Kurt Godel - at the time a young unemployed mathematician who lived with his mother in Vienna - published a paper that flummoxed the mathematical and philosophical world. Even today - pushing a century later - people will trumpet what an amazing, incredible, fantastic, and important achievement Kurt hath wrought. Why, Kurt proved there were unprovable truths! Hailed by some as the first demonstration that a psychological and philosophical principle could be proven with mathematical rigor, Kurt's paper, so we're told, lets us know why documents like the Bible and the Constitution of the United States have deep fundamental, but nevertheless, unprovable truths.
Of course, Kurt did nothing of the sort. What he demonstrated was that in formal mathematical systems which are both sufficiently complex and consistent, you can create well-formed formulas - that is, legitimate mathematical expressions using rules dictated by the system - which are true, but nevertheless can't be proven within the system. Since in the same paper Kurt proved very easily that the expression he created was true (by going outside the box, so to speak), it's all the more odd that people said Kurt found there were truths that can't be proven.
Today you can get philosophers and mathematicians arguing whether Kurt's theorem is of primary importance to mathematics and logic or whether it's overblown. Philosophers tend to say what Kurt did was as important to the mathematician as Einstein's theory of relativity or Heisenberg's uncertainty principle is to a physicist. It changed mathematics forever and the world will never be the same.
But then you'll find mathematicians who point out that what is amazing is not how much Kurt's work changed mathematics - given its admittedly profound implications - but how little it changed it. Even today many - actually most - mathematicians proceed through their careers with scarcely a look at Kurt's work, thank you. As one mathematics professor recently put it, Kurt warned mathematicians to stay away from certain areas - but those are areas most mathematicians wouldn't go to anyway.
Now it's pretty tough for the average reader to assess the merits of these discussions. Kurt's paper is pretty hairy. Filled with symbols, theorems, mappings, and definitions (46 of them!), it's kind of ridiculous to expect Joe Blow to set aside the remote on Super Bowl Sunday while Janet Jackson is on the screen and peruse the little Dover paperback, On Formally Undecidable Propositions of Principia Mathematica and Related Systems. We also have a natural and legitimate skepticism when we hear there is a - quote - "proof" - unquote - that relates to the Secrets of the Universe, but, sorry, it's too complex to explain.
Fortunately coming up with unsolvable problems became the thing to do in the early to mid-part of the Twentieth Century. Just five years after Kurt published his paper, another of the Twentieth Century's Mathematical Titans (and oft-cited as the founder of computer science), Alan Turing, came up with a nice little proof for the non-provable. Now known as "The Halting Problem", you actually can set down the remote on Super Bowl Sunday and read what Alan has to say. Better yet, you can understand it and not miss a second of Janet.
But up front (no pun intended) the reader should be warned. Even when the proof is explained clearly and concisely, many readers have a bit of a "Say what?" reaction. That's due to two things. First, the problem, like many of the more interesting theorems, is an indirect proof. You've probably seen those in school - they're the ones that start off with "Assume that X is true ...". This causes problems for kids trying to prove theorems since they think you can assume what you want is true and then prove it is. Sad to say, this isn't permitted in math.
Instead what you do in an indirect proof (also called proof by reductio ad absurdum) is you assume the opposite of what you want to prove. Then you work backwards until you come up with a conclusion that is nonsense, that is, a contradiction. If you do that, we're told, then your first assumption - the opposite of what you wanted to prove - is wrong. So what you wanted to prove in the first place is correct.
Actually you shouldn't feel too bad if you feel an indirect proof is, well, indirect. And indeed some mathematicians - called "intuitionists" - don't like using indirect proofs. That is, they don't believe that just because you can prove something isn't true, you can assume its opposite is. At the same time, strict intuitionists are a minority in the mathematical community, and virtually all mathematicians accept (and use) indirect proofs.
Secondly, Kurt's proof contains a circular reference. This is not the same as a circular argument (which isn't allowed). Instead it's best called self-reference and is where the function seems to use itself as its own input or is otherwise somehow included in itself. Self-references can be troublesome because they are common in paradoxical brain teasers which, although amusing, aren't really valid. But math can't get by without some self-referring formulas. So they are not - as the rabbi in Fiddler on the Roof said - expressly forbidden, but they can leave nagging doubts in your mind that the theorem prover is trying to pull a fast one.
For instance, some self-references make perfectly good sense. "This sentence has five words" is true even when referring to itself. Even "This sentence has twelve words" makes sense. It makes sense, that is. It just happens to be false.
But then there's the famous sentence "This sentence is false." If that sentence is true, and it refers to itself, then it is not true. And if it is not true, it's false. But if "This sentence is false" is false, the sentence must be true! So the statement is both true and false at the same time.
Now there are various ways around this. You can argue you have just proven - via an indirect proof - that a sentence that refutes its own truth is not actually a sentence at all. Or more exactly, it is not a statement, which is a sentence that must be either true or false. For instance, "Hey, babe, what'cha doin' tonight?" is a sentence, but it is not a statement. But "This sentence is false" has been constructed using rules for constructing statements. It looks like a statement, but since it has no actual truth or falsity, it cannot really be a statement.
But it still leaves open the question of when you can legitimately use self-reference. Both Bertrand Russell and Ludwig Wittgenstein had some ways out of it, as do other logicians. In effect, they created systems that prohibit creating formulas that deny they are true. Doing so works fairly well, but it does not rule out creating other types of contradictions.
In any case, unlike Kurt's proof, we will find that Alan's is actually quite simple. The main thing is to take it one step at a time and not get ahead of yourself.
Alan, as many people know, was concerned with computers (he helped break the secret German codes during World War II). So he framed the problem in computer terms and the problem seems perfectly sensible.
Is it possible, he asked, for a computer to tell us automatically whether a solution to a problem exists or whether there is no solution?
Alan, working in the days before computers had error traps - and in fact, before there were even computers - developed an idealized computer. If Alan's computer could solve a problem, it would stop and give the answer. If it couldn't solve the problem, it would keep trying forever. Since the goal is to determine if programs will halt or not, it is called the halting problem.
So the halting problem asks: can we write a computer program called H - or in the days before computers, create an effective method - that will determine if any program - we'll call it P - halts or not? Or in phony computer language (called pseudo-code by the cognoscenti):
If P will halt, then output "P HALTS!"
If P will not halt, then output "P DOES NOT HALT!"
In other words, plug any computer program into HALT and HALT will tell us if the program halts or not. If it does, then we can run the program and get the answer. Or we're immediately told there is no answer and we know not to bother. We could even modify HALT to go ahead and run the programs that do work. So clearly HALT then is the ultimate computer program.
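If HALT could be written, its outer logic would be trivial. Here is a minimal Python sketch of that wrapper. The oracle `halts` and the toy program names are pure assumptions for illustration - the whole point of what follows is that no real `halts` can exist.

```python
def report(halts, program, data):
    """Wrap a claimed halting oracle so it gives the verdicts above."""
    if halts(program, data):
        return "P HALTS!"
    return "P DOES NOT HALT!"

# A stand-in "oracle" that only knows about two hard-coded toy programs.
# It is NOT a real HALT; it just shows how the wrapper would be used.
def toy_oracle(program, data):
    return program == "add_one"

print(report(toy_oracle, "add_one", 5))       # P HALTS!
print(report(toy_oracle, "loop_forever", 5))  # P DOES NOT HALT!
```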
The question: Can HALT exist?
The proof is simple and we simply construct a program which we'll call WHOPPER (or W) since it whops HALT.
If H(W) outputs "W HALTS!", then go into an infinite loop.
If H(W) outputs "W DOES NOT HALT!", then stop.
What we see here is that WHOPPER puts itself - that is, a copy of its own code - into the subroutine, which is our old friend HALT.
So what happens if we try to run WHOPPER? There are only two possible outcomes. Let's consider these separately.
The first possibility is we run WHOPPER and it halts. That means it was the second line of our program that was executed.
But wait a minute! If WHOPPER halts - and is controlled by the second line - that means HALT identified WHOPPER as a program that did not halt! So if WHOPPER halts, that means WHOPPER does not halt!
And if WHOPPER does not halt? Then it means it loops forever and that the control was directed to the first line. But again there's the problem. Control goes there only if HALT says WHOPPER does halt! So if WHOPPER does not halt, that means WHOPPER does halt!
In other words, WHOPPER halts if and only if (as the philosophers say) WHOPPER does not halt. Ladies and gentlemen, we have a contradiction, and our basic assumption - that HALT exists - is false. Therefore there is no effective program that can tell if a program will solve any given problem. Case closed.
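You can even stage the contradiction in Python. This is only a sketch of the idea, not Alan's construction verbatim: `make_whopper` builds WHOPPER out of any claimed oracle, and whichever verdict the oracle hands down about WHOPPER turns out to be wrong.

```python
def make_whopper(halts):
    """Build WHOPPER from any claimed halting oracle `halts`."""
    def whopper():
        if halts(whopper):
            while True:     # oracle said "W HALTS!" -> loop forever
                pass
        # oracle said "W DOES NOT HALT!" -> just return, i.e. halt
    return whopper

# An oracle that answers "does not halt" for everything:
pessimist = lambda program: False
w = make_whopper(pessimist)
w()  # returns immediately - so the "does not halt" verdict was wrong

# An oracle that answers "halts" for everything would make WHOPPER
# loop forever, so its verdict would be wrong too. (Don't run that one.)
```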
What? Something still bothering you? Something still nagging at the back of your mind?
Yes, that is correct. WHOPPER does have a self-reference. More specifically, the self-reference is defined precisely so it will confuse HALT. So it is kind of like the "This sentence is false" sentence, which was also crafted specifically to create a paradox.
Now you shouldn't be too concerned if you still think it is difficult to see when we can or cannot use self-reference. Kurt himself may have had doubts. Stanislaw Ulam, who worked with Robert Oppenheimer on the Manhattan Project and Edward Teller on the H-bomb, said he always had the impression that Kurt worried that his own theorem was really nothing more than a new (although crafty and clever) paradox and not a real theorem. But we won't go into that topic right now.
But we will point out that a program that accepts its own code as input is a bit troublesome. How do you write the darn thing? I mean, you have to write the program in the program before you write the program!
Now these are the type of objections that prompt some people to put postings on web discussion threads about how Georg Cantor's diagonal argument is a bunch of hooey or that Einstein's Theory of Relativity is bullshine, claptrap, and horse hockey. The problem is that simplified explanations can sometimes be more confusing than if you take your time and do it right from the first. So we'll back up and be a bit more exact. To do this, we will use Georg's diagonal argument which, far from being nonsense, is one of the most important discoveries in history. ("Georg", we must point out, is not a misprint, and from now on we'll call him "George".)
OK. If a problem can be solved by a computer, you must write a program. To write a program you must use a finite number of characters. However, the size of the program is open ended, so there is really no limit to the number of programs you can write. The same is true for the input. This means that there are an infinite number of programs and of inputs. More exactly, there are a countably infinite number of programs and inputs. That is, given enough time you could, in principle, write down any program and its input.
Although we can't explicitly write an infinite number of programs and inputs, we can work out a diagram that will serve our purpose equally well. To do this we simply make a table where the top row represents our list of inputs and the left column represents all our programs. So given any number identifying a program and any number for the input, you could pick out the right row and column and run the program.
We will anticipate an objection from those who say that unless you first write out all the infinite programs, you can't put them in order. You certainly can't select a number and say it is equal to a given program. The same goes for the input.
Au contraire, as Alan might have said, we can so prove it is possible to assign a unique number to a computer program without writing all the other programs. You can call out any number - that's any number - and we can tell you 1) if the number is that of a computer program and 2) exactly what the computer program is. And you can do the same for the input. Kurt helped us out here.
A computer program, for instance, is a string of characters written out. Now Kurt showed in his theorem that you can take any system that puts strings of characters in rows and assign unique numbers to those strings. You do this by Godel numbering. To assign a Godel number to a string of characters, you first assign a counting number to the individual characters. This can be done any way you want and is pretty easy, particularly for writing computer programs.
All you have to do is list the characters on a keyboard and number them. You can see a list if you click on the figure at right. The list includes numbers, letters (both capital and lowercase), and the various non-alphabetic characters. Even though we only have 10 digits (including zero) in our list, we can write integers and rational numbers (rounded off) by stringing together digits and decimal points. Irrational numbers are also rounded off on a computer.
Once you have these primitive characters assigned counting numbers, you write your program and keep track of the numbers assigned to the characters. Then you take the prime numbers in increasing order, raise each prime to the number of the corresponding character (in the order written), and multiply the results together. You have a unique number. More to the point, you can be given any number and, by factoring, tell if it is indeed that of a program by looking at the exponents and transcribing them back into the characters.
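As a sketch of the procedure, here it is in Python for a made-up two-character alphabet. The character codes below are invented for illustration; the keyboard table in the article would work the same way.

```python
def next_prime(n):
    """Smallest prime strictly greater than n."""
    n += 1
    while any(n % d == 0 for d in range(2, int(n ** 0.5) + 1)):
        n += 1
    return n

def godel_number(text, code):
    """Product of successive primes raised to each character's code."""
    g, p = 1, 1
    for ch in text:
        p = next_prime(p)
        g *= p ** code[ch]
    return g

def decode(g, code):
    """Factor g back into exponents and transcribe them to characters."""
    inverse = {v: k for k, v in code.items()}
    chars, p = [], 1
    while g > 1:
        p = next_prime(p)
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        chars.append(inverse[e])
    return "".join(chars)

toy_code = {"a": 1, "b": 2}       # made-up codes for illustration
g = godel_number("ab", toy_code)  # 2**1 * 3**2 = 18
print(g, decode(g, toy_code))     # 18 ab
```

Factoring the number back into primes recovers the string exactly, which is the "transcribing them back into the characters" step.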
For instance, the line below is a computer program. It will actually run on a computer. It won't do much - just add one to any number you put in. But it will run and then stop.
P(x){x=x+1}
Looking up the characters in the table, you will find the following numbers for the characters.
48 8 88 8 91 88 29 88 11 17 27 93
These become the Godel number, G, where

G = 2^48 X 3^8 X 5^88 X 7^8 X 11^91 X 13^88 X 17^29 X 19^88 X 23^11 X 29^17 X 31^27 X 37^93
But how can you find where a program falls in a list of all programs if we haven't written every possible program? Well, first pick a number, any number. Now start with x = 1. Is it a Godel number of a program? Nope. It's just an exclamation point. So move on to x = 2. Not a Godel number either, just a quotation mark. But we can keep moving up until we find the first number that is a Godel number of a computer program. That first Godel number of a program is assigned n = 1.
If the program assigned to n = 1 wasn't the program you picked, you just keep moving up, finding numbers which are Godel numbers of programs and renumbering them with increasing integers. Eventually you'll reach the Godel number of the program you picked and will assign it some unique counting number, n.
You can also do this in reverse. Pick out a program. Now calculate its Godel number, G. We know G has an associated counting number, n, which is its place in the list of all programs. Now subtract 1 from G and see if it is a Godel number of a program. If it is, G - 1 is assigned n - 1. If G - 1 isn't a Godel number, keep subtracting 1 until you do find a Godel number of a program. Then that will be n - 1. Continue working backwards and keep count of the Godel numbers you stumble across. Eventually you'll reach a Godel number where below it there are no more Godel numbers. Since that's the lowest Godel number and you've kept track of how many Godel numbers you've found, you can calculate n for your program.
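The scan itself is easy to sketch in Python. The test `looks_like_a_program` below is a toy stand-in for the real check (factor the number and see whether the exponents transcribe back into a legal program):

```python
def position_in_list(g, looks_like_a_program):
    """Counting number n of Godel number g: scan upward from 1 and count
    how many Godel numbers of programs appear up to and including g."""
    return sum(1 for k in range(1, g + 1) if looks_like_a_program(k))

# Toy stand-in: pretend every multiple of 7 is the Godel number of a program.
toy_test = lambda k: k % 7 == 0
print(position_in_list(21, toy_test))  # 21 is the third such number: 3
```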
If this seems too complex, then you can think in terms of what a computer does to a program. It converts it to binary code - that is, a series of 1's and 0's. That is, a simple program to the computer may look like 1 1 1 1 1 0 1 1 1 0 0 0 0 1 1 1 0 1 1 1 0 1 0 1 1 0 1 1 0 1 1 0 0 1 0 0 0 1 1 0 1 0 0 0 0 1 0 1 1 0 0 1 1 0 0 1 1 0 0 0 0 1 1 0 1 0 1 1 1 1 1 0 0 0 1 0 1 1 0 1 1 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 0 1 1 0 1 1 0 1 1 1 0 0 0 1 0 0 0 1 0 0 0 1 1 0 1 0 1 1 1 0 0 1 1 1 1 0 1 0 1 1 0 1 0 1 1 1 1 1 0 1 0 0 0 0 0 1 1 0 1 1 1 1 0 0 1 1 1 0 1 0 1 0 0 0 0 1 0 1 0 1 0 0 1 1 1 0 1 0 0 1 1 0 1 1 0 0 0 1 1 0 0 1 1 1 1 1 1 0 1 0 0 0 1 0 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 1 0 0 0 1 1 1 0 1 0 1 0 1 1 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 1 1 1 0 0 0 1 0 1 0 1 1 1 0 0 1 1 0 0 1 1 0 0 0 0 1 1 1 1 0 1 1 0 1 0 0 0 0 1 1 1 0 0 0 1 1 1 1 0 1 1 0 0 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 0 0 0 0 1 0 1 0 1 1 0 0 1 1 0 0 0 0 1 1 1 0 1 1 1 0 0 0 0 1 1 0 1 0 1 1 1 0 0 0 0 1 1 1 0 1 0 0 0 1 0 1 1 0 0 0 1 1 0 0 1 1 1 0 1 0 1 0 0 0 0 1 1 0 0.
Now binary numbers are, in fact, numbers. You convert them to decimal by multiplying each digit by the corresponding power of 2 and adding up the results. So the binary number
1 0 1 0 1
is the same number as
1 X 2^4 + 0 X 2^3 + 1 X 2^2 + 0 X 2^1 + 1 X 2^0,
which is equal to 21. And since a computer program can be equated with a binary form, we can then go through the same weeding process as we did for the Godel numbers and identify and rank any computer program.
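The arithmetic above is one line of Python, and Python's built-in base-2 conversion gives the same answer:

```python
bits = "10101"
# Multiply each digit by its power of 2 and add, just as in the text.
value = sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))
print(value)  # 21
assert value == int(bits, 2)  # the built-in base-2 conversion agrees
```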
So in the end we have an effective method for determining where any computer program is in the list of all possible computer programs. We can use the same process to locate the number of the inputs.
Huh? You call that an effective method? Well, yes, it is an effective method. We said effective, not efficient. Effective means it works.
Now that we can enumerate our programs and input, we will modify our halting problem so it will accept the program - identified by its number - as one variable and the input as another. That is, instead of writing H(x), we write H(Program x, Input y) or more briefly H(x, y).
Now we can enter any Program x and Input y into the table. If Program x halts with Input y, then H(x, y) will write "H". Otherwise it will write "L" (for "Loop").
Our table then is what we'll call the Universal Turing Machine. No matter how large a number you pick, we can look up what happens. If H(x, y) exists, then so does our Universal Turing Machine. We can pick any program and any input and tell if we can get an answer.
But we ask again. Can the Universal Turing Machine exist?
Well, if the Universal Turing Machine exists, then the table must contain all possible programs to handle all possible inputs. On the other hand, if we can find a new program and a set of inputs that cannot be in the table, then our assumption can't be correct and neither H(x, y) nor the Universal Turing Machine exists.
At first glance, it would seem difficult to find a new program with a new sequence of input. After all, we've already listed an infinite number of both.
Well, that's where George's diagonal argument comes in, which is one of the slickest ways to prove something you can imagine.
Since we've already established that we can pick any number for a program and any number for an input, we can select the program and input where both have the same number n. You can see that is the diagonal of the table, hence the name. These are the elements marked in red below.
Next we switch every "H" on the diagonal to a "L" and every "L" to an "H". Then we write them as a row. That row represents a set of possible outputs for a program which we'll call Program D.
All right. If our hypothesis is true - that our original table has all the output of H(x, y) for all programs - then this new Program must be somewhere in the original list. In other words, D is some number in our left hand column and the list of H's and L's must be in one of the rows.
Well, which program can Program D be? It can't be Program 1, because Program D has "L" as the output for Input 1 while Program 1 has "H" for that input. It can't be the second program, because that has "H" as the output for the second input and our new row has "L". Program 3 has "H" for Input 3 and Program D has "L". In fact, whichever row you go down to, you'll find the program can't be anywhere in our original list. Wherever we had an "H", we now have an "L" and vice versa. That means we've just created a new program when we already had a list of all possible programs! That's impossible, and so our fundamental hypothesis must be incorrect. Ergo, neither H(Program x, Input y) nor the Universal Turing Machine exists.
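Here is the diagonal trick on a toy 4-by-4 corner of the (infinite) table. The H's and L's are invented for illustration; the point is only that the flipped diagonal row cannot equal any row of the table.

```python
# Rows are programs, columns are inputs; entries are HALT's verdicts.
table = ["HLHH",
         "LHLH",
         "HHHL",
         "LLHL"]

# Program D: flip each entry on the diagonal.
d = "".join("L" if row[i] == "H" else "H" for i, row in enumerate(table))
print(d)  # LLLH

# D cannot be any row: it disagrees with row i in column i.
for i, row in enumerate(table):
    assert d[i] != row[i]
```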
Now tables are not usually considered bona fide proofs. So we really should cast the tabular demonstration into a more acceptable form. But we still don't want to mix up the programs with the inputs. So let's create a new program - call it P - as follows:
If H(x, x) writes "H", then LOOP;
If H(x, x) writes "L", then HALT;
Note that our new program is defined from a program that is already completely specified, that is, H(x, y). Also the Programs and the Input are kept separate. We just use the same number for the program as for the input. So you don't have to define P before you define it, and our earlier concerns happily vanish into thin air.
The question is: does P exist?
Well, here we go again. If P exists, then it must be in the list, since our assumption is that our list has all programs. So is what we call P the Program #1? It can't be, because if P(1, 1) halts, then that means that H, the Halting Program, has identified Program 1 using Input 1 as one that loops! But if Program #1 with Input 1 loops, H has told us it halts! And P cain't be Program 2 neither, boy, because if you use Input 2 on Program 2 you again see the same inversion of behavior. Keep going and no matter which number you pick, our new program can't be in our original list. So once more we conclude our hypothesis - that H(x, y) and the Universal Turing Machine exist - is false.
At this point, some people may ask if we can salvage our program by simply making room for the new program. Then the Universal Turing Machine would contain the program we created. So we'll advance each row by one and our old Program 1 becomes Program 2 and so on. Then our Program D is the new Program 1. Now all the programs are in the list, right?
Soddy, old chap, but we can now do exactly the same thing again. We can simply define a program Q(x, y) as we did for P(x, y) but using the new list. Then Q(x, y) can't be in the new list for the same reason P(x, y) wasn't in the old list. Even if you add Q(x, y) to the list, you can still create a new program and input, and no matter how many new programs and inputs you create, you can always create another. So the conclusion is still the same. There is no effective method to determine if any problem can be solved or not.
This is where some people scratch their heads and wonder if Alan is trying to pull a fast one. Just what did he do? He first defined the function H(x, y). Then he took that and defined another function, P(x, y), in terms of it. But he defined P(x, y) so that wherever H(x, y) gave one answer, P(x, y) gave the opposite answer - and for the same input. So of course such a P(x, y) can't be one of the original H(x, y) functions! We defined it so it couldn't! Big deal!
And remember Kurt. He proved a theorem that really does nothing more than say it can't be proven. But does that stop anyone from proving theorems that do have practical value?
In other words, just what use is That Which Kurt and Alan Wrought?
Well, that's a good question. And in fact, CooperToons has found some mathematicians who are, in fact, quite bored by theorems like Kurt's and Alan's. On occasion he's asked Ph.D. mathematicians to clarify points in Godel's work. The responses range from an "Oy, vey! Not Godel again!" rolling of the eyes to a cryptic "Dunno. Go ask a logician".
The problem is it is very, very easy to overly mystify what Kurt and Alan did. What Kurt did was to show that if you define numbers like Bertrand Russell did, you can create unprovable formulas (once more, unprovable within the system). On the other hand, if you define numbers like Mojzesz Presburger did, every formula you are allowed to create can be proven true or proven false, according to whether it is true or false. More to the point, there is an automatic procedure which will let you prove which statements are true. Mojzesz's arithmetic is complete and decidable.
What Alan did was similar to what Kurt did. If you have a sufficiently broad definition of "solving a problem", then yes, there is no single automatic way to solve all problems. On the other hand, if you restrict the definition enough, you can drum up a definition that will work. But whether the restricted definition allows you to solve all the problems you want to solve is a different question.
But what Kurt and Alan definitely did was tell us that there are areas that mathematicians and computer scientists shouldn't waste their time in. And people were wasting their time in those areas. David Hilbert and his friends had been trying to prove for years that you could prove all true formulas in mathematics. Kurt and Alan showed this was not the case and David should look for something else to work on. Either that or just be less ambitious.
Finally we need to reemphasize that neither Alan nor Kurt said there are unsolvable problems and certainly not Great Truths That Can't Be Proven. If you find an unsolvable problem, you need to go outside the system. Euclid's famous Fifth Postulate is a case in point. After thousands of years of people trying to prove Euclid's Fifth from the other four axioms, it was finally shown that the postulate's truth or falsity depends on your system. Once you specify your system and start proving things about the system, you can indeed prove whether Euclid's Fifth Postulate is true or false.
That doesn't mean that Kurt and Alan just proved what people had already known for decades. That's because Euclidean geometry with all five postulates is a complete system like Mojzesz's arithmetic. With four axioms Euclid isn't complete. Add the Fifth and it is. On the other hand, what Kurt and Alan showed was that some systems - like Bertie's arithmetic - are not complete, and you can't make them complete no matter how many axioms you add.
Now there were mathematicians and even ordinary people who breathed a sigh of relief when they learned of Kurt and Alan's theorems. After all, that means that mathematicians would always have something to prove. Some even say that Alan and Kurt proved that mankind's intellect is something more than a machine.
But you have to wonder. After all, when computers began to show up in the world and even after the onset of the computer revolution of the 1980's, it was still said that computers would never be able to beat chess grandmasters. After all, we were told, computers could not develop the overall strategy that is unique to the human mind. But we now have computers that do beat grandmasters, and before long (CooperToons hypothesizes) we will have computers that can always beat any grandmaster at chess. After all, we now have computers that beat humans at Jeopardy!
So maybe the best question should actually be "Are humans more than complex, imperfect, and slow computers?"
Perhaps we should compute on this a little.
References
Alan Turing: The Enigma, Andrew Hodges, Simon and Schuster (1984). Alan's voluminous biography.
"Alan Turing", The MacTutor History of Mathematics Archive, http://www-history.mcs.st-andrews.ac.uk/Biographies/Turing.html. Probably one of the best quick sources for biographies of mathematicians and logicians on the web. Huge number of entries from ancient to modern times. Most entries have a lot of detail about the people. And thank heavens it's a non-commercial website. No animation, no ads, and easily navigated. The home page is: http://www-history.mcs.st-andrews.ac.uk/
"On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, Vol. 42, 230-265 (1936). Alan's paper doesn't use modern programming and shows that a "Turing Machine" can be given a problem it can't solve. Interesting presentation of computers before computers existed.
Online at http://www.turingarchive.org/browse.php/B/12 but not in a terribly convenient form.
"Computing Machinery and Intelligence", Mind, Volume 59, pp. 433-460. Alan's original paper on the imitation game ("The Turing Test"). Easy to read and understand. An online version is at http://www.loebner.net/Prizef/TuringArticle.html. The debate on whether the Turing Test really defines "thinking" or if it just means you can fool all of the people all of the time is still ongoing.
"Alan Turing", http://plato.stanford.edu/entries/turing/. The Stanford Encyclopedia of Philosophy - online, free, no registration, but not (thank goodness) completely open to all writers - you have to be an expert to write for the SEP. This article is by Alan's biographer, Andrew Hodges.
"Does Gödel Matter? The romantic's favorite mathematician didn't prove what you think he did.", Jordan Ellenberg, http://www.slate.com/id/2114561/. Jordan is professor of mathematics at the University of Wisconsin and points out that most mathematicians work with equations like "2 + 2 = 4" rather than (x)~Bew(x) which is what Kurt did. Under most definitions of the concept "number", then "2 + 2 = 4" is always true and you can prove it. So Kurt proved there are equations which don't do anything other than say they can't be proven? Many mathematicians simply say "Who cares?" and get back to work.
"Official Alan Turing Website", http://www.turing.org.uk/turing/index.html The web site is also by Andrew. Quite a lot of information. One gripe: flickering, if minor, animation
The WHOPPER Program
If you're interested, if H(x) exists, W could be written as:
{ while (H(W) == 1); }
We use "1" for "W HALTS!". So as long as H(W) says that W halts, W will keep looping forever. If H(W) says that W does not halt, then W will stop.
Return to Alan Turing Caricature