Halting problem
Adapted from Wikipedia
In computability theory, the halting problem is about deciding whether a computer program will stop running or keep going forever. This question is very important because it shows that some things can be described but are impossible to calculate for every program.
Alan Turing showed in 1937 that the halting problem cannot be solved by any general algorithm. This means there is no single way to always know if a program will stop. The problem is usually studied using a model called a Turing machine, which helps us understand what computers can and cannot do.
The proof of this idea uses a special kind of program that does the opposite of what another program predicts it will do. This shows that no program can correctly decide the halting problem for every possible case, making it a fundamental limit of what computers can achieve.
Background
The halting problem is about figuring out if a computer program will stop running or keep going forever when given a specific input. This idea applies to all programs written in certain types of languages. Some simple programs are easy to understand — for example, a program stuck in an endless loop never stops, while a program that prints "Hello, world!" stops right after it prints.
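The two simple examples above can be written out as a tiny sketch (Python is used here purely for illustration; the function names are made up):

```python
# A program that halts: it prints a message and then stops.
def prints_and_halts():
    print("Hello, world!")

# A program that never halts: the loop condition is always true,
# so there is no way out of the loop.
def loops_forever():
    while True:
        pass

prints_and_halts()  # stops right after printing
# loops_forever()   # left commented out: calling it would never return
```

For small programs like these, a human can see the answer at a glance; the halting problem asks for a single method that works for every program, which is where the difficulty lies.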
However, Alan Turing showed that there is no single method that can always tell us whether an arbitrary program will stop or run forever on an arbitrary input. He proved that any proposed method must give a wrong answer for at least one program, so a perfect halting-checker cannot exist.
History
Further information: Algorithm § History: Development of the notion of "algorithm"
In 1936, Alonzo Church showed that some problems cannot be solved by computers. Alan Turing did similar work in 1937. Since then, many other problems that computers cannot solve have been found, including one called the halting problem, which became well-known in the 1950s.
Timeline
- 1900: David Hilbert posed a famous list of open mathematical problems at the International Congress of Mathematicians in Paris.
- 1920 – 1921: Emil Post studied a question like the halting problem for his "tag" systems, but it was not solved at the time.
- 1928: Hilbert asked if math was complete, consistent, and decidable.
- 1930: Kurt Gödel gave answers to Hilbert’s first two questions.
- 1931: Gödel published his important work on what can’t be decided in math.
- April 19, 1935: Alonzo Church showed, using his lambda calculus, that some well-defined problems cannot be solved by any step-by-step method.
- 1936: Church published the first proof that the Entscheidungsproblem (Hilbert's "decision problem") cannot be solved.
- October 7, 1936: Emil Post added a “stop” rule to his computer ideas.
- May 1936 – January 1937: Alan Turing published his work showing some problems can’t be solved by computers.
- 1939: J. Barkley Rosser noticed that the ideas of Gödel, Church, and Turing were all connected.
- 1943: Stephen Kleene talked about computer procedures that must end.
- 1952: Kleene talked about the fact that no computer can always tell if another computer will stop.
- 1952: Martin Davis first used the term “halting problem” in lectures.
Origin of the halting problem
Many people think Turing first talked about the halting problem, but he didn’t use those words. The first person to use the term “halting problem” was Martin Davis in 1952. He talked about whether a computer will stop or run forever.
Davis said:
“we wish to determine whether or not [a Turing machine] Z, if placed in a given initial state, will eventually halt. We call this problem the halting problem for Z.”
Before Davis, Stephen Kleene had said something similar in 1952:
“there is no algorithm for deciding whether any given machine, when started from any given situation, eventually stops.”
The halting problem is connected to other problems about whether a computer will print certain symbols, but they are not exactly the same.
Formalization
In theoretical computer science, a decision problem is any question about a mathematical object that can be answered with a simple yes or no. The halting problem asks: given a description of a program P and an input x, will P eventually stop running when run on x?
This problem is important because it shows that some things can be described precisely but not solved by computers. The halting set collects exactly the program-input pairs for which the program stops; deciding membership in this set is the halting problem.
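In symbols, the halting set mentioned above is often written like this (one common convention, indexing programs by natural numbers):

```latex
K = \{ (i, x) \mid \text{program } i \text{ eventually halts when run on input } x \}
```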
Undecidability
Main article: Undecidable problem
A decision problem is decidable if there is a way to always find the correct answer. The halting problem is undecidable, meaning no general method can always tell us if a program will stop. This was shown using Turing machines, but it applies to any system that can perform similar calculations.
Proof concept
Christopher Strachey gave a short informal argument that the halting problem cannot be solved. Imagine a tool that could tell us whether any program will stop. If we write a program that asks the tool about itself and then does the opposite of whatever the tool predicts, we get a contradiction: the tool cannot be right about that program. So no such tool can exist.
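A minimal sketch of this contradiction in Python (the names are illustrative; `halts` stands for the hypothetical tool, which cannot actually be written):

```python
def make_paradox(halts):
    """Given any claimed halting decider, build a program that defeats it.

    `halts` is the hypothetical tool: halts(p) is supposed to return True
    if calling p() eventually stops, and False otherwise.
    """
    def paradox():
        if halts(paradox):
            while True:      # decider said "halts", so run forever instead
                pass
        return "halted"      # decider said "runs forever", so halt instead
    return paradox

# Any decider that answers False about paradox is immediately refuted,
# because paradox then halts:
paradox = make_paradox(lambda p: False)
print(paradox())  # -> halted
```

A decider that instead answered True would make `paradox` loop forever, so it would be wrong the other way. Either answer is wrong, which is the contradiction.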
Sketch of rigorous proof
The idea above can be made precise. Suppose programs are numbered, and let f(i, j) be 1 if program i halts on input j and 0 otherwise. If some program could compute f, we could build a new program g that, on input i, does the opposite of program i: g halts (with output 0) when f(i, i) = 0, and runs forever (written U, for undefined) when f(i, i) = 1. Then g's behavior differs from every program in the list on at least one input, so no program computes f. This mirrors Cantor's diagonal argument, the famous proof that no list of infinite sequences can contain every sequence. The table below shows an example.
| f(i,j) | i=1 | i=2 | i=3 | i=4 | i=5 | i=6 |
| --- | --- | --- | --- | --- | --- | --- |
| j=1 | 1 | 0 | 0 | 1 | 0 | 1 |
| j=2 | 0 | 0 | 0 | 1 | 0 | 0 |
| j=3 | 0 | 1 | 0 | 1 | 0 | 1 |
| j=4 | 1 | 0 | 0 | 1 | 0 | 0 |
| j=5 | 0 | 0 | 0 | 1 | 1 | 1 |
| j=6 | 1 | 1 | 0 | 0 | 1 | 0 |
| f(i,i) | 1 | 0 | 0 | 1 | 1 | 0 |
| g(i) | U | 0 | 0 | U | U | 0 |
Computability theory
Main article: Computability theory
One way to show that a problem cannot be solved is to connect it to the halting problem. This is called a reduction: we show that if we could solve the new problem, we could use that solution to solve the halting problem too. Since the halting problem cannot be solved by any general method, the new problem cannot be solved either.
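Here is a reduction in miniature (Python; the programs and names are illustrative, not from the original text). The question "does `program` halt on input x?" is turned into "does this zero-argument program halt?", so any solver for the second question would solve the first:

```python
def to_no_input_form(program, x):
    """Package a program together with its input as a zero-argument program.

    The new program halts exactly when `program` halts on `x`, so deciding
    halting for zero-argument programs is at least as hard as the general
    halting problem.
    """
    def wrapped():
        return program(x)
    return wrapped

# Illustrative program: halts on even inputs, loops forever on odd ones.
def p(n):
    while n % 2 != 0:
        pass
    return n

q = to_no_input_form(p, 4)
print(q())  # -> 4 (q halts, because p halts on input 4)
```

Real reductions work on program descriptions rather than live functions, but the shape is the same: translate instances of one problem into instances of another without solving either.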
For example, deciding whether a statement about basic numbers is true or false is also very hard. This is because the question of whether a computer program will stop or run forever can be turned into a statement about these numbers. If we could always tell the truth of number statements, we could also solve the halting problem — but we know this is impossible.
Gregory Chaitin created a special number, called Ω, that helps us understand the halting problem. This number represents the chance that a randomly created program will stop. Even though we can calculate the first few digits of Ω, there is no way to find all its digits. This shows that some things can be described but not fully calculated.
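For a fixed prefix-free universal machine U, Chaitin's number can be written as a sum over the programs p that halt, each weighted by its length |p| in bits:

```latex
\Omega = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```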
Because the halting problem cannot be solved, it shows that not everything that seems possible can actually be done by computers. This helps us understand the limits of what machines can achieve.
Generalization
Many different versions of the halting problem exist in computer science books. These versions are very hard to solve and share the same difficulty as the basic halting problem. This means there is no general way to answer them for all possible programs and inputs.
One version asks whether a program will stop for every possible input, not just one. This version is even harder than the basic halting problem: it cannot be decided even with an oracle (special help) that answers the ordinary halting problem.
Another version looks at programs that can sometimes correctly say if another program will stop but might not answer for other inputs. Figuring out which programs can do this is also very hard and cannot be solved in general.
A special kind of computer called a lossy Turing machine, where parts of the information can disappear, has a halting problem that can be solved, though not in a simple way.
Machines that have special help called an oracle can solve the halting problem for some specific computers and inputs but still cannot always tell if computers like themselves will stop.
Related articles
This article is a child-friendly adaptation of the Wikipedia article on Halting problem, available under CC BY-SA 4.0.
Safekipedia