In my last post, I mentioned that I wanted to write about automated theorem proving. As it turns out, finding quality blog posts on the subject is difficult, since it is a relatively new and confusing field. I did, however, come across a community blog, Less Wrong, that gave me a good starting place for thinking about the problem. There is a general method, using first-order logic, that would allow us to construct a proof for any provable proposition. The author, Jonah Sinick, mentions this, but he also notes its downsides.
“It’s easy to give an algorithm that generates a proof of a mathematical theorem that’s provable: choose a formal language with definitions and axioms, and for successive values of n, enumerate all sequences of mathematical deductions of length n, halting if the final line of a sequence is the statement of the desired theorem. But the running time of this algorithm is exponential in the length of the proof, and the algorithm is infeasible to implement except for theorems with very short proofs.”
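The quoted procedure can be sketched in a toy setting (a sketch only: the function name `find_proof`, the tuple encoding of implications, and the restriction to a small finite `universe` of formulas are my own simplifications; a real enumeration would range over an infinite language). Here atoms are strings, an implication p → q is the tuple `('->', p, q)`, and the only inference rule is modus ponens:

```python
from itertools import product

def find_proof(axioms, universe, target, max_len=6):
    """Enumerate every sequence of formulas of length 1, 2, ..., max_len
    drawn from a finite universe.  Accept a sequence iff each line is an
    axiom or follows from two earlier lines by modus ponens, and the
    final line is the target.  There are len(universe) ** n candidate
    sequences at length n -- the exponential blowup Sinick describes."""
    def valid(seq):
        for i, line in enumerate(seq):
            if line in axioms:
                continue
            # modus ponens: some earlier line p with ('->', p, line) also earlier
            if any(('->', p, line) in seq[:i] for p in seq[:i]):
                continue
            return False
        return seq[-1] == target

    for n in range(1, max_len + 1):
        for seq in product(universe, repeat=n):
            if valid(seq):
                return list(seq)
    return None
```

With axioms {p, p → q} and target q, the search grinds through every length-1 and length-2 sequence before finding the three-line proof (p, p → q, q); even in this tiny universe the candidate count triples with each extra proof line.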
It’s certainly interesting that a general algorithm exists for provable propositions, but the algorithm is awful. All it guarantees is that a deduction will be returned in some finite amount of time (the universe may well die before it is). I suppose the ultimate question is whether some algorithm necessarily runs in less time than the enumeration technique while producing the same proof. The author also raises the obvious connection between theorem proving and general artificial intelligence, which is interesting, since it seems likely that advances in “deep learning” techniques will produce better algorithms in the future. Then there is the question of what is currently provable in a general sense: the post suggests that only trivial theorems are. Sinick writes, “I don’t know of any computer programs that have been able to prove theorems outside of the class ‘very routine and not requiring any ideas,’ without human assistance (and without being heavily specialized to an individual theorem).”
Sinick then gives an example of theorems that, if provable by machine, would hugely increase his confidence in the prospect of a true general AI: the Sylow subgroup theorems, derived from the group axioms alone. I’ve had experience constructing these proofs myself, and the bizarre thing is that some idea not stated in the axioms, nor obviously derived from them, must often be adapted to prove the theorems. These ideas can themselves be thought of as justifying a deduction. Modus ponens, for instance, is an idea that sits behind many deductions in first-order logic, as is the law of non-contradiction.
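To make the modus ponens point concrete, here is a minimal sketch (my own toy encoding again, not a real first-order prover) in which that single rule is the entire engine of deduction, applied by forward chaining from the axioms:

```python
def prove(axioms, target):
    """Forward chaining with one rule, modus ponens: from p and
    ('->', p, q), derive q.  Repeat until the target appears or
    no new formulas can be derived (a fixed point)."""
    known = set(axioms)
    while True:
        derived = {b[2] for a in known for b in known
                   if isinstance(b, tuple) and b[:2] == ('->', a)}
        if target in known | derived:
            return True
        if derived <= known:
            return False  # fixed point reached without the target
        known |= derived
```

Given p, p → q, and q → r, this derives r in two rounds; everything it can ever conclude is already latent in the rule plus the axioms, which is exactly what the Sylow proofs seem not to be.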
An efficient method for finding a correct deduction must then be able to illuminate the ideas that sit behind deductions from the axioms. One would naturally think that the time to find a deduction depends, at least partially, on the “complexity” of the set of ideas involved, which would be difficult to define concretely, since two separate sets of ideas may prove the same theorem. Perhaps something like Kolmogorov complexity (roughly, the length of the shortest program that outputs a given string; the repetitive string 0000 is less complex than 1214) would be a good starting place for treating complexity as a legitimate mathematical construct.
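Kolmogorov complexity itself is uncomputable, but a crude, computable stand-in for the intuition is compressed length (an approximation of my own choosing here, and only an upper bound on the true complexity):

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Crude, computable proxy for Kolmogorov complexity: the length of
    the zlib-compressed encoding of s.  The true measure -- the length
    of the shortest program printing s -- is uncomputable, so any
    practical substitute like this is only an upper-bound approximation."""
    return len(zlib.compress(s.encode()))

# A highly regular string compresses far better than random-looking digits.
regular = "0000" * 250                                     # pure repetition
random.seed(0)
messy = "".join(random.choice("0123456789") for _ in range(1000))
```

Comparing `compressed_size(regular)` with `compressed_size(messy)` shows the repetitive string collapsing to a handful of bytes while the random digits barely shrink, which matches the 0000-versus-1214 intuition above.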
Ultimately, I think this problem has been very interesting to think about. The notion of a trivial set of theorems is interesting, as is the complexity of the set of ideas required to guarantee a deduction. Perhaps next time we can look at ways of defining the complexity of a set of axioms and an associated theorem T.
Edit: It seems that Chaitin’s incompleteness theorem guarantees that, beyond some level of complexity, no string can be proved to be that complex. Likely, this will be a huge problem.
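Stated loosely in symbols (this paraphrases the standard formulation; the bound L_F is my notation for the system-dependent constant): for any consistent, effectively axiomatized theory F that can reason about Kolmogorov complexity K,

```latex
\exists\, L_F \ \text{such that for every string } s:\quad F \nvdash \; K(s) > L_F
```

So a search guided by complexity would run into a hard ceiling: past L_F, the system simply cannot certify that any particular string (or set of ideas) is that complex.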