Prolog’s Death
August 21, 2010 · Posted by Andre Vellino in Artificial Intelligence, Logic, Logic Programming.
Maarten van Emden just posted a terrific and authoritative account of one episode in the history of Prolog under the title “Who Killed Prolog” (and, tantalizingly, promises another episode soon featuring my other super-heroic programming language, Lisp).
According to van Emden, perhaps best known (by citation counts, anyway) as co-author (with Bob Kowalski) of the seminal 1976 JACM paper “The Semantics of Predicate Logic as a Programming Language”, the culprit in this whodunit is the boondoggle Fifth-Generation Computer System (FGCS) project.
Van Emden’s historical account of what went wrong is completely correct, but I am not sure that it is all there is to it. I think there are (also?) technological and cognitive-model issues with the language that are just as important in explaining its eventual demise.
I have had many opportunities to teach Prolog to programmers, and by far the biggest cognitive problem they have with the language is understanding what the interpreter is doing at any point in time. Prolog’s attempt at being declarative (I say “attempt” because I don’t think it succeeded quite well enough) is the problem: how do you get a computer to do something without telling it what to do?
The art of computer programming isn’t taught or practiced as the art of specifying a problem – it should be, perhaps, but it isn’t. Arguably, the imperative programming paradigm is a more natural fit with the von Neumann computer architecture anyway; hence the popularity of strongly and statically typed imperative languages in which it is clear by inspection (or should be) what the machine is being instructed to do and on what data-objects these instructions should be performed.
The most confusing thing about Prolog is that whatever algorithm you implement must be built on top of the language’s built-in ones, namely depth-first search and unification (and using only recursion rather than iteration). Two things are always going on during the execution of a Prolog program: the traversal of a search space, in which choice-points are introduced whenever multiple clauses match the current computational goal, and a process of (possibly partial) variable instantiation (which may be undone when the program traverses another branch at a choice-point).
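Both mechanisms show up even in a tiny program. Here is a minimal sketch (the colour/1 facts are my own illustrative example):

```prolog
% Two clauses match the goal colour(X), so a choice-point is created.
colour(red).
colour(green).

% ?- colour(X).
% X = red ;      % unification instantiates X to red
% X = green.     % on backtracking the binding is undone and
%                % the second clause is tried
```

Even here, answering the query means tracking both where the interpreter is in the search tree and which bindings are currently in force.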
That this process of computation is difficult to grok is especially noticeable when you try to debug a Prolog program. Computations get undone when attempts at satisfying a goal fail; other computations get retried down different branches, resulting in different unifications; and, worst of all, the order in which you wrote your clauses makes a difference to how the program gets executed and, indeed, to whether any part of it is reachable at all.
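Clause and goal order decide not just efficiency but termination. As a sketch (the parent/2 facts are invented for illustration), the two definitions below describe the same logical relation, yet Prolog’s depth-first strategy loops forever on the first:

```prolog
parent(tom, bob).
parent(bob, ann).

% Logically fine, but the left-recursive first clause sends the
% depth-first interpreter into an infinite loop on any query:
ancestor(X, Y) :- ancestor(X, Z), parent(Z, Y).
ancestor(X, Y) :- parent(X, Y).

% Reordering the clauses and goals gives a terminating definition
% of the same relation:
ancestor2(X, Y) :- parent(X, Y).
ancestor2(X, Y) :- parent(X, Z), ancestor2(Z, Y).
```

Nothing in the declarative reading distinguishes ancestor/2 from ancestor2/2; only knowledge of the interpreter’s search strategy does.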
I think this is just the kind of computer-generated complexity that, like multiple inheritance in Object Oriented languages, a programmer can really do without. For most programming tasks, except, perhaps, the kind found in computational linguistics, the fruits of these cognitive extravagances are not worth the expense.
So yes, the FGCS project was a boondoggle that contributed to Prolog’s death, but if Prolog had been easier to understand – perhaps with some stronger typing and a greater degree of declarativeness, such as can be found in experimental descendants of Prolog like Goedel – it might have survived.
Then again, perhaps not – Ada, after all, is pretty much dead too and it had none of these problems. Maybe it really is, as Maarten suggests, primarily a social phenomenon.