Programming Language Seduction
September 8, 2010. Posted by Andre Vellino in Constraints, Knowledge Representation, Logic, Logic Programming.
Maarten van Emden has written a new piece, “The Fatal Choice” (a companion to his earlier post “Who Killed Prolog”), which aims to explain the fierce loyalty that devotees have to programming languages such as Lisp and Prolog.
One can’t help but notice in this post a slight undertow of resentment, as if Maarten had been jilted. To wit:
The US had acted as transmitter of the Prolog bug without being itself infected.
… as though Prolog were a European disease against which Americans (but not Canadians) were (thankfully) immune.
The examples that follow are not useful, but are chosen to convey the seductive nature of Prolog.
… as though she were an evil temptress whose beauty is merely skin deep!
As a lover (of Prolog) myself, I can definitely attest to its seductiveness. I still love Prolog, for some of the same reasons that the subsequent generation of Python and Ruby aficionados have: freedom from the limitations of strict typing, dynamic interpretation, conciseness… Shall I compare thee to a summer’s day?
But what has smitten me for life is not really unification and backtracking – it’s Constraint Programming. There are lots of programming languages designed for solving constraint satisfaction problems. There are even spreadsheet plugins for solving CSPs. But what makes Logic Programming the perfect paradigm for constraint programming (as spreadsheets are, actually, and for the same reason) is the declarative nature of the relations between constrained variables.
One of my favourite toy programs illustrating the declarative character of CSPs in a logic programming framework is the N-Queens problem. The program below can be run with the Finite Domain (FD) solver in GNU Prolog.
queens(N, L) :-
    length(L, N),
    domain(L, 1, N),
    safe(L),
    fd_labeling(L).

safe([]).
safe([X|Xs]) :-
    safe_between(X, Xs, 1),
    safe(Xs).

safe_between(_, [], _).
safe_between(X, [Y|Ys], M) :-
    no_attack(X, Y, M),
    M1 is M + 1,
    safe_between(X, Ys, M1).

no_attack(X, Y, N) :-
    X #\= Y,
    X + N #\= Y,
    X - N #\= Y.

domain(L, S, E) :-
    values(S, E, V),
    fd_domain(L, V).

values(N, N, [N]) :- !.
values(N, M, [M|R]) :-
    L is M - 1,
    values(N, L, R).
I like this program not because it is 66% shorter than the equivalent (backtracking and recursive) Java program, but because the constraints are laid out declaratively as inequality constraints (in no_attack) and the work is done by a constraint propagation engine whose conceptual model is a natural extension of unification. In fact, constraint propagation on finite domains and on closed intervals of the real line has a semantic model given by the theory of lattices, which Bill Older and I wrote up in a paper in 1994.
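By way of contrast, here is what the same search looks like when you have to spell it out imperatively. This is a minimal recursive backtracking solver in Python (a sketch of my own, not the Java program referred to above); the no-attack test is the same, but the programmer, rather than a propagation engine, manages the search:

```python
def queens(n):
    """Return one placement of n queens as a list of column positions,
    one per row (rows are implicit indices), or None if none exists."""

    def no_attack(cols, col):
        # The candidate queen goes in row len(cols), column col.
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:          # all rows filled: a solution
            return cols
        for col in range(n):        # try each column in this row
            if no_attack(cols, col):
                result = place(cols + [col])
                if result is not None:
                    return result
        return None                 # dead end: backtrack

    return place([])

print(queens(8))
```

Note that the search order, the undoing of partial placements, and the termination test are all explicit here; in the Prolog version they come for free from the solver.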
What I find seductive about constraint logic programming is the mathematical elegance of its underlying semantics, and I haven’t yet fallen out of love.
Prolog’s Death
August 21, 2010. Posted by Andre Vellino in Artificial Intelligence, Logic, Logic Programming.
Maarten van Emden just posted a terrific and authoritative account of one episode in the history of Prolog under the title “Who Killed Prolog” (and, tantalizingly, promises another episode soon featuring my other super-heroic programming language, Lisp).
According to van Emden, perhaps best known (by citation counts, anyway) as co-author (with Bob Kowalski) of the seminal 1976 JACM paper “The Semantics of Predicate Logic as a Programming Language”, the culprit in this whodunit is the boondoggle Fifth-Generation Computer System (FGCS) project.
van Emden’s historical account of what went wrong is completely correct, but I am not sure that this is all there is to it. I think there are (also?) technological and cognitive model issues with the language that are just as important to explaining its eventual demise.
I have had many opportunities to teach Prolog to programmers, and by far the biggest cognitive problem they have with the language is understanding what the interpreter is doing at any point in time. Prolog’s attempt at being declarative (I say “attempt” because I don’t think it quite succeeded) is the problem: how do you get a computer to do something without telling it what to do?
The art of computer programming isn’t taught or practiced as the art of specifying a problem – it should be, perhaps, but it isn’t. Arguably, the imperative programming paradigm is a more natural fit with the von Neumann computer architecture anyway; hence the popularity of strongly and statically typed imperative languages in which it is clear by inspection (or should be) what the machine is being instructed to do and on what data-objects these instructions should be performed.
The most confusing thing about Prolog is that whatever algorithm you implement must be built on top of the built-in ones, namely depth-first search and unification (and using only recursion rather than iteration). Two things are always going on during the execution of a Prolog program: the traversal of a search space, in which choice-points are introduced whenever multiple clauses match the current computational goal, and a process of (possibly partial) variable instantiation, which may be undone when the program traverses another branch at a choice-point.
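Those two mechanisms can be caricatured in a few dozen lines. The following toy interpreter in Python is my own sketch (it omits the occurs check and clause renaming, so it only handles simple programs where each rule is used once per derivation); it makes the choice-points and the abandonment of bindings explicit:

```python
def is_var(t):
    # Convention borrowed from Prolog: capitalised names are variables.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow a chain of bindings to the term a variable stands for.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return an extended substitution unifying a and b, or None."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def solve(goal, clauses, subst=None):
    """Depth-first search: each clause whose head unifies with the goal
    opens a choice-point; a failed branch simply abandons its extended
    substitution, which is how bindings get 'undone'."""
    subst = {} if subst is None else subst
    for head, body in clauses:
        s = unify(goal, head, subst)
        if s is None:
            continue                      # this clause doesn't match
        if not body:
            yield s                       # a fact: one solution
        else:
            yield from solve_all(body, clauses, s)

def solve_all(goals, clauses, subst):
    # A conjunction of subgoals, solved left to right.
    if not goals:
        yield subst
        return
    for s in solve(goals[0], clauses, subst):
        yield from solve_all(goals[1:], clauses, s)

# parent(tom, bob). parent(bob, ann). grand(X, Z) :- parent(X, Y), parent(Y, Z).
clauses = [
    (('parent', 'tom', 'bob'), []),
    (('parent', 'bob', 'ann'), []),
    (('grand', 'X', 'Z'), [('parent', 'X', 'Y'), ('parent', 'Y', 'Z')]),
]

for s in solve(('grand', 'tom', 'Who'), clauses):
    print(walk('Who', s))   # prints: ann
```

Even in this toy, notice that reordering the clauses changes the order (and, with recursion, the termination) of the search, which is exactly the debugging headache described above.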
That this process of computation is difficult to grok is especially noticeable when you try to debug a Prolog program. Computations get undone when attempts at satisfying a goal fail; other computations get retried down different branches, resulting in different unifications; and, worst of all, the order in which you wrote the clauses in your program makes a difference to how it gets executed and, indeed, to whether any part of the program is reachable.
I think this is just the kind of computer-generated complexity that, like multiple inheritance in Object Oriented languages, a programmer can really do without. For most programming tasks, except, perhaps, the kind found in computational linguistics, the fruits of these cognitive extravagances are not worth the expense.
So yes, the FGCS project was a boondoggle that contributed to Prolog’s death, but if Prolog had been easier to understand (perhaps with stronger typing and a greater degree of declarativeness, such as can be found in experimental descendants of Prolog like Goedel) it might have survived.
Then again, perhaps not – Ada, after all, is pretty much dead too and it had none of these problems. Maybe it really is, as Maarten suggests, primarily a social phenomenon.
Innovation = Breaking Conventions
June 22, 2010. Posted by Andre Vellino in Information retrieval, Knowledge Representation, Logic.
A few months ago I was listening to Beethoven’s Piano Sonatas and observing to myself that the genius of the great western composers (my favourites: Monteverdi, Bach, Mozart, Beethoven, Stravinsky, Shostakovitch, Messiaen) is that they all “broke the rules”. I have heard that said about visual artists as well.
In challenging the conventions of their time, they created something new, exciting, different and unexpected that nevertheless expressed a coherent musical thought, evoked specific moods and created a musical narrative.
I was thinking (analogically) that the same might be true in computer science. Consider for instance relational data modeling. The core assumption here is that there are “entities” (things) and “relations” (associations) between them. What if we broke the rules for representing data either in the Object model or in the conventional Entity-Relationship model?
Following suggestions by Peter Turney in an early blog post on relations and attributes, consider what a “database” might look like if the fundamental constituent were the relation instead of the entity.
In the conventional model, E-R diagrams allow database developers to constrain the fields in each entity to have some kind of association (1-1, 1-many, etc.) with other fields in other entities. But suppose instead that the fundamental assertion was that a relationship be true between, say, “events”.
The idea I am entertaining is similar to what Donald Davidson sketches in his paper “Mental Events”. In this view, the ontological primitive is an “occurrence” in some domain or field (bank transactions, address changes, christenings, for example) in which the constituents of the event (people, banks, etc.) are not the primitive “objects” that need to be modeled by the database. Rather, it is the events themselves that are at the hub.
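To make the caricature concrete, here is one way the hub-of-events idea might look in code. This is a purely illustrative Python sketch of my own (the class and all the names in it are invented): events are the stored records, and the people involved exist only as attributes of occurrences, not as modeled entities:

```python
from collections import defaultdict

class EventStore:
    """A toy event-centric 'database': the occurrence is the primitive.
    There is no entity table; participants are just event attributes."""

    def __init__(self):
        self.events = []                       # every occurrence, in order
        self.by_relation = defaultdict(list)   # index by relation name

    def record(self, relation, **participants):
        event = {'relation': relation, **participants}
        self.events.append(event)
        self.by_relation[relation].append(event)
        return event

    def occurrences(self, relation, **filters):
        # Retrieve events of a kind, optionally filtered by attributes.
        return [e for e in self.by_relation[relation]
                if all(e.get(k) == v for k, v in filters.items())]

store = EventStore()
store.record('address_change', person='alice', new_city='Ottawa')
store.record('bank_transaction', person='alice', amount=-50)
store.record('bank_transaction', person='bob', amount=120)

print(store.occurrences('bank_transaction', person='alice'))
```

A question the sketch immediately raises, and which an E-R model answers by fiat, is how to say that the 'alice' of the address change is the same 'alice' as in the transaction, which is perhaps where the half-baked part begins.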
This is an awfully sketchy idea, but that’s one of the nice things about blogging – you have the freedom to just put these half-baked ideas out there and get some early feedback.
Logicomix
September 28, 2009. Posted by Andre Vellino in Logic.
Personally, I am glad for book reviews at the New York Times. Without them, it would no doubt have taken me much longer to discover Logicomix, the comic book version of “the epic story of the quest for the Foundations of Mathematics”.
The concept and story come from Christos H. Papadimitriou, who teaches theoretical computer science at UC Berkeley and wrote the erudite textbook “Elements of the Theory of Computation”.
One anonymous reviewer of Logicomix on Amazon says:
…the book tries to be too many things at once, and succeeds as none of them. It is neither a strong introduction to Russell’s ideas, nor a worthwhile biography in condensed form, nor a successful piece of historical comic art. It’s a pleasant enough read, but considering its ambition ultimately a disappointing one.
So… my expectations from the NYTimes (mostly) positive review are dampened somewhat. Still – how often does this golden age of logic get the attention of the graphic novelist? Maybe they scooped Art Spiegelman.
Wait for Bus or Walk
January 24, 2008. Posted by Andre Vellino in Logic.
Here’s a (small) question to which I’ve always wanted to know the answer: is it more effective to wait for the bus or to walk to the next bus stop?
Justin has to travel a distance of d miles along a bus route. Along this route there are n bus stops; the i-th stop is at a distance d_i from the starting point. At each bus stop, Justin is faced with a choice: to walk or to wait. If he walks on, he can still catch a bus at the next bus stop – but if a bus passes him while he walks, he is almost assured a longer wait.
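Lacking an analytical answer, one can at least simulate the single-stop decision. The sketch below is my own, and every parameter in it is invented purely for illustration (exponential bus headways with a 10-minute mean, a 6-minute walk to the next stop, a 15-minute ride from here versus 13 minutes from there, and no boarding a bus that passes you mid-walk):

```python
import random

def mean_trip_time(strategy, trials=100_000, seed=1):
    """Monte Carlo estimate of total trip time (minutes) for one
    wait-or-walk decision, under toy assumptions: exponential
    headways (mean 10 min), 6-min walk to the next stop, and a
    15-min ride from this stop vs 13 min from the next one."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t = rng.expovariate(1 / 10)        # minutes until the next bus
        if strategy == 'wait':
            total += t + 15                # wait here, then ride
        else:
            while t < 6:                   # buses that pass mid-walk
                t += rng.expovariate(1 / 10)
            total += 6 + (t - 6) + 13      # walk, wait, shorter ride
    return total / trials

print(mean_trip_time('wait'))   # about 25 under these assumptions
print(mean_trip_time('walk'))   # about 29
```

Under these particular assumptions waiting wins: exponential headways are memoryless, so the expected wait is the same wherever you stand, and walking only pays if it shortens the ride by more than the walk itself costs. With scheduled (non-memoryless) buses the answer could well flip, which is what makes the question interesting.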
Eternity II
November 28, 2007. Posted by Andre Vellino in Logic.
If you think that P=NP is a purely theoretical question, think again. It’s been more than 30 years now since Cook and Reckhow’s seminal paper “On the lengths of proofs in the propositional calculus”, and the Millennium Prize (from the Clay Mathematics Institute) for solving P=NP is currently $1M.
It seems bizarre that the stakes are even higher ($2M) for solving a particular instance of an NP-complete problem – the Eternity II puzzle. The prize is a clever marketing ploy to induce the foolhardy to buy it.
Eternity II is a variant on Tetravex, and prospective buyers are well advised to read the proof by Takenaga and Walsh that Tetravex is NP-complete, or the paper “Jigsaw Puzzles, Edge Matching, and Polyomino Packing: Connections and Complexity”. The reader of either paper will quickly conclude that there are easier ways to make $2M – you could take another NP-complete problem, make a game out of it and sell the game – or you could crack the public key encryption in SSL and become an internet pirate.
Better still, you could just buy a lottery ticket. The probability of winning a lottery jackpot is quite a bit higher than that of solving Eternity II. From his explanation of the design of Eternity II, Brendan Owen estimates that the best available algorithms for this puzzle would have to explore on the order of 2×10^47 possibilities.
So if you don’t mind making Eternity II’s inventor – a British Viscount who denies the anthropogenic nature of Global Warming and admires Margaret Thatcher – richer than he already is, then this game’s for you!