Why did Prolog lose steam?

Its declarative lunch was partly eaten

An article making the rounds, "Who Killed Prolog?", asks: of the major programming traditions—imperative, functional, object-oriented, and logic—why did the logic-programming one, exemplified by Prolog, more or less die (to overstate its death a little)? The author, prominent Prolog veteran Maarten van Emden, traces it back to a large Japanese-government project, the Fifth Generation Computer Systems project, active from 1982 to 1992, which could've been Prolog's killer app but instead failed and took Prolog down with it. I don't doubt that history might have been different if a few large Prolog projects in the 1980s had turned out differently, and this one in particular might be key. But with all due respect, I think a larger reason for Prolog gradually losing steam is that its uniqueness as a declarative language has been chipped away.

Prolog proposed logic programming, the idea that one should write a program's logic not as a set of procedures, nor as a set of functions, but quite literally as logical statements. A lot of the novelty and excitement, though, was not specifically from logic, but from this being the first major proposal of declarative programming, the idea that one should specify what, not how, as a general paradigm for programming computers. That approach, specifying the solution's properties rather than the algorithm for finding it, is still the part that most fascinates computer-science undergraduates first exposed to Prolog. Specifying problems declaratively does predate Prolog, as in traditional mathematical equation solving, and in mathematical programming, scheduling, and planning systems. But these were intended as optimization systems for specific domains, not as a programming paradigm on par with functional or imperative programming. Prolog proposed that declarative programming could be flexible and general. Indeed it's turned out to be, but Prolog's lunch has largely been eaten, as bits of declarative-programming niceness have been incorporated into a variety of other systems, often in simplified and non-logic-based form.

Simpler declarative programming

People quickly noticed that Prolog gave bigger wins with some kinds of programming than others, and that its full power often wasn't actually needed for those wins. Perhaps this could've been solved by clever language engineering, the way Haskell eventually broke through the perception that pure-functional languages were ivory-tower affairs, impractical, or at least really awkward, for any system that needed to do I/O or interact with users. But that didn't happen, at least not in the 1970s. Instead, I see two main trends of systems that picked up some of the nicer bits of declarative programming. Both involved mining what was, in my view at least, the single biggest thing that Prolog made easier: the ability to keep track of facts and compute their implications.

The first observation was that rather than viewing Prolog as a way to write programs, it could be viewed as a database with a particularly powerful query language. Instead of just retrieving data, as databases of the time were mainly oriented to do, it could deduce complex results from that data. This way, the purely procedural parts of an application (GUIs and network protocols and such) could be written in whatever language was most convenient, and Prolog could handle what, in this view, it was most natural for: using rules to deduce facts from other facts. This is clearly a retreat from the full logic-programming view, but I think Prolog-as-a-database captures a big portion of the low-hanging fruit, and a lot of Prolog programs had that flavor anyway.
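To make that flavor concrete, here's a minimal sketch in Prolog; the family facts are invented, and ancestor is the standard textbook example rather than anything from a particular system:

    % The stored facts: the "database".
    parent(tom, bob).
    parent(bob, ann).
    parent(bob, liz).

    % A rule that deduces new facts from stored ones: X is an ancestor
    % of Y either directly or through a chain of parents.
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

    % Queries, as an application might issue them:
    % ?- ancestor(tom, ann).    succeeds
    % ?- ancestor(tom, Who).    enumerates bob, ann, liz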

By the late 1970s, some Prolog researchers had started working on Datalog, a variant intended to make a Prolog-type system usable as a serious next-generation database. Clearly the relational database wasn't displaced, but the declarative-programming-over-data model did win to a certain extent. Today, a lot of logic is written declaratively in SQL, rather than SQL being used purely as a way to retrieve data for processing in an application language. Most of the classic Prolog examples can be written in SQL: if you have tables of sibling and parent relationships, you can write an SQL query that declaratively defines an "uncle" relationship, much as you'd do with the Prolog rule uncle(A,B) :- male(A), sibling(A,C), parent(C,B). The addition of recursive queries to SQL in the 1990s made even more of that sort of thing possible. It's not quite everything Datalog promised, but it incorporates enough of the declarative-programming win, while retaining the relational-database model in which a lot is already invested (and where a lot of data is already stored), that it outcompeted Prolog-derived databases.
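Concretely, and assuming a hypothetical schema with tables male(person), sibling(a, b), and parent(parent, child), the uncle rule becomes a plain join, and the ancestor relation from the earlier sketch becomes a recursive query:

    -- The uncle rule as a join over base tables:
    SELECT m.person AS uncle, p.child AS niece_or_nephew
    FROM male m
    JOIN sibling s ON s.a = m.person
    JOIN parent p ON p.parent = s.b;

    -- The recursive ancestor relation, using the recursive
    -- queries standardized in SQL:1999:
    WITH RECURSIVE ancestor(anc, descendant) AS (
        SELECT parent, child FROM parent
        UNION
        SELECT a.anc, p.child
        FROM ancestor a JOIN parent p ON p.parent = a.descendant
    )
    SELECT * FROM ancestor;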

Another big family of systems that ate a tasty portion of Prolog's lunch was propositional rule-based systems. If you take Prolog's model of facts and Horn clauses, but turn them all into propositional facts and rules, with no logic variables or unification, your statements all look equivalent to either bare assertions of facts, like a., or simple boolean implications, like a :- b, c. This simplification has some advantages. First, and a consideration that probably shouldn't be underestimated: it's simpler to think about. You have facts, and boolean combinations of those facts can imply other facts. Second, it's now computationally efficient to calculate the whole set of "things that are true" by forward chaining from the set of initial facts, as opposed to issuing specific queries that are then answered through backward chaining. Finally, it becomes easier to build truth-maintenance systems, which maintain dependencies between facts and conclusions, so that if you add or remove a fact from the database, you can quickly update the set of things that are now true. These simpler kinds of declarative programming, especially production-rule systems like Jess and Drools, are now used to encode the core logic of lots of things, from expert systems to enterprise business logic.
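As a minimal sketch of the forward-chaining part (in Python, and deliberately naive; real engines like Jess and Drools use the Rete algorithm rather than this brute-force loop):

    # Propositional rules as (premises, conclusion) pairs, i.e. a :- b, c.
    # Repeatedly fire any rule whose premises all hold, until nothing new
    # can be derived: the result is the whole set of true facts.
    def forward_chain(facts, rules):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in derived and premises <= derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    rules = [({"b", "c"}, "a"),  # a :- b, c.
             ({"a"}, "d")]       # d :- a.
    print(forward_chain({"b", "c"}, rules))  # prints {'a', 'b', 'c', 'd'} (in some order)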

Prospects today

In AI at least, a lot of current interest is in answer-set programming (ASP), which uses Prolog-like syntax, extended with some nice new constructs (like choice rules), and inference based on SAT solvers rather than backward chaining. It's theoretically cleaner than Prolog (its declarative semantics are clear, while Prolog always had the dirty secret of backward-chaining operational semantics hiding under the hood), and can solve some complex search problems quite fast. Its grounding-based setup, and the fact that it takes in a whole program and outputs a whole answer all in one go, mean that ASP is generally used neither as a general-purpose programming language (as in the original logic-programming ideal), nor as a way of dealing with large databases (as in Datalog), but instead as a way of specifying complex logic on relatively small domains, as with combinatorial search problems. I do find it quite promising, though, and its syntax and semantics are nice.
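For a taste of what this looks like, here's a small graph-coloring program in clingo-style ASP syntax (the graph itself is made up): the choice rule guesses exactly one color per node, and the constraint kills any answer set where an edge's endpoints match.

    % Nodes, edges, and available colors (an invented example graph).
    node(1..4).
    edge(1,2). edge(2,3). edge(3,4). edge(1,3).
    col(red). col(green). col(blue).

    % Choice rule: each node gets exactly one color.
    1 { color(N,C) : col(C) } 1 :- node(N).

    % Constraint: adjacent nodes must differ.
    :- edge(X,Y), color(X,C), color(Y,C).

    #show color/2.

The solver grounds the program, hands the result to SAT-style search, and each answer set it returns assigns a color to every node at once; no query is ever issued.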

A different strain of research has carried on the logic-programming-as-general-purpose-programming dream, usually by creating hybrid languages that also incorporate aspects of functional programming or other paradigms. Mercury and Mozart/Oz seem to be two of the more prominent ones, though I sadly know relatively little about them, and they appear to still be searching for their breakthrough.

A quite interesting twist is the addition of LINQ to C#. It's billed as adding something like SQL as a native C# language feature, allowing you to "query" things in the language, like arrays, in the same declarative way you might query a database with SQL. This isn't quite logic programming, but you can see it as the reverse of the Prolog→Datalog transition, with SQL as the starting point this time: now we're taking a database query language, and using something modeled on it as a general-purpose programming construct. It's not everything the logic-programming dream promised, but people today are writing applications with large portions of their core logic written declaratively in LINQ, which is quite a win for mainstream adoption of declarative programming. It might be one of the more promising angles for future adoption as well. If I had to guess, I would put the odds of a successor to LINQ incorporating more features from logic programming higher than the odds of a Prolog descendant finally breaking through to mainstream use. So this might be worth watching.
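As a concrete (if toy) illustration of the flavor, assuming nothing beyond the standard library:

    // Querying an ordinary in-memory array with LINQ query syntax,
    // much as you'd query a table with SQL.
    using System;
    using System.Linq;

    class LinqSketch
    {
        static void Main()
        {
            int[] numbers = { 5, 12, 8, 3, 20 };

            var bigEvens = from n in numbers
                           where n % 2 == 0 && n > 6
                           orderby n
                           select n;

            Console.WriteLine(string.Join(", ", bigEvens));  // 8, 12, 20
        }
    }

The query says what to select, not how to loop; the compiler translates it into calls to Where, OrderBy, and Select under the hood.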

* * *

I should disclaim that this is all my personal take on how this history played out, based on what I've read and used myself. I'm not a 40-year veteran of the logic-programming scene or anything close to it, so this isn't any sort of authoritative conclusion. I might be missing some key information, or misinterpreting bits of the history. I'd love to read a proper historical study of the subject, but for now, these are my guesses and opinions.

Follow-ups: In response to my comment about the possibility of "LINQ incorporating more features from logic programming", Max Strini points out a LINQ interface to the Z3 theorem prover. Regarding the relationship between Rete-based, forward-chaining truth-maintenance engines and Prolog-style query engines, Mark Proctor of Drools writes in to note that Drools has recently added backward chaining as well.