Channel: Natepod

Dataspace 14: What Question Should I Have Asked You Instead Of This One?


I don’t want to say that my Dataspace project is on hold, but I have hit a conceptual wall, a very familiar one that I’ve been hitting for a long time.

I’m reasonably happy with Term-Expressions as an approach to a universal data syntax. They’re not perfect, and they can take a bit more ongoing tinkering. (One piece of tinkering I’ve been doing involves looking at Retro Forth and thinking about the possibilities of a word-based rather than character-based syntax.)

The wall is more to do with the execution model. What I want, and have wanted since about 2006, is a way of combining the SLD Resolution algorithm from Prolog and Kanren with the more familiar application/evaluation/reduction algorithm from Lisp and lambda calculus. It is a hard problem, or at least it appears to be. If there is a solution, it will appear very simple and obvious once it is solved, because simplicity is the requirement for being an acceptable solution. But I don’t have that solution yet.

The problem comes because there are two dimensions in which Prolog-like resolution expands on Lisp-like evaluation: a space-like dimension and a time-like dimension.

The “space-like” dimension of Prolog’s “logic programming” or Kanren’s “relational programming” comes from the way they both expand functions into relations. If the core abstraction in your programming language is a relation rather than a function, then it means an “evaluation” can return zero, one or more output values rather than just one, for any given input.
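
As a concrete illustration (a Python sketch of my own; the fact table and names are invented, not taken from any of these systems), a relation can be modelled as a generator that yields zero, one, or many answers for a given input:

```python
# A relation modelled as a generator: zero, one, or many answers per input.
# The fact table and names here are invented for illustration.
FACTS = [("alice", "bob"), ("alice", "carol"), ("bob", "dave")]

def parent_of(child):
    # Yield every parent recorded for this child.
    for c, p in FACTS:
        if c == child:
            yield p

print(list(parent_of("alice")))  # two answers: ['bob', 'carol']
print(list(parent_of("zena")))   # zero answers: []
```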

This is a good thing, in my opinion, and it’s where a large amount of the power of logic/relational programming comes from. But when every return value is a set (or list) of values, you have to think quite hard about where those values are going to go, what you are going to do if there are either none or a very large number of them, and how the whole system resolves down to its simplest form, which in normal programming we assume would be just one value.

Both Prolog and Kanren (at least Micro-Kanren, its simplest form so far) solve this by giving you only one value at a time, with the rest of the values being something like a continuation (in Prolog) or a function you have to force (in Micro-Kanren). This solution, however, forces a linear search strategy onto what would or should otherwise be a parallel search space, and that doesn’t seem ideal. Prolog came up with a pretty good, cheap linear search strategy (depth-first search), but it fails in some fairly commonplace situations (left recursion). Kanren appears to have a better default search strategy that can cope with more forms of recursion, but it’s still linear.
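
The difference between the two search strategies can be sketched with answer streams in Python (a toy of my own, not Kanren’s actual implementation): depth-first search chains streams end to end, so an infinite first stream starves the second, while a fair, interleaving merge (roughly the behaviour Micro-Kanren’s search aims for) keeps both alive:

```python
import itertools

def forever_ones():
    # Stands in for a goal with infinitely many answers (e.g. left recursion).
    while True:
        yield 1

def interleave(s1, s2):
    # A fair merge: alternate between two answer streams so that an infinite
    # stream cannot starve a finite one. (A toy version of fair search.)
    s1, s2 = iter(s1), iter(s2)
    while True:
        for cur, other in ((s1, s2), (s2, s1)):
            try:
                yield next(cur)
            except StopIteration:
                yield from other
                return

# Depth-first chaining never reaches the 2s:
unfair = itertools.chain(forever_ones(), [2, 2])
print(list(itertools.islice(unfair, 6)))  # [1, 1, 1, 1, 1, 1]

# The fair merge surfaces them:
fair = interleave(forever_ones(), [2, 2])
print(list(itertools.islice(fair, 6)))    # [1, 2, 1, 2, 1, 1]
```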

It’s awkward that we have to think about this extra “space-like” dimension, where every “function call” can stall out with no values or balloon out into a large or even infinite number. But it’s something we can cope with.

The “time-like” dimension, though, is where things get weird. In Prolog and Kanren, you can pass an “unbound variable” to a predicate/relation.
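
A minimal sketch of the mechanism in Python (my own toy unifier with invented names, not Prolog’s or Kanren’s actual code): a variable is just a placeholder object, and unification fills holes on whichever side of the call they appear:

```python
class Var:
    # An unbound logic variable; all that matters is its identity.
    def __init__(self, name):
        self.name = name

def walk(term, subst):
    # Chase a variable through the substitution until it stops resolving.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    # Return an extended substitution making a and b equal, or None on failure.
    a, b = walk(a, subst), walk(b, subst)
    if a is b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

Q = Var("Q")
# Forwards: the answer slot is the hole -- binds Q to 5.
print(unify(("plus", 2, 3, Q), ("plus", 2, 3, 5), {})[Q])  # 5
# Backwards: the question slot is the hole -- binds Q to 2.
print(unify(("plus", Q, 3, 5), ("plus", 2, 3, 5), {})[Q])  # 2
```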

I call this weird unbound-variables behaviour of Prolog “time-like” because in normal “Lisp-style” function calling you are asking:

“My question is X. Your answer to that will be Y. What’s Y?”

But in Prolog, you can also ask a question about questions themselves:

“What question should I have asked you instead of this one to get you to answer Y?”

and you can even have neither question nor answer bound:

“What question should I ask, on any possible subject, to get any answer from you at all? Please list all possible questions and answers, as pairs, in order. I’ll wait.”

(But obviously you don’t want, and won’t get, every possible question and answer, because that’s an infinite set of the sort that makes Georg Cantor wince. Even so, you will get an extremely large set, you won’t want it all at once, and so some kind of lazy evaluation will need to be part of even the very simplest solution.)

Prolog handles this sort of weird query (as it handles everything else) via backtracking, which is essentially time travel. Computations run forward until they hit a snag (ie, zero values returned rather than at least one) and when that happens the computation gets unwound and time-travelled back to the last variable binding that worked.

Unbound variables are where the second chunk of Prolog magic happens, and they’re also very, very weird to think about in traditional programming terms. They’re like… if you had a function, but instead of applying that function to any value, you just… didn’t apply it to anything, and instead you examined the type of that function, found its domain, and iterated through the values of that domain, at runtime. So you were deconstructing the type of a function as a side effect of applying it. And that was a perfectly normal thing to do in your program, and every program could do it. There is therefore a perfect equivalence between “functions” and “types” inside a Prolog program, with the two both being constructable and examinable at runtime… or rather, neither functions nor types exist as things-in-themselves, but as components or views of a single unified abstraction: predicates/relations. It’s all very interesting indeed. At least I think it is.
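
The flavour of that can be shown with a toy in Python (names invented; real Prolog does this with unification rather than None-as-hole): if a “function” is stored as a fact table, then “applying” it with holes in any position just filters or enumerates that table:

```python
# A predicate stored as data: calling it with holes enumerates its "type".
# The table and names are invented; None plays the role of an unbound variable.
SQUARE = [(1, 1), (2, 4), (3, 9)]

def query(x=None, y=None):
    # Keep every fact consistent with whichever arguments are actually bound.
    for fx, fy in SQUARE:
        if (x is None or x == fx) and (y is None or y == fy):
            yield (fx, fy)

print(list(query(x=2)))  # forwards: [(2, 4)]
print(list(query(y=9)))  # backwards: [(3, 9)]
print(list(query()))     # both holes: the whole relation
```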

This behaviour is something that the good old workhorse of computer science, Lambda Calculus, does not model well at all. Or at least, it does not model it through the usual mechanism of applying a lambda to other things. Computer scientists do use Lambda Calculus in complicated ways to prove all sorts of theorems, but they often do it by “cheating”, ie, by directly accessing the components of a lambda expression itself. This is cheating because a computer, at runtime, can’t do this, at least not in languages descended from Lisp, which is most of our current programming languages. One of the ground rules of Lisp and all her children is that once you have a lambda – a function – the only thing you can do with it is apply it to a value and get a value out. It is forbidden to ask “what’s inside the box”. The box is an opaque box, and the whole art of compilation rests on the box – the function – remaining opaque.

(It may be important and relevant, though, to note here that Lambda Calculus was not invented to write Lisp, or even to be a computer programming language of any kind. Rather, it was a failed project in logic that John McCarthy dug up a couple of decades later and used for a completely different purpose. So it’s entirely possible that Lambda Calculus is not actually the best tool for doing things related to computer programming – because it wasn’t invented for that. It was just the screwdriver sitting in the tool box that happened to be there and available, when maybe a spanner would have done a better job.)

Prolog predicates work differently to Lambdas. They do have a sort of opaqueness about them… it might not always be possible to ask a Prolog system exactly what the definition of a predicate is, for example. But you can always “look inside” a predicate, in a sense, by asking it what its arguments would be (“what question should I have asked to get answer Y?”) in a way that you just can’t do with a Lambda in Lambda Calculus.

(Or, to be more precise, aren’t allowed to do if you are not a mathematician with a pencil sitting outside the system, but are instead a computer program – ie, a Lambda – looking at a memory full of other Lambdas. If you are a Lambda, you have only one thing you can do to another Lambda – “eat it” – but maybe what you would like to do is just look at the other Lambda. Your mathematician creator gave you no eyes, only a mouth.)

Kanren relations are just functions, so they’re opaque boxes like in Lambda Calculus; but they’re functions that are pretending to be Prolog predicates, so what they return is the equivalent of “the thing that you can look inside”. They’re opaque boxes holding a transparent box.
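
In Micro-Kanren terms that looks roughly like this (a drastically simplified Python sketch of my own, using strings beginning with “?” as variables): a goal is an opaque function, but its return value – a list of substitutions – is plain data the rest of the system can inspect:

```python
def walk(t, s):
    # Resolve a '?'-prefixed variable through the substitution dict.
    while isinstance(t, str) and t.startswith("?") and t in s:
        t = s[t]
    return t

def eq(a, b):
    # Build a goal: an opaque function from one substitution to a list of
    # substitutions. The dicts it returns are the transparent inner box.
    def goal(s):
        x, y = walk(a, s), walk(b, s)
        if x == y:
            return [s]
        if isinstance(x, str) and x.startswith("?"):
            return [{**s, x: y}]
        if isinstance(y, str) and y.startswith("?"):
            return [{**s, y: x}]
        return []
    return goal

def conj(g1, g2):
    # Conjunction: feed every substitution g1 produces through g2.
    return lambda s: [s2 for s1 in g1(s) for s2 in g2(s1)]

g = conj(eq("?x", 3), eq("?y", "?x"))
print(g({}))  # [{'?x': 3, '?y': 3}]
```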

To really model what the Prolog SLD Resolution algorithm is doing in Lambda Calculus – to get to the core of what’s going on in the simplest terms – you would have to have some kind of primitive other than ‘abstract’ and ‘apply’, I think.

(Yes, you can implement the Resolution algorithm in Lambda Calculus… that’s what Kanren does. But it doesn’t directly model it. What I mean by that is that Kanren is a layer that runs over another language, like Lisp/Scheme. It is not a language in itself. Kanren operates at two levels: Lambda Calculus at the lower level, interpreting or emulating Prolog at the higher level. That split-level feel is what makes Kanren seem a little clumsy and “ugly” to me.)

Finally, Prolog’s (and Kanren’s) unbound variables don’t really have a parallel in other computing or data storage systems, and that makes people nervous. It makes me nervous, anyway. How do you name them? (Micro-Kanren uses integers, but how would that even begin to parallelise? What happens once you make more than BIGNUM queries over the lifetime of any given processing node or data set?) How do you transport them between systems? Do they make a set of data intractable for some processing? For example, one of their possible close relatives, RDF’s Blank Nodes, really scares people, because they add all sorts of weird logical complexities (and massive CPU processing time sinks) into what would otherwise be simple bulk data handling processes. The SQL rival Datalog was derived from Prolog but removed unbound variables, which I think really limits what it can do but no doubt makes it tractable for some of these tasks.

The other side effect of unbound variables is that because of them – because you can ask Prolog or Kanren “what question should I have asked instead of this one?” and get a sensible answer, and the whole system relies on this – it is very hard to cleanly map a Prolog predicate or a Kanren relation onto a “function that you ask questions of and get answers from”. If you ask a “question”, it’s a question padded with a lot of other stuff (an entire predicate, something like “a database row”); if you get an “answer”, it’s an answer padded with even more stuff, and of a completely different shape to the question – not even “a database row” but rather a set of variables and their bindings. And maybe the variables in that answer don’t even have names, they have numbers, while the original question you asked used variables which were names.
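
That shape mismatch can be made concrete with a sketch (everything here is invented for illustration; real implementations reify answers in more involved ways): the caller asks in terms of names, the engine answers in terms of its internal variable numbers, and a translation step is needed in between:

```python
# The caller's "question" uses names; the engine's raw "answer" is a
# substitution keyed by internal variable numbers. (Invented example.)
question_vars = {"Parent": 0, "Child": 1}   # name -> engine variable number
raw_answer = {0: "bob", 1: "alice"}         # what the engine hands back

def reify(names, subst):
    # Translate numbered bindings back into the variables the caller named.
    return {name: subst[idx] for name, idx in names.items()}

print(reify(question_vars, raw_answer))  # {'Parent': 'bob', 'Child': 'alice'}
```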

None of this feels either simple or “right”. In Christopher Alexander’s terms, it does not have a sense of Beauty to it, which must mean that there are unresolved (or unacknowledged) forces in the problem space that are not getting the attention they need. I feel that there is an extremely strong and very important sense of Beauty very close to both Prolog and Kanren… but neither is there yet. Something is either missing, or something unnecessary has been added that breaks a symmetry.

Trying to understand this (on a deep enough level to make it feel simple… in other words, to feel my way to a solution that really has Beauty) is the conceptual wall that I keep hitting.

I am sure that there is something over that wall, but I can’t climb it yet.
