Monads in Scheme - Update

A quick follow-up to the other day’s article on monads in Scheme: I’ve updated my experimental implementation to propagate monad type information both forward, from the arguments of the bind operation to its result, and backward, from the expected result type to the required argument types of the bind. Now Oleg’s example (near the bottom of the page, where there’s an expression (put-line "Enter a number: ")) can be translated into the following working code:

(define oleg-example-mixed-monad
  (mlet* ((_ (mdisplay "Enter a number: "))
          (n mread)
          (all-n (return (iota n)))
          (evens (return (run-list (mlet* ((i all-n))
                                     (if (even? i)
                                         (return i)
                                         (fail "odd"))))))
          (_ (mdisplay "Computed "))
          (_ (mdisplay (length evens)))
          (_ (mdisplay " evens\n")))
    (return evens)))

When run with (run-io oleg-example-mixed-monad), it produces sessions like the following (user input in italics):

Enter a number: 33
Computed 17 evens
(0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32)

The system could be sweetened up with a bit of generic-function or other OO sugar, but I’d say that even without extra sugar this experiment has succeeded in demonstrating the feasibility of the technique. It looks like I’ll be able to use the idea for structuring effects in ThiNG.

Trait Composition in ThiNG

Trait composition is one of the fundamental ideas in ThiNG r3/r4. It’s used to assemble pattern-matching clauses into functions, to assemble definitions into a module, to (functionally) update data-structures, to augment generic functions with new definitions, and to assemble modules into programs.

define cons [+car: [+cdr: [First: car Rest: cdr]]]
define map [+f: loop=[(cons +a +d): (cons (f a) (loop d)) Nil:Nil]]

Both these definitions use objects with more than one clause - for instance [First: car Rest: cdr] contains two clauses, as does the object after the equal-sign in loop=[...] in map. Multi-clause objects are defined as being constructed from single clauses using the trait composition operators sum (+), override (/), remove (-) and rename (@).

If we stick to an imagined syntax where only single-clause objects and trait composition operators can be used, then the above definitions might look like this:

define cons [+car: [+cdr: ([First: car] + [Rest: cdr])]]
define map [+f: loop=([(cons +a +d): (cons (f a) (loop d))] + [Nil:Nil])]

Note that sum (+) is used instead of override (/), since the pattern-pairs First vs Rest, and (cons +a +d) vs Nil don’t overlap. We could have used override, but we would have forgone the checks made for overlapping patterns that the sum operator provides.
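As a rough illustration of that distinction, here is a Python sketch of the four operators, with traits modelled as plain dicts from clause key to clause body (an assumption made for illustration; ThiNG traits are not dicts). Sum performs the overlap check that override skips:

```python
# Traits modelled as dicts from clause key to clause body (an
# illustrative assumption, not ThiNG's actual representation).

def trait_sum(a, b):
    # sum (+): union of clauses, rejecting overlapping keys
    overlap = a.keys() & b.keys()
    if overlap:
        raise ValueError(f"overlapping clauses: {sorted(overlap)}")
    return {**a, **b}

def trait_override(a, b):
    # override (/): union of clauses, the left operand winning on conflict
    return {**b, **a}

def trait_remove(a, keys):
    # remove (-): drop the named clauses
    return {k: v for k, v in a.items() if k not in keys}

def trait_rename(a, renaming):
    # rename (@): re-key clauses according to a mapping
    return {renaming.get(k, k): v for k, v in a.items()}

# [First: car] + [Rest: cdr] - the keys are disjoint, so sum succeeds:
cons_body = trait_sum({'First': 'car'}, {'Rest': 'cdr'})
```

Override is assumed here to prefer its left operand, so composing a trait over another reads left-to-right as “new definitions shadow old ones”.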

Module definitions and generic functions in ThiNG work similarly. There are two dialects of ThiNG I’ve been thinking about, one where a generic function dispatches each message, and one more traditional dialect where messages are sent directly to explicitly-given receivers. Let’s call these dialects ThiNG/1 and ThiNG/N, for single-distinguished-receiver and multiple-dispatch (N receivers) respectively.

In ThiNG/N, you can think of each definition within a module as being a clause within a (part of a) global generic function. Here’s a hypothetical example, recalling that x y: z is syntactic sugar for the message [0: x y: z]:

module List = [
  [+a cons: +b]: [First: a Rest: b]

  [+f mapOver: (+head cons: +tail)]: ((f head) cons: (f mapOver: tail))
  [+f mapOver: Nil]: Nil
]

(As you can see, this leads to quite a different style of programming compared to ThiNG/1.) Rewriting this using single-clause-with-trait-composition syntax gives something like:

module List =
  [ ([0: +a] + [cons: +b]): ([First: a] + [Rest: b]) ] +
  [ ([0: +f] + [mapOver: (+head cons: +tail)]): ((f head) cons: (f mapOver: tail)) ] +
  [ ([0: +f] + [mapOver: Nil]): Nil ]

Importing a module into the current toplevel is a matter of replacing the current generic-function-dispatching object - let’s call it ENV - with itself either summed or overridden by the imported module definition, i.e. List + ENV or List / ENV.

I was interested to read an old message from Marcel Weiher (who gave an interesting talk at the recent UK Smalltalk meeting - see also here) in which he writes:

I do have a feeling that the direction [toward ultra-simplicity that Self takes], while very fruitful, turned out to be “too uniform”, in some senses. There is no fence at all to the meta-level at one point ( object vs. class ) and still a big one at another ( object vs. code/method ).

In ThiNG, the fence he talks about between objects and methods is removed - or at least made much smaller. Objects are simultaneously data-structures and methods/functions. If an object has a clause keyed on a pattern containing a binding, then it acts more like a method than a field; if the pattern doesn’t contain any bindings, it acts more like a field than a method. Lazy evaluation helps blur the distinction here.

Recent spikes in ThiNG

Since I last posted about ThiNG, my ideas have shifted closer to those expressed in Jay & Kesner’s Pure Pattern Calculus, and I’ve built a few experimental spikes to gain some concrete experience with some of the language features and designs I’ve been thinking about - among other things, that traits and modules share a number of core characteristics, and that using monads to structure effects can be profitable even in a dynamically-typed environment.

A few weeks ago, I implemented a simple, small traits language based on the formal model of traits of Schärli, Nierstrasz et al. The language has no fields (methods suffice!) and no classes (prototypes suffice!). The implementation, a macro which embeds traits into regular PLT Scheme, is a toy, intended only to explore the idea of traits; in particular, it doesn’t “flatten” composed traits.

PLT Scheme’s match operator was used to match method signatures to method invocations, which is a little unusual - pattern-matching in object-oriented languages is usually restricted to searches by method name (“selector” in Smalltalk terminology).

It turned out to be tiny. Traits are represented as functions from a message to a handling thunk or false (representing “does not understand this message”). Implementing the trait composition operations (sum, override, remove, and rename) is done in a combinator-like way. The core routine for finding a method to invoke for a given message is twenty-one lines long and dirt simple.
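The flavour of that representation can be sketched in Python (hypothetical names; the original is PLT Scheme and uses full pattern matching, whereas this sketch compares messages by simple equality):

```python
# A trait is a function from a message to a handling thunk, or None for
# "does not understand" (the Scheme version returns #f).

def single(message, handler):
    # a one-clause trait
    return lambda msg: handler if msg == message else None

def override(a, b):
    # try a first, falling back to b
    return lambda msg: a(msg) or b(msg)

def tsum(a, b):
    # like override, but both traits answering the same message is an error
    def lookup(msg):
        ha, hb = a(msg), b(msg)
        if ha is not None and hb is not None:
            raise LookupError(f"clause conflict on {msg!r}")
        return ha if ha is not None else hb
    return lookup

def send(trait, msg, *args):
    # the core "find a method and invoke it" routine
    handler = trait(msg)
    if handler is None:
        raise LookupError(f"does not understand {msg!r}")
    return handler(*args)

point = tsum(single('x', lambda: 1), single('y', lambda: 2))
```

Composed this way, remove and rename fall out as similar wrappers that filter or translate the message before delegating.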

If this spike were to be taken further, perhaps integrated with an existing OO system for Scheme, a few things would need to change. Firstly, while using full pattern-matching for message dispatch leads to a very user-friendly OO programming system, very few existing object systems do anything more than method-name searches, so the traits implementation would have to become more method-name based. Secondly, it’s not clear how a traits system would interact with multimethod-capable generic functions. Brian Rice has done some thinking and experiments along these lines in the context of Slate.

A couple of years ago, I stumbled across Oleg Kiselyov’s “Monadic Programming in Scheme”. Right near the bottom of the page, he points out a problem with his Scheme implementation of monads, which is that you don’t get automatic resolution of overloading of the bind and return operators. In Haskell, the type system figures out which monad is intended, usually without requiring programmer-supplied type annotations, but in Scheme there’s no analogous constraint-propagation system, so you have to select which monad you’re binding or returning into explicitly.

As part of thinking about the design of ThiNG, I’ve more-or-less settled on a strict Haskell-style monad-based separation of effects from computations, and I’ve come up with a simple runtime constraint-propagation-based way of getting the same automatic bind- and return-overloading as Haskell gives you, using OO dynamic dispatch instead of a static type system. The monads are fully “type-checked” before they’re run, so no actions are taken until the whole monad is correctly assembled.

During the weekend, I implemented a first stab at the constraint-propagation idea in Scheme. Currently it only type-checks the monad immediately before running it, rather than type-checking incrementally based on the best available information as the monad is being assembled. I plan to update the implementation to do as much early constraint propagation as possible, rather than waiting until the last minute. This will allow Oleg’s final example, involving simultaneous use of an IO and a List monad, to be automatically disambiguated.
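The shape of the idea can be sketched in Python (all names here are assumptions of the sketch, not Oleg’s API or mine; real constraint propagation works bidirectionally and incrementally, while this toy merely defers execution and checks monad consistency when the outermost runner supplies the type):

```python
# Deferred monadic actions: nothing runs until a runner such as run_list
# supplies a concrete monad, at which point each node checks consistency.

class MonadType:
    def __init__(self, name, unit, bind):
        self.name, self.unit, self.bind = name, unit, bind

class Deferred:
    def __init__(self):
        self.monad = None          # unknown until resolved
    def unify(self, mt):
        if self.monad is None:
            self.monad = mt
        elif self.monad is not mt:
            raise TypeError(f"monad clash: {self.monad.name} vs {mt.name}")

class Ret(Deferred):               # overloaded `return`
    def __init__(self, value):
        super().__init__()
        self.value = value
    def run(self, mt):
        self.unify(mt)
        return mt.unit(self.value)

class Lift(Deferred):              # inject an already-monadic value
    def __init__(self, mv):
        super().__init__()
        self.mv = mv
    def run(self, mt):
        self.unify(mt)
        return self.mv

class Bind(Deferred):              # overloaded `bind`
    def __init__(self, action, k):
        super().__init__()
        self.action, self.k = action, k
    def run(self, mt):
        self.unify(mt)
        return mt.bind(self.action.run(mt), lambda v: self.k(v).run(mt))

ListM = MonadType('List', lambda v: [v],
                  lambda xs, k: [y for x in xs for y in k(x)])

def run_list(action):
    return action.run(ListM)

# evens of iota(5) in the list monad; Lift([]) plays the role of `fail`:
evens = run_list(Bind(Lift(list(range(5))),
                      lambda i: Ret(i) if i % 2 == 0 else Lift([])))
```

The point of keeping actions deferred is that the whole tree can be checked for a consistent monad assignment before any effect is performed, which is what makes the IO-plus-List mixture in the earlier example safe to disambiguate.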

I’ve been meaning to experiment with Haskell for a while, so last weekend I built a toy evaluator for my most recent ThiNG ideas using GHC. I used the Parsec parser-combinator library to build a parser. The parser and printer together took 86 lines of code; the evaluator itself, a mere 90 lines. The type system both helped and hindered: it caught a few stupid errors I made while initially constructing the program, but on the other hand also forced me into a slightly awkward design (almost identical AST, Value and Pattern types, with concomitant almost-duplication of code) that Scheme’s unityped nature would have allowed me to avoid.

The model of the language I built has allowed me to get a clear idea of how scoping should behave. It’s also clarified the semantics enough that I ought now to be able to build a partial-evaluator for it, to see how hard it’s going to be to get a usefully fast implementation.

The Pure Pattern Calculus, and Extensible Records With Scoped Labels

Two papers very similar to the ideas I’ve been developing for ThiNG have popped up in the last couple of weeks:

  1. Daan Leijen’s Extensible records with scoped labels: describes an intriguing system for typing records similar to those I’m trying to define with ThiNG.

  2. Barry Jay and Delia Kesner’s Pure Pattern Calculus: a fascinating calculus that naturally embeds the lambda-calculus and feels very similar to ThiNG.

Update: Since Google finds the paper now, I’ve added the link to Jay & Kesner’s paper.

Design Problems in ThiNG

At the moment, I’ve three main problems in the work I’m doing designing ThiNG: syntax, semantics, and macros. (Doesn’t leave much, does it?) I’ll cover the syntactic and semantic problems tonight — my thinking on macros is much too fuzzy to write down at the moment.

Syntax

I’m leaning toward a concrete syntax that, like Smalltalk and Slate, makes message-sending the only concise operation, leaving application in particular as almost second-class. Since I expect message sending to make up a much larger fraction of code than raw application, it seems sensible to optimise for that case. Arranging things this way means we can avoid a lot of #quoting.

  1. Message sends through the dynamic-dispatcher (arrows, ==>, read more-or-less as “desugars to”):

    exp1, exp2            ==>   (0: exp1 1: exp2)
    exp1 + exp2           ==>   (0: exp1 #(+): exp2)
    exp1 selector         ==>   (#selector: exp1)
    selector: exp1        ==>   (selector: exp1)            "a problem case"
    exp1 selector: exp2   ==>   (0: exp1 selector: exp2)    "the same case"
    
  2. In-place literal object construction:

    [exp1, exp2]          ==>   [0: exp1 1: exp2]
    [exp1 + exp2]         ==>   [0: exp1 #(+): exp2]
    [exp1 selector]       ==>   [#selector: exp1]
    [selector: exp1]      ==>   [selector: exp1]            "NOT a problem case"
    [exp1 selector: exp2] ==>   [0: exp1 selector: exp2]    "illegal because ambiguous"
    
  3. Quoting (since quotation could be implemented as a macro, this will need to become more Scheme-like in how it ties in with macro (and perhaps regular application) syntax):

    # stx
    
  4. Application:

    exp1 $ exp2           ==>   (exp1 $ exp2)
    
  5. The “data” equivalent of application (see also the “semantics” section below):

    [exp1 $ exp2]         ==>   [exp1 $ exp2]
    

The “problem case” above is interesting: in Smalltalk and Slate, selectors, for instance #foo:bar:, are written foo: 1 bar: 2 when they’re used to send a message. The quoting of foo and bar is implicit — those tokens can’t be interpreted as variable references or variable bindings. In ThiNG, on the other hand, the identity function is [x:x], with the x on the left-hand-side of the colon a variable binding rather than a literal symbol to be used in pattern matching. If I wanted to instead produce an object that responded with some value when sent a literal message #x, I’d have to quote the x in the pattern thus: [#x:x].

When I’m sending a message through the dynamic-dispatcher, for instance (foo: 1 bar: 2), what happens under the hood is something like

currentContext $ [foo: 1 bar: 2]

at which point the problem becomes apparent. Both foo and bar occur as variable bindings - not as literal symbols! What’s intended is

currentContext $ [#foo: 1 #bar: 2]

On the other hand, if a user wrote [zot: 3] instead of (zot: 3), then I’d expect no such implicit quoting to happen.

Thinking about what Scheme might be like if it had more than the limited notion of pattern matching it has (see these two threads), we end up in a similar situation:

(lambda (x y) (+ x y))    ;; bind two variables
(lambda ('x y) ...)       ;; require first argument to be symbol x,
                          ;; and bind second argument

I think the solution of having implicit quoting in the syntactic context of message dispatch (round parens, rather than square brackets) is acceptable: it is very seldom that one will want to send a message to the current context where the message needs to bind a variable, and should one want to do that, it is still possible using the explicit currentContext $ myMessage syntax above. (Caveat: I’m still uncertain about the best way of getting hold of the current context.)

Semantics

At present, matching the value [#x: 1 #y: 2] against the pattern [] will succeed, since the empty pattern places no requirements on its value. This is fine, except that it makes it impossible to introduce pattern-matching-based branching!

The problem is this: Assume the existence of the empty object, and objects constructed from other objects, and nothing else. No symbols, integers, or other literals. With these ingredients, we can produce values such as

[]
[ []      : []      ]
[ []      : [[]:[]] ]
[ [[]:[]] : []      ]
[ [[]:[]] : [[]:[]] ]

etc. The problem is that because there is no ordering between clauses within an object/pattern definition, and because patterns within an object must be nonoverlapping, and because [] matches anything (it’s actually equivalent to discard!), we cannot use pattern-matching to distinguish reliably between any two of the values above.

Take, for instance, the pattern

[           []: a     [[]: []]: b ]

This can be equivalently written

[     [[]: []]: b           []: a ]

Now, when applied to a value [], obviously only the clause resulting in a will match, since the guard on the b clause places requirements on the value that it cannot satisfy. When applied to [[]: []], however, we have a problem, since both guards match — [], as a pattern, places no requirements on its argument at all!

When all we have is the empty object and objects constructed from it, we cannot define non-overlapping patterns. We can fix this in (at least) two ways: we can let clauses within an object be ordered, trying to match the leftmost first, for instance, or we can introduce some other kind of distinction that pattern-matching can get a grip on. I want to keep clauses unordered, for reasons involving method dispatch that I’ve not fully worked out yet, which leaves the introduction of a further distinction.

Fortunately, there is one piece of aggregate syntax that we need for other purposes, which can be used here without confusion: the syntax for application (more generally, without interpretation, simply syntax for pairs, analogous to Scheme’s (x . y)). When interpreted as code, it means application; when interpreted as data, I’m experimenting with the idea of making it mean extension. Assume, for the moment, that infix $ is the concrete syntax for application. Then a $ b means, if interpreted as code, “send the contents of variable b as a message to the receiver denoted by variable a”; and if interpreted as data, (loosely) “extend a with the clauses from b, searching a only if a search within b fails”.

We can then use pattern-matching to distinguish between

([] $  []) $ []          "left-heavy, a.k.a. left"
 [] $ ([]  $ [])         "right-heavy, a.k.a. right"

Now that we have a distinction we can make, we can define structures akin to booleans, natural numbers, characters, and symbols: all the literals we need for working with program representations.

(Pseudocode - I still need to work out how to integrate proper macros)

Define F = ([]$[])$[]
Define T = []$([]$[])

Define Nil = F
Define Cons(a, b) = [ T: a F: b ]

Define Zero = Nil (= F)
Define X * 2 + Y = Cons(Y, X), where X is a number and Y is either
  T for a one-bit, or F for a zero-bit

thus 5 = Cons(T, Cons(F, Cons(T, Nil)))
 and 2 = Cons(F, Cons(T, Nil))
       = [ T:F F:[ T:T F:F ] ]
       = [ ([]$([]$[])):(([]$[])$[])
           (([]$[])$[]):[ ([]$([]$[])):([]$([]$[]))
                          (([]$[])$[]):(([]$[])$[]) ] ]

Define a unicode character to be its codepoint as a number
Define a symbol to be a list of characters

Thus: #x = Cons('x', Nil)
         = Cons(120, Nil)
         = Cons(Cons(F, Cons(F, Cons(F, Cons(T,
                     Cons(T, Cons(T, Cons(T, Nil))))))),
                Nil)
           (etc.)
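The encoding can be checked mechanically. In this Python sketch (an illustration only: tuples stand in for $-pairs, () for the empty object, and dicts for the two-clause Cons objects):

```python
# Nested tuples encode $-pairs; () is the empty object [].

E = ()                       # []
def pair(a, b):              # a $ b
    return (a, b)

F = pair(pair(E, E), E)      # ([] $  []) $ []   "left-heavy, a.k.a. left"
T = pair(E, pair(E, E))      #  [] $ ([]  $ [])  "right-heavy, a.k.a. right"

Nil = F

def Cons(a, b):
    # the two-clause object [ T: a F: b ]
    return {T: a, F: b}

def encode(n):
    # X * 2 + Y = Cons(Y, X): low-order bit first
    if n == 0:
        return Nil
    return Cons(T if n % 2 else F, encode(n // 2))

def decode(v):
    # v[T] is the low bit, v[F] the remaining number
    if v == Nil:
        return 0
    return 2 * decode(v[F]) + (1 if v[T] == T else 0)
```

Running encode(5) reproduces Cons(T, Cons(F, Cons(T, Nil))) as above, and decode(encode(120)) recovers the codepoint of 'x'.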

The naive encoding given above isn’t one that would be useful in a real implementation. For a start, it’d be very inefficient, so meta-level objects would probably be used directly instead; and also, even if that weren’t the case, a more realistic encoding would involve higher-level constructs, and wouldn’t be built so directly on primitive patterns of $. For instance, symbols might be represented as a stream of characters, with [#first: 'x' #rest: #nil], instead of the very low-level markers used above. (Although that particular choice of encoding opens up another can of worms: recursive data-structures!)

Note that I’m still working out the precise meaning of $ when interpreted as extension: for instance, when should the extension be visible as an extension, and when should it fade into the background, acting as if the compound object it produces is a simple object?

Chicken Cairo update

I’ve updated chicken-cairo to keep up with a number of changes in the Cairo API. I also shifted it to darcs — GNU Arch is just too much of a martinet for me.

$ darcs get http://eighty-twenty.org/~mikeb/Darcs/chicken-cairo

I came across an odd problem with Chicken Scheme while trying out my updates: I used records to wrap foreign pointers, so I could check types and provide record printers; however, it seems that these pointers are moved around during garbage collection, meaning that they are trashed between getting them from a C function invocation and using them. Using tagged pointers instead of records appears to avoid this problem. I don’t yet understand why this is so.

The code contains annotations, where useful and I’ve remembered, as to what’s changed in the Cairo API.

ThiNG repository available

I’ve been hesitant to make my code publicly available, since it’s embarrassingly messy, but a couple of people have indicated they’re interested in taking a closer look, so here it is:

$ git clone https://gitlab.com/tonyg/smalltalk-tng

The code is mostly written using Chicken (with my chicken-sdl bindings and a version of my packrat parsing library), and is laid out as follows:

./doc
Contains notes on design and implementation, mostly fairly out-of-date, but containing the odd interesting thought. There are also some discarded experiments in concrete syntax here.
./experiments
Contains syntax experiments, and some spike-solutions exploring ideas I'm considering for inclusion, such as STM and partial evaluation.
./lib
Contains third-party library code: at the moment, just a patched version of Dorai Sitaram's pregexp library.
./r1

A first ThiNG spike implementation, borrowing ideas heavily from Slate. The dispatch algorithm is a Scheme reimplementation of Slate's original Common Lisp dispatch algorithm (the code is structurally almost identical). The main differences between the two languages are that the only mutable state available is through MVars, and that every subexpression is evaluated in parallel. Have a look at boot.thing to get a feel for the language. (Note in particular the differences between #map: and #mapInOrder:.)

This implementation got as far as having image-based persistence, network I/O, a socket-based REPL, and an interface to the SDL graphics output and UI event input frameworks before I decided I'd learnt enough from it and moved on to the next iteration.

./r2
This was a small step on the way to r3.
./r3
This is the spike I'm currently working on (that I mentioned the other day). At the moment, it's a parser and a simple evaluator. Both are broken right this minute, as I'm working on some quite fundamental changes to the syntax of the language.

From TLA to Darcs

I’ve finally gotten around to moving many of our TLA archives over to Darcs. I simply sealed the TLA versions and imported the sealed snapshot into Darcs, without transferring the history.

The main public projects I’ve darcsified are chicken-sdl and gyre (which is the tool we use to publish this site).

Second ThiNG spike

Over the weekend I built a second experimental implementation [1] of ThiNG (older articles: 1, 2), a simple normal-order pattern-matching-based evaluator with a parser built out of packrat parsing combinators and my hacked-on version of Dorai Sitaram’s pregexp library.

I’ve chosen to view functions and records/objects exactly the same way: as pattern-matching message-dispatchers. Here are a few expressions, to give some feel for the language:

"This is a comment, just as in Smalltalk."

[x: x]              "This is the identity function."
[x: x] 123          "This is the identity function applied to the literal integer 123."

"This function takes a 2-tuple, and returns the tuple with its elements swapped."
[[x, y]: [y, x]]

"This is a sketch of a mapping function."
define map [
  (f Nil)           : Nil
  (f [Hd: h Tl: t]) : [Hd: f h Tl: map f t]
]

Note that objects ([Key: value Key: value ...]) are the same as functions ([Pattern: expression Pattern: expression ...]) since (1) the language is lazy, and (2) atoms (keys) are just special-cases of patterns.

I noticed an interesting thing while I was implementing pattern matching. Consider the pattern [Hd: h Tl: t]. The idea is that it should match an object that can receive Hd or Tl as arguments, and bind the variables h and t respectively to the results of applying the object to those arguments. That means that when function/object syntax occurs in value context, it is interpreted as [pattern: value pattern: value ...], but (and this is the interesting bit) in pattern context, it is interpreted as [value: pattern value: pattern ...].

When a pattern [X: v] is being matched against some function, perhaps [X: 1 Y: 2], we see:

    [[X: v]: v] [X: 1 Y: 2]
--> let v = ([X: 1 Y: 2] X) in v
--> let v = 1 in v
--> 1

A slightly more complex example:

    [[[X: 123]: [v, w]]: v] [[X: n]: [n * 2, n * 3]]
--> let [v, w] = ([[X: n]: [n * 2, n * 3]] [X: 123]) in v
--> let [v, w] = [123 * 2, 123 * 3] in v
--> 123 * 2
--> 246
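The two reductions above can be mimicked in Python if we model objects/functions as callables from a message to a result (a hypothetical encoding for illustration, not the actual implementation):

```python
# Objects/functions as callables from a message to a result.

def obj(clauses):
    # clauses: (message, result) pairs, compared by simple equality
    def receive(msg):
        for key, result in clauses:
            if key == msg:
                return result
        raise LookupError("does not understand")
    return receive

# [[X: v]: v] applied to [X: 1 Y: 2]:
# the pattern [X: v] binds v by sending X to the value.
value = obj([('X', 1), ('Y', 2)])
v0 = value('X')                                   # v0 is 1

# [[[X: 123]: [v, w]]: v] applied to [[X: n]: [n * 2, n * 3]]:
# match by applying the function to the pattern's key, [X: 123].
f = lambda msg: [msg('X') * 2, msg('X') * 3]
v, w = f(obj([('X', 123)]))                       # v is 246, w is 369
```

In both cases the “match” is literally an application of the value to the pattern’s key, which is exactly the value-context/pattern-context reversal described below.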

Variable references on the left-hand-side of a colon in value context are binding occurrences, and on the right-hand-side in value context are referencing occurrences; but in pattern context this is reversed, with variables on the left-hand-side of a colon being references and on the right-hand-side being binders. An interesting question about the scope of variable references inside patterns arises: what should their scope be, considering that functions/objects do not have a defined ordering on their pattern-matching clauses?


  1. The first experimental implementation was pretty much a pure-functional dialect of Slate, with ML-style reference cells (sparsely used). The system grew to the point where it could support an SDL interface and multiple socket-based tty consoles, and then I decided I’d learnt enough and it was time to start again.