Control in the Cloud

Another small thought on the topic of control in distributed systems: one of the first libraries people build on top of object-oriented languages is the observer/observable change-notification pattern. I think this is interesting. Object-oriented languages provide linguistic support for control flow based on unicast message passing to a single receiver (glossing over the slight variation on the theme provided by multiple-dispatch-based languages), but I’m not aware of any language directly supporting multicast, broadcast or anycast messaging.
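
To make the contrast concrete, here is a small illustrative sketch (in Scheme, using closures rather than objects; it doesn’t correspond to any particular library) of the multicast machinery that ends up being built by hand on top of single-receiver calls:

;; An observable keeps its own list of observers and broadcasts to them,
;; one unicast call at a time.
(define (make-observable)
  (let ((observers '()))
    (lambda (msg . args)
      (case msg
        ((subscribe) (set! observers (cons (car args) observers)))
        ((notify)    (for-each (lambda (o) (apply o args)) observers))
        (else        (error "unknown message" msg))))))

;; (define temperature (make-observable))
;; (temperature 'subscribe (lambda (v) (display v) (newline)))
;; (temperature 'notify 21.5)   ;; every subscribed observer receives the new value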

FIB in PowerPC assembly and in JONESFORTH

Recently I’ve been teaching myself PowerPC assembly by porting JONESFORTH to PowerPC on Mac OS X. It struck me that I should run the same little Fibonacci-sequence microbenchmark that I ran lo these many years past. The results were interesting:

Language       Implementation detail       Time per (fib 29) call (ms)   Ops/s       Ratio (opt. C)   Ratio (unopt. C)
PPC assembly   -                           24                            935983000   0.43             0.205
FORTH          JONESFORTH ported to PPC    277                           81096000    4.95             2.37

The hand-coded assembly beats all the other entrants (perhaps unsurprisingly). The naive indirect-threaded FORTH is the fastest interpreted language, merely 5 times slower than fully optimised C.

Here’s the JONESFORTH code:

: FIB DUP 2 >= IF 1- DUP RECURSE SWAP 1- RECURSE + ELSE DROP 1 THEN ;

and here’s the PPC assembly (arg and result in r3):

_SFIB:  cmpwi   r3,2
        bge     1f
        li      r3,1
        blr
1:      mflr    r0
        stw     r0,-4(r1)
        addi    r3,r3,-1
        stwu    r3,-8(r1)
        bl      _SFIB
        lwz     r4,0(r1)
        stw     r3,0(r1)
        addi    r3,r4,-1
        bl      _SFIB
        lwz     r4,0(r1)
        add     r3,r3,r4
        lwz     r0,4(r1)
        addi    r1,r1,8
        mtlr    r0
        blr
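
For reference (it was not one of the benchmarked entrants), here is a direct Scheme transcription of the recurrence both versions compute, with fib(0) = fib(1) = 1:

;; Same recurrence as the FORTH and assembly versions above.
(define (fib n)
  (if (>= n 2)
      (+ (fib (- n 1)) (fib (- n 2)))
      1))

;; (fib 29)  =>  832040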

Reflection on floats

Gordon Weakliem writes about literal hash-table syntax, and touches on some of the problems that programmers run into with floating-point numbers. He writes:

12 years ago, I spent half a day in the debugger trying to figure out why the computer would not believe that 0.0 == 0.0 […]

This triggered two thoughts:

  • floating-point numbers in GUIs should be displayed differently depending on whether the printed text is an exact representation of the underlying value or an inexact one - which might have saved Gordon’s 12 hours. For instance, inexact representations could be rendered in a slightly different shade from exact ones, making the hidden difference between the two zeros apparent at a glance; and

  • programming languages must provide reflective access to the representation of floating-point numbers, if they support them at all. As a programmer, I must be able to get at the exact bit sequence used to represent the sign, mantissa and exponent of a float.

Regarding reflective access to floats, Java provides floatToIntBits and intBitsToFloat, and in C the traditional pointer-based type-punning idiom (only quasi-portable, and technically undefined behaviour under C’s strict-aliasing rules) achieves the same effect:

  float f = 1.23;
  int i = *((int *) &f);   /* reinterprets the bits of f as an int; assumes sizeof(int) == sizeof(float) */
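
To bring the point back to Scheme: assuming a Scheme that provides real->floating-point-bytes and integer-bytes->integer (recent versions of PLT Scheme do), a sketch of the kind of reflective access I’m asking for might look like this:

;; Extract the sign, exponent and mantissa fields of an IEEE-754 double.
(define (double->fields x)
  (let ((bits (integer-bytes->integer
               (real->floating-point-bytes x 8 #t) #f #t)))
    (list (cons 'sign     (arithmetic-shift bits -63))
          (cons 'exponent (bitwise-and (arithmetic-shift bits -52) #x7FF))
          (cons 'mantissa (bitwise-and bits #xFFFFFFFFFFFFF)))))

;; (double->fields 1.0)  =>  ((sign . 0) (exponent . 1023) (mantissa . 0))
;; (double->fields 0.0)  =>  ((sign . 0) (exponent . 0) (mantissa . 0))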

Submatches vs. Recursion in ThiNG Patterns

Submatching in a pattern, +var@pattern, looks suspiciously similar to recursion in a value, var=value. In each case, we expect a value which is both named and processed, and in each case, a binding is introduced into code. The processing in the submatch case is a further pattern-match operation, and in the recursion case it is a nested evaluation.

The main difference between the two forms, besides the overall context, is the scope of the introduced binding: in the submatch, the pattern variable to the left of the @ sign is (at present) bound only in the right-hand-side of the overall pattern, whereas in the recursive value the variable to the left of the = is bound in the right-hand-side of the recursion itself. Another important difference is that in the submatch context, the +var is clearly just a binding form, whereas in the recursion context, var can be seen as acting both as a binding form and as a reference to the bound value. Another way of looking at it, though, has var acting solely as a binding form, with the right-hand-side of the = being the actual result of the expression; in this interpretation, the fact that the left- and right-hand-sides are equivalent is not relevant.

What if the two forms were unified, based on their similarities, using either var=pattern-or-value or +var=pattern-or-value for both pieces of syntax? The former seems out-of-place for submatches, since we want to clearly mark bindings that will propagate their bound value into the code on the right-hand-side of an object clause; and the latter seems out-of-place in a recursive value, since +var implies a binding object, which is a first-class datum in ThiNG.

Compare and contrast the following — a string-interleaving routine using the current syntax for both submatch and recursion:

sepList: [+s: loop=[(cons +x +more@(cons _ _)): (x ++ s ++ loop more)
                    (cons +x _): x
                    _: EmptyString]]

The same routine using +var=pattern-or-value for both:

sepList: [+s: +loop=[(cons +x +more=(cons _ _)): (x ++ s ++ loop more)
                     (cons +x _): x
                     _: EmptyString]]

Finally, the same routine using var=pattern-or-value for both:

sepList: [+s: loop=[(cons +x more=(cons _ _)): (x ++ s ++ loop more)
                    (cons +x _): x
                    _: EmptyString]]
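
For comparison with a more familiar notation, here is a rough Scheme analogue of the same routine (the use of Racket’s match library is purely my choice for the sketch): the submatch +more@(cons _ _) becomes an and-pattern, and the recursive loop=[...] binding becomes a named let.

(require racket/match)

;; A rough analogue of sepList: s is the separator, xs a list of strings.
(define (sep-list s)
  (lambda (xs)
    (let loop ((xs xs))
      (match xs
        ((cons x (and more (cons _ _))) (string-append x s (loop more)))
        ((cons x _) x)
        (_ "")))))

;; ((sep-list ", ") (list "a" "b" "c"))  =>  "a, b, c"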

Lastly, there’s the question of scope: if the same syntax is used for both submatch and for recursion, what happens if we adjust the scoping rules to be as similar as possible? It doesn’t make sense for recursive-value’s scoping rule to be changed to be the same as submatch’s scoping rule — there’s no right-hand-side in a value for the binding to be visible in! — but the other way around might be worth considering: let the binding introduced as part of a submatch be visible as part of the right-hand-side of the submatch. What happens then? Do we end up with infinite patterns? Do we instead end up with a kind of unification-constraint in which an object matching a submatch that is referred to within that same submatch is constrained to have the same recursive topology as the pattern itself?

[Update: Something I forgot to mention — with the scope rule for submatches as it stands, it’s possible to evaluate and then compile patterns ahead of execution time. Changing the scope rule can make compilation of ThiNG code more difficult, forcing the system to be more late-bound than perhaps it needs to be.]

Partial-evaluation of Monads in Scheme - A New Hope?

The other day, I noted that my generic-monad-in-Scheme implementation suffered from the flaw of not being able to type-check monad assemblies before any actions had been executed, and that in general such ahead-of-time checking is not possible without abstract interpretation of some kind, because the functional code in the right-hand-side of the bind operator can decide which action to splice into the monad next:

(define (goes-wrong)
  (run-io (mlet* ((val mread))
            (if val ;; if the user entered #f, we break, otherwise we're fine
                (mlet* ((_ (mdisplay "Was true\n"))) (return 'apples))
                'bananas)))) ;; Note: missing "(return ...)"!

Perhaps a partial-evaluator can play the part of a Haskell-type-system-like abstract interpretation well enough to drive the type-safety checks through the code at PE time (loosely equivalent to static-analysis time) for most cases (leaving the remaining cases for runtime-detection, as in the present implementation, and perhaps warning the user)? In order to find out, my next experiment in this direction will be marrying the monad code to the partial-evaluator I implemented some months ago.

Monads in Scheme - A Retraction

I don’t know what crack I was smoking the other day, but my implementation of general monads not only does not have the property I claimed for it, viz. that it provides the same level of type-checking as Haskell ahead of execution-time, but it cannot provide such a level of ahead-of-runtime checking, whether the hosting language is eager or lazy.

To see the first, try the following example:

> (run-io (mlet* ((_ (mdisplay "HERE\n"))
                  (_ sget))
            (return 232)))
HERE
wrong monad-class ((mclass #5(struct:<monad-class> io [...]))
      (m #3(struct:<monad> #5(struct:<monad-class> state [...]) [...]))) 

Note how an action is performed before the type error is detected. In general, it is not possible to detect the type-error ahead of time without abstract interpretation of the level of sophistication that Haskell’s type-checker provides, as the following example demonstrates:

> (define (goes-wrong)
    (run-io (mlet* ((val mread))
              (if val
                  (mlet* ((_ (mdisplay "Was true\n"))) (return 'apples))
                  'bananas)))) ;; Note: missing "(return ...)"!
> (goes-wrong)
#t
Was true
apples
> (goes-wrong)
#f
<monad>-ref: expects type <struct:<monad>> as 1st argument, given: bananas; other arguments were: 0
[...]

The problem is that the system can decide how to continue based on I/O performed at runtime, and so the composition performed by bind cannot be checked until after the left-hand-side has been fully evaluated. [Aside: the <monad>-ref error was a symptom of the unexpected problem described below — now that the code has been fixed, it instead complains “not a monad bananas”, as it should.]

This affects all my monad implementations, not just the IO monad:

> (run-st (mlet* ((a sget)
                  (b (mdisplay "OH NO\n"))
                  (_ (sput 4)))
            (return (+ a 1)))
          2)
OH NO
(3 . 4)

Actually, and this is unexpected ("OH NO" indeed!), here it’s even worse than just not catching the type-error before any actions have executed: it isn’t catching the error at all. Here’s what that mlet* expands into:

(>>= sget
     (lambda (a)
       (>>= (mdisplay "OH NO\n")
            (lambda (b)
              (>>= (sput 4)
                   (lambda (_)
                     (return (+ a 1))))))))
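
For reference, a syntax-rules macro along the following lines produces exactly the expansion shown (the real library’s definition of mlet* may of course differ in detail):

;; Each binding becomes one >>= whose continuation is a one-argument lambda.
(define-syntax mlet*
  (syntax-rules ()
    ((mlet* () body) body)
    ((mlet* ((var exp) more ...) body)
     (>>= exp (lambda (var) (mlet* (more ...) body))))))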

My implementation of >>= isn’t “reaching into the lambda” in the case of the IO monad. [Interlude; sounds of programming from behind the curtain.] The problem was my use of the (bad) “delayed monad” idea in implementing the IO monad — I’ve just replaced that idea with a more explicit representation of a delayed action, and the code now behaves as I expected. How embarrassing! Here’s the trace now that I’ve fixed that problem:

> (mlet* ((a sget)
          (b (mdisplay "OH NO\n"))
          (_ (sput 4)))
    (return (+ a 1)))
#3(struct:<monad> #5(struct:<monad-class> state [...]) [...])

This shows that before we run the monad, it appears to be a well-behaved state monad instance. Running it now causes the error to be detected:

> (run-st (mlet* ((a sget)
                  (b (mdisplay "OH NO\n"))
                  (_ (sput 4)))
            (return (+ a 1)))
          2)
wrong monad-class ((mclass #5(struct:<monad-class> state [...]))
      (m #3(struct:<monad> #5(struct:<monad-class> io [...]) [...]))) 

This demonstrates what I was hoping to show above, before I was derailed by the delayed-monad issue: that the impossibility of using this technique to type-check monad assemblies ahead of any actions being performed applies to all monad kinds, not just the IO monad. Of course, it’s less of a problem for non-side-effecting monads, such as the state monad, since performing an action (in the case immediately above, reading the hidden state variable) in such a monad has no effect on the outside world by the time the type-error is detected.

In conclusion, I was mistaken about Scheme’s ability to achieve the same level of ahead-of-execution type-checking as Haskell manages — I don’t believe it to be possible without essentially Haskell-equivalent abstract-interpretation machinery bolted on to the language. Nonetheless, the library I’ve developed does catch type errors eventually — so long as the faulty code is eventually run! This is no worse than regular dynamically-typed language code, and is still a useful system, so I’m not completely disappointed. I will still be able to use the idea to structure side-effects in ThiNG.

Monads in Scheme - Update

Quickly revisiting the other day’s article on monads in Scheme: I’ve updated my experimental implementation to propagate monad type information both forward, from the arguments of the bind operation to its result, and backward, from the expected result type to the required argument types of the bind. Now Oleg’s example (see near the bottom of the page, where there’s an expression (put-line "Enter a number: ")) can be translated into the following working code:

(define oleg-example-mixed-monad
  (mlet* ((_ (mdisplay "Enter a number: "))
          (n mread)
          (all-n (return (iota n)))
          (evens (return (run-list (mlet* ((i all-n))
                                     (if (even? i)
                                         (return i)
                                         (fail "odd"))))))
          (_ (mdisplay "Computed "))
          (_ (mdisplay (length evens)))
          (_ (mdisplay " evens\n")))
    (return evens)))

When run with (run-io oleg-example-mixed-monad), it produces sessions like the following (the “33” is the user’s input):

Enter a number: 33
Computed 17 evens
(0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32)

The system could be sweetened up with a bit of generic-function or other OO sugar, but I’d say even without extra sugar that this experiment has been successful in demonstrating the feasibility of the technique. It looks like I’ll be able to use the idea for structuring effects in ThiNG.

Trait Composition in ThiNG

Trait composition is one of the fundamental ideas in ThiNG r3/r4. It’s used to assemble pattern-matching clauses in functions, to assemble definitions into a module, to (functionally) update data structures, to augment generic functions with new definitions, and to assemble modules into programs. Here are two example definitions, of cons and of map:

define cons [+car: [+cdr: [First: car Rest: cdr]]]
define map [+f: loop=[(cons +a +d): (cons (f a) (loop d)) Nil:Nil]]

Both these definitions use objects with more than one clause - for instance [First: car Rest: cdr] contains two clauses, as does the object after the equal-sign in loop=[...] in map. Multi-clause objects are defined as being constructed from single clauses using the trait composition operators sum (+), override (/), remove (-) and rename (@).

If we stick to an imagined syntax where only single-clause objects and trait composition operators can be used, then the above definitions might look like this:

define cons [+car: [+cdr: ([First: car] + [Rest: cdr])]]
define map [+f: loop=([(cons +a +d): (cons (f a) (loop d))] + [Nil:Nil])]

Note that sum (+) is used instead of override (/), since the pattern-pairs First vs Rest, and (cons +a +d) vs Nil don’t overlap. We could have used override, but we would have forgone the checks made for overlapping patterns that the sum operator provides.

Module definitions and generic functions in ThiNG work similarly. There are two dialects of ThiNG I’ve been thinking about: a more traditional one in which messages are sent directly to a single, explicitly-given receiver, and one in which a generic function dispatches each message. Let’s call these dialects ThiNG/1 (single distinguished receiver) and ThiNG/N (multiple dispatch over N receivers) respectively.

In ThiNG/N, you can think of each definition within a module as being a clause within a (part of a) global generic function. Here’s a hypothetical example, recalling that x y: z is syntactic sugar for the message [0: x y: z]:

module List = [
  [+a cons: +b]: [First: a Rest: b]

  [+f mapOver: (+head cons: +tail)]: ((f head) cons: (f mapOver: tail))
  [+f mapOver: Nil]: Nil
]

(As you can see, this leads to quite a different style of programming compared to ThiNG/1.) Rewriting this using single-clause-with-trait-composition syntax gives something like:

module List =
  [ ([0: +a] + [cons: +b]): ([First: a] + [Rest: b]) ] +
  [ ([0: +f] + [mapOver: (+head cons: +tail)]): ((f head) cons: (f mapOver: tail)) ] +
  [ ([0: +f] + [mapOver: Nil]): Nil ]

Importing a module into the current toplevel is a matter of replacing the current generic-function-dispatching object - let’s call it ENV - with itself either summed or overridden by the imported module definition, i.e. List + ENV or List / ENV.

I was interested to read an old message from Marcel Weiher (who gave an interesting talk at the recent UK-Smalltalk meeting - see also here) in which he writes:

I do have a feeling that the direction [toward ultra-simplicity that Self takes], while very fruitful, turned out to be “too uniform”, in some senses. There is no fence at all to the meta-level at one point ( object vs. class ) and still a big one at another ( object vs. code/method ).

In ThiNG, the fence he talks about between objects and methods is removed - or at least made much smaller. Objects are simultaneously data-structures and methods/functions. If an object has a clause keyed on a pattern containing a binding, then it acts more like a method than a field; if the pattern doesn’t contain any bindings, it acts more like a field than a method. Lazy evaluation helps blur the distinction here.

Recent spikes in ThiNG

Since I last posted about ThiNG, my ideas have shifted closer to those expressed in Jay & Kesner’s Pure Pattern Calculus, and I’ve built a few experimental spikes to gain some concrete experience with some of the language features and designs I’ve been thinking about - among them, the ideas that traits and modules share a number of core characteristics, and that using monads to structure effects can be profitable even in a dynamically-typed environment.

A few weeks ago, I implemented a simple, small traits language based on the formal model of traits of Schärli, Nierstrasz et al. The language has no fields (methods suffice!) and no classes (prototypes suffice!). The implementation, a macro which embeds traits into regular PLT Scheme, is a toy, intended only to explore the idea of traits; in particular, it doesn’t “flatten” composed traits.

I used PLT Scheme’s match operator to match method signatures against method invocations, which is a little unusual - pattern-matching for dispatch in object-oriented languages is usually restricted to searching by method name (“selector” in Smalltalk terminology).

It turned out to be tiny. Traits are represented as functions from a message to either a handling thunk or false (false representing “does not understand this message”). The trait composition operations (sum, override, remove and rename) are implemented in a combinator-like way. The core routine for finding a method to invoke for a given message is twenty-one lines long and dirt simple.
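
To give a flavour of how small such a thing can be, here is a minimal sketch along the lines just described. It is not the actual spike - in particular it dispatches on bare selectors compared with equal?, where the real thing matches full message patterns - but the combinator structure is the same kind of thing:

;; A trait maps a message to a handler thunk, or #f for "not understood".
(define (single-method selector handler)
  (lambda (msg) (and (equal? msg selector) handler)))

;; Sum: it is an error for both traits to define the same message.
(define (trait-sum t1 t2)
  (lambda (msg)
    (let ((h1 (t1 msg)) (h2 (t2 msg)))
      (cond ((and h1 h2) (error "conflicting definitions for" msg))
            (h1 h1)
            (else h2)))))

;; Override: definitions in t1 shadow those in t2.
(define (trait-override t1 t2)
  (lambda (msg) (or (t1 msg) (t2 msg))))

;; Remove: hide one selector.
(define (trait-remove t selector)
  (lambda (msg) (and (not (equal? msg selector)) (t msg))))

;; Rename: answer `new' with whatever the underlying trait gave for `old'.
(define (trait-rename t old new)
  (lambda (msg)
    (cond ((equal? msg new) (t old))
          ((equal? msg old) #f)
          (else (t msg)))))

;; Method lookup: ask the trait for a handler, then invoke it.
(define (send-message trait msg)
  (let ((handler (trait msg)))
    (if handler (handler) (error "does not understand" msg))))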

If this spike were to be taken further, perhaps integrated with an existing OO system for Scheme, a few things would need to be changed. Firstly, while using full pattern-matching for message dispatch leads to a very user-friendly OO programming system, very few existing object systems do anything more than method name searches, so the traits implementation would have to be changed to be more method name based; and secondly, it’s not clear how a traits system would interact with multimethod-capable generic functions. Brian Rice has done some thinking and experiments along these lines in the context of Slate.

A couple of years ago, I stumbled across Oleg Kiselyov’s “Monadic Programming in Scheme”. Right near the bottom of the page, he points out a problem with his Scheme implementation of monads, which is that you don’t get automatic resolution of overloading of the bind and return operators. In Haskell, the type system figures out which monad is intended, usually without requiring programmer-supplied type annotations, but in Scheme there’s no analogous constraint-propagation system, so you have to select which monad you’re binding or returning into explicitly.

As part of thinking about the design of ThiNG, I’ve more-or-less settled on a strict Haskell-style monad-based separation of effects from computations, and I’ve come up with a simple runtime constraint-propagation-based way of getting the same automatic bind- and return-overloading as Haskell gives you, using OO dynamic dispatch instead of a static type system. The monads are fully “type-checked” before they’re run, so no actions are taken until the whole monad is correctly assembled.
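
Here is a minimal sketch of the dispatch half of the idea (with hypothetical structure and field names; this is not the actual library): each monadic value records its monad-class, and >>= resolves the overloading by consulting the class of its left-hand argument at run time. Resolving the class of a bare return is the part that needs the constraint propagation just described, and isn’t shown here.

;; Each concrete monad supplies its own binder in its monad-class.
(define-struct <monad-class> (name binder))
(define-struct <monad> (mclass payload))

;; bind defers to the binder of the value's own monad-class, so the
;; overloading is resolved by ordinary dynamic dispatch.
(define (>>= m k)
  (if (<monad>? m)
      ((<monad-class>-binder (<monad>-mclass m)) m k)
      (error "not a monad" m)))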

During the weekend, I implemented a first stab at the constraint-propagation idea in Scheme. Currently it only type-checks the monad immediately before running it, rather than type-checking incrementally based on the best available information as the monad is being assembled. I plan to update the implementation to do as much early constraint propagation as possible, rather than waiting until the last minute. This will allow Oleg’s final example, involving simultaneous use of an IO and a List monad, to be automatically disambiguated.

I’ve been meaning to experiment with Haskell for a while, so last weekend I built a toy evaluator for my most recent ThiNG ideas using GHC. I used the Parsec parser-combinator library to build a parser. The parser and printer together took 86 lines of code; the evaluator itself, a mere 90 lines. The type system both helped and hindered: it caught a few stupid errors I made while initially constructing the program, but on the other hand also forced me into a slightly awkward design (almost identical AST, Value and Pattern types, with concomitant almost-duplication of code) that Scheme’s unityped nature would have allowed me to avoid.

The model of the language I built has allowed me to get a clear idea of how scoping should behave. It’s also clarified the semantics enough that I ought now to be able to build a partial-evaluator for it, to see how hard it’s going to be to get a usefully fast implementation.

The Pure Pattern Calculus, and Extensible Records With Scoped Labels

Two papers describing ideas very similar to those I’ve been developing for ThiNG have popped up in the last couple of weeks:

  1. Daan Leijen’s Extensible records with scoped labels: describes an intriguing system for typing records similar to those I’m trying to define with ThiNG.

  2. Barry Jay and Delia Kesner’s Pure Pattern Calculus: a fascinating language that naturally embeds the lambda-calculus and feels very similar to ThiNG.

Update: Since Google now finds the paper, I’ve added the link to Jay & Kesner’s paper.