Mobile Batch Photo Conversion on Android

We take a lot of photos on a Sony Xperia Z3 running Cyanogenmod. Unfortunately, the camera support for that particular model introduces pincushion distortion on every photo it takes:

[Two photos: the distorted original (“wonky”) and the corrected version (“fixed”), plus an animation superimposing the two]

Notice the curved edges of the picture frames in the first image. The second image has been corrected. The animation shows both images superimposed, so you can see the effect of the distortion more clearly.

We haven’t been able to find an app for automatically correcting the input from the camera as it comes in, so we’ve cobbled together a solution based on Dropbox. It’s crude but effective!

Overview of our solution

[Schematic of the workflow]

The solution is to use a cloud server linked to Dropbox to process images placed in a special Dropbox folder.

On the phone, I choose the pictures I want to send through the repair pipeline, and “share” them to the Dropbox app in the correct folder.

Step-by-step, here’s how it works:

  1. Take a photo on the phone.
  2. Copy the image to a special folder in Dropbox.
  3. Dropbox copies it over to the server, which scans the folder periodically.
  4. A little Python program fixes the files and puts them into a different Dropbox folder.

After this process, the repaired images are available via the Dropbox app on the phone.

Details

I use djb’s daemontools to run the little program I wrote that scans Dropbox for JPGs. That program calls out to ImageMagick’s convert program to repair the distortion, and then writes the results back to Dropbox and deletes the input file.

We found the right parameters to use, convert input.jpg -distort barrel '0.0132 -0.07765 0.14683' output.jpg, from this XDA-Developers forum page.
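
The program itself is only a small amount of glue. Here’s a minimal sketch of the idea (not the actual program, which is linked below; the folder names and the use of a locally-synced Dropbox directory are assumptions for illustration):

import os, subprocess, time

INBOX = os.path.expanduser("~/Dropbox/camera-fixup/in")
OUTBOX = os.path.expanduser("~/Dropbox/camera-fixup/out")

while True:
    for name in os.listdir(INBOX):
        if not name.lower().endswith(".jpg"):
            continue
        src = os.path.join(INBOX, name)
        dst = os.path.join(OUTBOX, name)
        # Apply the distortion correction found above.
        subprocess.check_call(["convert", src,
                               "-distort", "barrel", "0.0132 -0.07765 0.14683",
                               dst])
        os.remove(src)  # delete the input only once convert has succeeded
    time.sleep(30)      # rescan the folder periodically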

I’ve put the source code for the Python program on GitHub:

https://github.com/tonyg/fix-xperia-z3-barrel-distortion

History of Actors

Yesterday, I presented the history of actors to Christos Dimoulas’ History of Programming Languages class.

Here are the written-out talk notes I prepared beforehand. There is an annotated bibliography at the end.


Introduction

Today I’m going to talk about the actor model. I’ll first put the model in context, and then show three different styles of actor language, including two that aim to be realistic programming systems.

I’m going to draw on a few papers:

  • 2016 survey by Joeri De Koster, Tom Van Cutsem, and Wolfgang De Meuter, for taxonomy and common terminology (SPLASH AGERE 2016)

  • 1990 overview by Gul Agha, who has contributed hugely to the study of actors (Comm. ACM 1990)

  • 1997 operational semantics for actors by Gul Agha, Ian Mason, Scott Smith, and Carolyn Talcott. (J. Funct. Program. 1997)

  • brief mention of some of the content of the original 1973 paper by Carl Hewitt, Peter Bishop, and Richard Steiger. (IJCAI 1973)

  • Joe Armstrong’s 2003 dissertation on Erlang.

  • 2005 paper on the language E by Mark Miller, Dean Tribble, and Jonathan Shapiro. (TGC 2005)

I’ve put an annotated bibliography at the end of this file.

Actors in context: Approaches to concurrent programming

One way of classifying approaches is along a spectrum of private vs. shared state.

  • Shared memory, threads, and locking: very limited private state, almost all shared
  • Tuple spaces and my own research, Syndicate: some shared, some private and isolated
  • Actors: almost all private and isolated, just enough shared to do routing

Pure functional “data parallelism” doesn’t fit on this spectrum - it lacks shared mutable state entirely.

The actor model

The actor model, then, is

  • asynchronous message passing between entities (actors)
    • with guaranteed delivery
    • addressing of messages by actor identity
  • a thing called the “ISOLATED TURN PRINCIPLE”
    • no shared mutable state; strong encapsulation; no global mutable state
    • no interleaving; process one message at a time; serializes state access
    • liveness; no blocking

In the model, each actor performs “turns” one after the other: a turn is taking a waiting message, interpreting it, and deciding on both a state update for the actor and a collection of actions to take in response, perhaps sending messages or creating new actors.

A turn is a sequential, atomic block of computation that happens in response to an incoming message.
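
To make that concrete, here is a tiny illustrative sketch (my own gloss, not taken from any of the papers) of the mailbox loop the model implies:

import queue

class Actor:
    # A behaviour maps a message to (new behaviour, list of actions);
    # invoking it is one turn: sequential, atomic, and non-blocking.
    def __init__(self, behaviour):
        self.mailbox = queue.Queue()
        self.behaviour = behaviour

    def run(self):
        while True:
            message = self.mailbox.get()                       # take one waiting message
            self.behaviour, actions = self.behaviour(message)  # one isolated turn
            for action in actions:                             # sends, creates, ...
                action()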

What the actor model buys you:

  • modular reasoning about state in the overall system, and modular reasoning about local state change within an actor, because state is private, and access is serialized; only have to consider interleavings of messages

  • no “deadlocks” or data races (though you can get “datalock” and other global non-progress in some circumstances, from logical inconsistency and otherwise)

  • flexible, powerful capability-based security

  • failure isolation and fault tolerance

It’s worth remarking that actor-like concepts have sprung up several times independently. Hewitt and many others invented and developed actors in the 1970s, but there are two occasions where actors seem to have been independently reinvented, as far as I know.

One is work on a capability-based operating system, KeyKOS, in the 1980s, which involved a design very much like Hewitt’s actors, feeding into research that led ultimately to the language E.

The other is work on highly fault-tolerant designs for telephone switches, also in the 1980s, which culminated in the language Erlang.

Both languages are clearly actor languages, and in both cases, apparently the people involved were unaware of Hewitt’s actor model at the time.

Terminology

There are two kinds of things in the actor model: messages, which are data sent across some medium of communication, and actors, which are stateful entities that can only affect each other by sending messages back and forth.

Messages are completely immutable data, passed by copy, which may contain references to other actors.

Each actor has

  • private state; analogous to instance variables
  • an interface; which messages it can respond to

Together, the private state and the interface make up the actor’s behaviour, a key term in the actor literature.

In addition, each actor has

  • a mailbox; inbox, message queue
  • an address; denotes the mailbox

Within this framework, there has been quite a bit of variety in how the model appears as a concrete programming language.

De Koster et al. classify actor languages into

  • The Classic Actor Model (create, send, become)
  • Active Objects (OO with a thread per object; copying of passive data between objects)
  • Processes (raw Erlang; receive, spawn, send)
  • Communicating Event-Loops (no passive data; E; near/far refs; eventual refs; batching)

We see instances of the actor model all around us. The internet’s IP network is one example: hosts are the actors, and datagrams the messages. The web can be seen as another: a URL denotes an actor, and an HTTP request a message. Seen in a certain light, even Unix processes are actor-like (when they’re single-threaded and communicate only through file descriptors, not shared memory). The model can be used as a structuring principle for a system architecture even in languages like C or C++ that have no hope of enforcing any of the invariants of the model.

Rest of the talk

For the rest of the talk, I’m going to cover the classic actor model using Agha’s presentations as a guide; then I’ll compare it to E, a communicating event-loop actor language, and to Erlang, a process actor language.

Classic Actor Model

The original 1973 actor paper by Hewitt, Bishop and Steiger, published at the International Joint Conference on Artificial Intelligence, is incredibly far out!

It’s a position paper that lays out a broad and colourful research vision. It’s packed with amazing ideas.

The heart of it is that Actors are proposed as a universal programming language formalism ideally suited to building artificial intelligence.

The goal really was A.I., and actors and programming languages were a means to that end.

It claims that actors bring great benefits in a huge range of areas:

  • foundations of semantics
  • logic
  • knowledge-based programming
  • intentions (software contracts)
  • study of expressiveness of programming languages
  • teaching of computation
  • extensible, modular programming
  • privacy and protection
  • synchronization constructs
  • resource management
  • structured programming
  • computer architecture

(It’s amazingly “full-stack”: computer architecture!?)

In each of these areas, you can see what they were going for. In some, actors have definitely been useful; in others, the results have been much more modest.

In the mid-to-late 70s, Hewitt and his students Irene Greif, Henry Baker, and Will Clinger developed a lot of the basic theory of the actor model, inspired originally by SIMULA and Smalltalk-71. Irene Greif developed the first operational semantics for it as her dissertation work and Will Clinger developed a denotational semantics for actors.

In the late 70s through the 80s and beyond, Gul Agha made huge contributions to the actor theory. His dissertation was published as a book on actors in 1986 and has been very influential. He separated the actor model from its A.I. roots and started treating it as a more general programming model. In particular, in his 1990 Comm. ACM paper, he describes it as a foundation for CONCURRENT OBJECT-ORIENTED PROGRAMMING.

Agha’s formulation is based around the three core operations of the classic actor model:

  • create: constructs a new actor from a template and some parameters
  • send: delivers a message, asynchronously, to a named actor
  • become: within an actor, replaces its behaviour (its state & interface)

The classic actor model is a UNIFORM ACTOR MODEL, that is, everything is an actor. Compare to uniform object models, where everything is an object. By the mid-90s, that very strict uniformity had fallen out of favour, and people often worked with two-layer languages: a functional, object-oriented, or imperative core language, with the actor-model constructs added on top of the base language.

I’m going to give a simplified, somewhat informal semantics based on his 1997 work with Mason, Smith and Talcott. I’m going to drop a lot of details that aren’t relevant here so this really will be simplified.

e  :=  λx.e  |  e e  |  x  | (e,e) | l | ... atoms, if, bool, primitive operations ...
    |  create e
    |  send e e
    |  become e

l  labels, PIDs; we'll use them like symbols here

and we imagine convenience syntax

e1;e2              to stand for   (λdummy.e2) e1
let x = e1 in e2   to stand for   (λx.e2) e1

match e with p -> e, p -> e, ...
    to stand for matching implemented with if, predicates, etc.

We forbid programs from containing literal process IDs.

v  :=  λx.e  |  (v,v)  |  l

R  :=  []  |  R e  |  v R  |  (R,e)  |  (v,R)
    |  create R
    |  send R e  |  send v R
    |  become R

Configurations are a pair of a set of actors and a multiset of messages:

m  :=  l <-- v

A  :=  (v)l  |  [e]l

C  :=  < A ... | m ... >

The normal lambda-calculus-like reductions apply, like beta:

    < ... [R[(λx.e) v]]l ... | ... >
--> < ... [R[e{v/x}  ]]l ... | ... >

Plus some new interesting ones that are actor specific:

    < ... (v)l ...    | ... l <-- v' ... >
--> < ... [v v']l ... | ...          ... >

    < ... [R[create v]]l       ... | ... >
--> < ... [R[l'      ]]l (v)l' ... | ... >   where l' fresh

    < ... [R[send l' v]]l ... | ... >
--> < ... [R[l'       ]]l ... | ... l' <-- v >

    < ... [R[become v]]l       ... | ... >
--> < ... [R[nil     ]]l' (v)l ... | ... >   where l' fresh

Whole programs e are started with

< [e]a | >

where a is an arbitrary label.

Here’s an example - a mutable cell.

Cell = λcontents . λmessage . match message with
                                (get, k) --> become (Cell contents);
                                             send k contents
                                (put, v) --> become (Cell v)

Notice that when it gets a get message, it first performs a become in order to quickly return to ready state to handle more messages. The remainder of the code then runs alongside the ready actor. Actions after a become can’t directly affect the state of the actor anymore, so even though we have what looks like multiple concurrent executions of the actor, there’s no sharing, and so access to the state is still serialized, as needed for the isolated turn principle.

< [let c = create (Cell 0) in
   send c (get, create (λv . send c (put, v + 1)))]a | >

< [let c = l1 in
   send c (get, create (λv . send c (put, v + 1)))]a
  (Cell 0)l1 | >

< [send l1 (get, create (λv . send l1 (put, v + 1)))]a
  (Cell 0)l1 | >

< [send l1 (get, l2)]a
  (Cell 0)l1
  (λv . send l1 (put, v + 1))l2 | >

< [l1]a
  (Cell 0)l1
  (λv . send l1 (put, v + 1))l2 | l1 <-- (get, l2) >

< [l1]a
  [Cell 0 (get, l2)]l1
  (λv . send l1 (put, v + 1))l2 | >

< [l1]a
  [become (Cell 0); send l2 0]l1
  (λv . send l1 (put, v + 1))l2 | >

< [l1]a
  [send l2 0]l3
  (Cell 0)l1
  (λv . send l1 (put, v + 1))l2 | >

< [l1]a
  [l2]l3
  (Cell 0)l1
  (λv . send l1 (put, v + 1))l2 | l2 <-- 0 >

< [l1]a
  [l2]l3
  (Cell 0)l1
  [send l1 (put, 0 + 1)]l2 | >

< [l1]a
  [l2]l3
  (Cell 0)l1
  [l1]l2 | l1 <-- (put, 1) >

< [l1]a
  [l2]l3
  [Cell 0 (put, 1)]l1
  [l1]l2 | >

< [l1]a
  [l2]l3
  [become (Cell 1)]l1
  [l1]l2 | >

< [l1]a
  [l2]l3
  [nil]l4
  (Cell 1)l1
  [l1]l2 | >

(You could consider adding a garbage collection rule like

    < ... [v]l ... | ... >
--> < ...      ... | ... >

to discard the final value at the end of an activation.)

Because at this level all the continuations are explicit, you can encode patterns other than sequential control flow, such as fork-join.

For example, to start two long-running computations in parallel, and collect the answers in either order, multiplying them and sending the result to some actor k’, you could write

let k = create (λv1 . become (λv2 . send k' (v1 * v2))) in
send task1 (req1, k);
send task2 (req2, k)

Practically speaking, both Hewitt’s original actor language, PLASMA, and the language Agha uses for his examples in the 1990 paper, Rosette, have special syntax for ordinary RPC so the programmer needn’t manipulate continuations themselves.

So that covers the classic actor model. Create, send and become. Explicit use of actor addresses, and lots and lots of temporary actors for inter-actor RPC continuations.

Before I move on to Erlang: remember right at the beginning I told you the actor model was

  • asynchronous message passing, and
  • the isolated turn principle?

The isolated turn principle requires liveness - you’re not allowed to block indefinitely while responding to a message!

But here, we can break that rule:

let d = create (λc . send c c) in
send d d

Compare this with

letrec beh = (λc . become beh; send c c) in
let d = create beh in
send d d

These are both degenerate cases, but in different ways: the first becomes inert very quickly and the actor d is never returned to an idle/ready state, while the second spins uselessly forever.

Other errors we could make would be to fail to send an expected reply to a continuation.

One thing the semantics here rules out is interaction with other actors before doing a become; there’s no way to have a waiting continuation perform the become, because by that time you’re in a different actor. In this way, it sticks to the very letter of the Isolated Turn Principle, by forbidding “blocking”, but there are other kinds of things that can go wrong to destroy progress.

Even if we require our behaviour functions to be total, we can still get global nontermination.

So saying that we “don’t have deadlock” with the actor model is very much oversimplified, even at the level of simple formal models, let alone when it comes to realistic programming systems.

In practice, programmers often need “blocking” calls out to other actors before making a state update; with the classic actor model, this can be done, but it is done with a complicated encoding.
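
Here is a rough sketch of one such encoding, in the toy notation above. The actor threads its own address self by hand (the calculus gives us no self-reference), sends a query to some assumed helper actor, and sits in a Waiting state until the reply arrives; note that this sketch simply discards messages that arrive in the meantime, where a faithful encoding would have to buffer and re-deliver them:

Idle = λself . λmessage . match message with
  (update, k) --> send helper (query, create (λanswer .
                      send self (resume, answer, k)));
                  become (Waiting self)

Waiting = λself . λmessage . match message with
  (resume, answer, k) --> send k answer;
                          become (Idle self)
  other               --> become (Waiting self)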

Erlang: Actors for fault-tolerant systems

Erlang is an example of what De Koster et al. call a Process-based actor language.

It has its origins in telephony, where it has been used to build telephone switches with fabled “nine nines” of uptime. The research process that led to Erlang concentrated on high-availability, fault-tolerant software. The reasoning that led to such an actor-like system was, in a nutshell:

  • Programs have bugs. If part of the program crashes, it shouldn’t corrupt other parts. Hence strong isolation and shared-nothing message-passing.

  • If part of the program crashes, another part has to take up the slack, one way or another. So we need crash signalling so we can detect failures and take some action.

  • We can’t have all our eggs in one basket! If one machine fails at the hardware level, we need to have a nearby neighbour that can smoothly continue running. For that redundancy, we need distribution, which makes the shared-nothing message passing extra attractive as a unifying mechanism.

Erlang is a two-level system, with a functional language core equipped with imperative actions for asynchronous message send and spawning new processes, like Agha’s system.

The difference is that it lacks become, and instead has a construct called receive.

Erlang actors, called processes, are ultra-lightweight threads that each run sequentially from beginning to end as a little functional program. As a process runs, no explicit temporary continuation actors are created: any time it uses receive, it simply blocks until a matching message appears.

After some initialization steps, these programs typically enter a message loop. For example, here’s a mutable cell:

mainloop(Contents) ->
  receive
    {get, K} -> K ! Contents,
                mainloop(Contents);
    {put, V} -> mainloop(V)
  end.

A client program might be

Cell = spawn(fun () -> mainloop(0) end),
Cell ! {get, self()},
receive
  V -> ...
end.

Instead of using become, the program performs a tail call back to mainloop, so that re-entering receive is the last thing it does.

Because receive is a statement like any other, Erlang processes can use it to enter substates:

mainloop(Service) ->
  receive
    {req, K} -> Service ! {subreq, self()},
                receive
                  {subreply, Answer} -> K ! {reply, Answer},
                                        mainloop(Service)
                end
  end.

While the process is blocked on the inner receive, it only processes messages matching the patterns in that inner receive, and it isn’t until it does the tail call back to mainloop that it starts waiting for req messages again. In the meantime, non-matched messages queue up waiting to be received later.

This is called “selective receive” and it is difficult to reason about. It doesn’t quite violate the letter of the Isolated Turn Principle, but it comes close. (become can be used in a similar way.)

The goal underlying “selective receive”, namely changing the set of messages one responds to and temporarily ignoring others, is important for the way people think about actor systems, and a lot of research has been done on different ways of selectively enabling message handlers. See Agha’s 1990 paper for pointers toward this research.

One unique feature that Erlang brings to the table is crash signalling. The jargon is “links” and “monitors”. Processes can ask the system to send them a message if a monitored process exits. That way, they can perform RPC (sketched just after this list) by

  • monitoring the server process
  • sending the request
  • if a reply message arrives, unmonitor the server process and continue
  • if an exit message arrives, the service has crashed; take some corrective action.
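
Here’s a hedged sketch of that recipe in Erlang. The {req, ...} and {reply, ...} message shapes are invented for illustration, but the monitor, demonitor and 'DOWN' machinery is the real mechanism:

rpc(Server, Request) ->
    Ref = erlang:monitor(process, Server),
    Server ! {req, Request, self(), Ref},
    receive
        {reply, Ref, Answer} ->
            erlang:demonitor(Ref, [flush]),   % unmonitor; flush any late 'DOWN'
            {ok, Answer};
        {'DOWN', Ref, process, Server, Reason} ->
            {error, Reason}                   % the service crashed
    end.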

This general idea of being able to monitor the status of some other process was one of the seeds of my own research and my language Syndicate.

So while the classic actor model had create/send/become as primitives, Erlang has spawn/send/receive, and actors are processes rather than event-handler functions. The programmer still manipulates references to actors/processes directly, but there are far fewer explicit temporary actors created compared to the “classic model”; the ordinary continuations of Erlang’s functional fragment take on those duties.

E: actors for secure cooperation

The last language I want to show you, E, is an example of what De Koster et al. call a Communicating Event-Loop language.

E looks and feels much more like a traditional object-oriented language to the programmer than either of the variations we’ve seen so far.

def makeCell (var contents) {
    def getter {
        to get() { return contents }
    }
    def setter {
        to set(newContents) {
            contents := newContents
        }
    }
    return [getter, setter]
}

The mutable cell in E is interestingly different: it yields two values, one for reading the cell and one for updating it. E focuses on security and securability, and encourages the programmer to hand out objects that have the minimum possible authority needed to get the job done. Here, we can safely pass around references to getter without risking unauthorized update of the cell.

E uses the term “vat” to describe the concept closest to the traditional actor. A vat has not only a mailbox, like an actor, but also a call stack, and a heap of local objects. As we’ll see, E programmers don’t manipulate references to vats directly. Instead, they pass around references to objects in a vat’s heap.

This is interesting because in the actor model messages are addressed to a particular actor, but here we’re seemingly handing out references to something finer grained: to individual objects within an actor or vat.

This is the first sign that E, while it uses the basic create/send/become-style model at its core, doesn’t expose that model directly to the programmer. It layers a special E-specific protocol on top, and only lets the programmer use the features of that upper layer protocol.

There are two kinds of references available: near refs, which definitely point to objects in the local vat, and far refs, which may point to objects in a different vat, perhaps on another machine.

To go with these two kinds of refs, there are two kinds of method calls: immediate calls, and eventual calls.

receiver.method(arg, …)

receiver <- method(arg, …)

It is an error to use an immediate call on a ref to an object in a different vat, because it blocks during the current turn while the answer is computed. It’s OK to use eventual calls on any ref at all, though: it causes a message to be queued in the target vat (which might be our own), and a promise is immediately returned to the caller.

The promise starts off unresolved. Later, when the target vat has computed and sent a reply, the promise will become resolved. A nifty trick is that even an unresolved promise is useful: you can pipeline them. For example,

def r1 := x <- a()
def r2 := y <- b()
def r3 := r1 <- c(r2)

would block and perform multiple network round trips in a traditional simple RPC system; in E, there is a protocol layered on top of raw message sending that discusses promise creation, resolution and use. This protocol allows the system to send messages like

"Send a() to x, and name the resulting promise r1"
"Send b() to y, and name the resulting promise r2"
"When r1 is known, send c(r2) to it"

Crucial here is that the protocol, the language of discourse between actors, allows the expression of concepts including the notion of a send to happen at a future time, to a currently-unknown recipient.

The protocol and the E vat runtimes work together to make sure that messages get to where they need to go efficiently, even in the face of multiple layers of forwarding.

Each turn of an E vat involves taking one message off the message queue, and dispatching it to the local object it denotes. Immediate calls push stack frames on the stack as usual for object-oriented programming languages; eventual calls push messages onto the message queue. Execution continues until the stack is empty again, at which point the turn concludes and the next turn starts.

One interesting problem with using promises to represent cross-vat interactions is how to do control flow. Say we had

def r3 := r1 <- c(r2)  // from earlier
if (r3) {
  myvar := a();
} else {
  myvar := b();
}

By the time we need to make a decision which way to go, the promise r3 may not yet be resolved. E handles this by making it an error to depend immediately on the value of a promise for control flow; instead, the programmer uses a when expression to install a handler for the event of the resolution of the promise:

when (r3) -> {
  if (r3) {
    myvar := a();
  } else {
    myvar := b();
  }
}

The test of r3 and the call to a() or b() and the update to myvar are delayed until r3 has resolved to some value.

This looks like it violates the Isolated Turn Principle! It seems like we now have some kind of interleaving. But what’s going on under the covers is that promise resolution is done with an incoming message, queued as usual, and when that message’s turn comes round, the when clause will run.

Just like with classic actors and with Erlang, managing these multiple-stage, stateful interactions in response to some incoming message is generally difficult to do. It’s a question of finding a balance between the Isolated Turn Principle, and its commitment to availability, and encoding the necessary state transitions without risking inconsistency or deadlock.

Turning to failure signalling, E’s vats are not just the units of concurrency but also the units of partial failure. An uncaught exception within a vat destroys the whole vat and invalidates all remote references to objects within it.

While in Erlang, processes are directly notified of failures of other processes as a whole, E can be more fine-grained. In E, the programmer has a convenient value that represents the outcome of a transaction: the promise. When a vat fails, or a network problem arises, any promises depending on the vat or the network link are put into a special state: they become broken promises. Interacting with a broken promise causes the contained exception to be signalled; in this way, broken promises propagate failure along causal chains.
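
As a sketch of how that looks in code, the catch clause of a when expression is the natural place to observe a broken promise (the exact handler body here is my own invention):

when (r3) -> {
    # runs if r3 resolves successfully
    myvar := r3
} catch problem {
    # runs if r3 is broken: a vat failure or network partition
    # anywhere along the causal chain shows up here as problem
}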

If we look under the covers, E seems to have a “classic style” model using create/send/become to manage and communicate between whole vats, but these operations aren’t exposed to the programmer. The programmer instead manipulates two-part “far references” which denote a vat along with an object local to that vat. Local objects are created frequently, like in regular object-oriented languages, but vats are created much less frequently; and each vat’s stack takes on duties performed in “classic” actor models by temporary actors.

Conclusion

I’ve presented three different types of actor language: the classic actor model, roughly as formulated by Agha et al.; the process actor model, represented by Erlang; and the communicating event-loop model, represented by E.

The three models take different approaches to reconciling the need to have structured local data within each actor in addition to the more coarse-grained structure relating actors to each other.

The classic model makes everything an actor, with local data largely deemphasised; Erlang offers a traditional functional programming model for handling local data; and E offers a smooth integration between an imperative local OO model and an asynchronous, promise-based remote OO model.


Annotated Bibliography

Early work on actors

  • 1973. Carl Hewitt, Peter Bishop, and Richard Steiger, “A universal modular ACTOR formalism for artificial intelligence,” in Proc. International Joint Conference on Artificial Intelligence

    This paper is a position paper from which we can understand the motivation and intentions of the research into actors; it lays out a very broad and colourful research vision that touches on a huge range of areas from computer architecture up through programming language design to teaching of computer science and approaches to artificial intelligence.

    The paper presents a uniform actor model; compare and contrast with uniform object models offered by (some) OO languages. The original application of the model is given as PLANNER-like AI languages.

    The paper claims benefits of using the actor model in a huge range of areas:

    • foundations of semantics
    • logic
    • knowledge-based programming
    • intentions (contracts)
    • study of expressiveness
    • teaching of computation
    • extensible, modular programming
    • privacy and protection
    • synchronization constructs
    • resource management
    • structured programming
    • computer architecture

    The paper sketches the idea of a contract (called an “intention”) for ensuring that invariants of actors (such as protocol conformance) are maintained; there seems to be a connection to modern work on “monitors” and Session Types. The authors write:

    The intention is the CONTRACT that the actor has with the outside world.

    Everything is super meta! Actors can have intentions! Intentions are actors! Intentions can have intentions! The paper presents the beginnings of a reflective metamodel for actors. Every actor has a scheduler and an “intention”, and may have monitors, a first-class environment, and a “banker”.

    The paper draws an explicit connection to capabilities (in the security sense); Mark S. Miller at http://erights.org/history/actors.html says of the Actor work that it included “prescient” statements about Actor semantics being “a basis for confinement, years before Norm Hardy and the Key Logic guys”, and remarks that “the Key Logic guys were unaware of Actors and the locality laws at the time, [but] were working from the same intuitions.”

    There are some great slogans scattered throughout, such as “Control flow and data flow are inseparable”, and “Global state considered harmful”.

    The paper does eventually turn to a more nuts-and-bolts description of a predecessor language to PLASMA, which is more fully described in Hewitt 1976.

    When it comes to formal reasoning about actor systems, the authors here define a partial order - PRECEDES - that captures some notion of causal connection. Later, the paper makes an excursion into epistemic modal reasoning.

    Aside: the paper discusses continuations; Reynolds 1993 has the concept of continuations as firmly part of the discourse by 1973, having been rediscovered in a few different contexts in 1970-71 after van Wijngaarden’s 1964 initial description of the idea. See J. C. Reynolds, “The discoveries of continuations,” LISP Symb. Comput., vol. 6, no. 3–4, pp. 233–247, 1993.

    In the “acknowledgements” section, we see:

    Alan Kay whose FLEX and SMALL TALK machines have influenced our work. Alan emphasized the crucial importance of using intentional definitions of data structures and of passing messages to them. This paper explores the consequences of generalizing the message mechanism of SMALL TALK and SIMULA-67; the port mechanism of Krutar, Balzer, and Mitchell; and the previous CALL statement of PLANNER-71 to a universal communications mechanism.

  • 1975. Irene Greif. PhD dissertation, MIT EECS.

    Specification language for actors; per Baker an “operational semantics”. Discusses “continuations”.

  • 1976. C. Hewitt, “Viewing Control Structures as Patterns of Passing Messages,” AIM-410

    AI focus; actors as “agents” in the AI sense; recursive decomposition: “each of the experts can be viewed as a society that can be further decomposed in the same way until the primitive actors of the system are reached.”

    We are investigating the nature of the communication mechanisms […] and the conventions of discourse

    More concretely, examines “how actor message passing can be used to understand control structures as patterns of passing messages”.

    […] there is no way to decompose an actor into its parts. An actor is defined by its behavior; not by its physical representation!

    Discusses PLASMA (“PLAnner-like System Modeled on Actors”), and gives a fairly detailed description of the language in the appendix. Develops “event diagrams”.

    Presents very Schemely factorial implementations in recursive and iterative (tail-recursive accumulator-passing) styles. During discussion of the iterative factorial implementation, Hewitt remarks that n is not closed over by the loop actor; it is “not an acquaintance” of loop. Is this the beginning of the line of thinking that led to Clinger’s “safe-for-space” work?

    Everything is an actor, but some of the actors are treated in an awfully structural way: the trees, for example, in section V.4 on Generators:

    (non-terminal:
      (non-terminal: (terminal: A) (terminal: B))
      (terminal: C))
    

Things written with this “keyword:” notation look like structures. Their reflections are actors, but as structures, they are subject to pattern-matching; I am unsure how the duality here was thought of by the principals at the time, but see the remarks regarding “packages” in the appendix.

  • 1977. C. Hewitt and H. Baker, “Actors and Continuous Functionals,” MIT A.I. Memo 436A

    Some “laws” for communicating processes; “plausible restrictions on the histories of computations that are physically realizable.” Inspired by physical intuition, discusses the history of a computation in terms of a partial order of events, rather than a sequence.

    The actor model is a formalization of these ideas [of Simula/Smalltalk/CLU-like active data processing messages] that is independent of any particular programming language.

“Instances of Simula and Smalltalk classes and CLU clusters are actors”, but they are non-concurrent. The actor model is broader, including concurrent message-passing behaviour.

    Laws about (essentially) lexical scope. Laws about histories (finitude of activation chains; total ordering of messages inbound at an actor; etc.), including four different ordering relations. “Laws of locality” are what Miller was referring to on that erights.org page I mentioned above; very close to the capability laws governing confinement.

    Steps toward denotational semantics of actors.

  • 1977. Russell Atkinson and Carl Hewitt, “Synchronization in actor systems,” Proc. 4th ACM SIGACT-SIGPLAN Symp. Princ. Program. Lang., pp. 267–280.

    Introduces the concept of “serializers”, a “generalization and improvement of the monitor mechanism of Brinch-Hansen and Hoare”. Builds on Greif’s work.

  • 1981. William D. Clinger, “Foundations of Actor Semantics,” PhD dissertation, MIT.

Actors as Concurrent Object-Oriented Programming

  • 1986. Gul Agha, “Actors: A Model of Concurrent Computation in Distributed Systems,” MIT Press. His dissertation, published as a book.

  • 1990. G. Agha, “Concurrent Object-Oriented Programming,” Commun. ACM, vol. 33, no. 9, pp. 125–141

    Agha’s work recast the early “classic actor model” work in terms of concurrent object-oriented programming. Here, he defines actors as “self-contained, interactive, independent components of a computing system that communicate by asynchronous message passing”, and gives the basic actor primitives as create, send to, and become. Examples are given in the actor language Rosette.

    This paper gives an overview and summary of many of the important facets of research on actors that had been done at the time, including brief discussion of: nondeterminism and fairness; patterns of coordination beyond simple request/reply such as transactions; visualization, monitoring and debugging; resource management in cases of extremely high levels of potential concurrency; and reflection.

    The customer-passing style supported by actors is the concurrent generalization of continuation-passing style supported in sequential languages such as Scheme. In case of sequential systems, the object must have completed processing a communication before it can process another communication. By contrast, in concurrent systems it is possible to process the next communication as soon as the replacement behavior for an object is known.

    Note that the sequential-seeming portions of the language are defined in terms of asynchronous message passing and construction of explicit continuation actors.

  • 1997. G. A. Agha, I. A. Mason, S. F. Smith, and C. L. Talcott, “A Foundation for Actor Computation,” J. Funct. Program., vol. 7, no. 1, pp. 1–72

    Long paper that carefully and fully develops an operational semantics for a concrete actor language based on lambda-calculus. Discusses various equivalences and laws. An excellent starting point if you’re looking to build on a modern approach to operational semantics for actors.

Erlang: Actors from requirements for fault-tolerance / high-availability

  • 2003. J. Armstrong, “Making reliable distributed systems in the presence of software errors,” Royal Institute of Technology, Stockholm

    A good overview of Erlang: the language, its design intent, and the underlying philosophy. Includes an evaluation of the language design.

E: Actors from requirements for secure interaction

  • 2005. M. S. Miller, E. D. Tribble, and J. Shapiro, “Concurrency Among Strangers,” in Proc. Int. Symp. on Trustworthy Global Computing, pp. 195–229.

    As I summarised this paper for a seminar class on distributed systems: “The authors present E, a language designed to help programmers manage coordination of concurrent activities in a setting of distributed, mutually-suspicious objects. The design features of E allow programmers to take control over concerns relevant to distributed systems, without immediately losing the benefits of ordinary OO programming.”

    E is a canonical example of the “communicating event loops” approach to Actor languages, per the taxonomy of the survey paper listed below. It combines message-passing and isolation in an interesting way with ordinary object-oriented programming, giving a two-level language structure that has an OO flavour.

    The paper does a great job of explaining the difficulties that arise when writing concurrent programs in traditional models, thereby motivating the actor model in general and the features of E in particular as a way of making the programmer’s job easier.

Taxonomy of actors

  • 2016. J. De Koster, T. Van Cutsem, and W. De Meuter, “43 Years of Actors: A Taxonomy of Actor Models and Their Key Properties,” Software Languages Lab, Vrije Universiteit Brussel, VUB-SOFT-TR-16-11

    A very recent survey paper that offers a taxonomy for classifying actor-style languages. At its broadest, actor languages are placed in one of four groups:

    • The Classic Actor Model (create, send, become)
    • Active Objects (OO with a thread per object; copying of passive data between objects)
    • Processes (raw Erlang; receive, spawn, send)
    • Communicating Event-Loops (E; near and far references; eventual references; batching)

    Different kinds of “futures” or “promises” also appear in many of these variations in order to integrate asynchronous message reception with otherwise-sequential programming.

A Universal Modular ACTOR Formalism for Artificial Intelligence

I have scanned the original IJCAI 1973 proceedings print version of

Carl Hewitt, Peter Bishop, and Richard Steiger, “A universal modular ACTOR formalism for artificial intelligence,” in Proc. International Joint Conference on Artificial Intelligence, 1973, pp. 235–245.

The scanned PDF is available to download for anyone interested in seeing the paper as it looked in print.


As part of my research, I’ve been digging through the literature. I couldn’t find a good PDF version of the above, which is one of the original papers on Actors. All I could find were links to this OCRed version, which doesn’t show me the actual paper, just an ugly rendition of the OCRed text. Even the IJCAI official online archive only has the OCRed version.

So I went looking for the original print IJCAI 1973 proceedings in the university library. Lo and behold, it was there!

So I have scanned it and uploaded it here.

Patching gnome-flashback 3.20 to work with GNOME 3.21

I’m running Debian testing, with gnome-flashback. At present, it has installed gnome-flashback 3.20.2-1 alongside gnome-settings-daemon 3.21.90-2.

Symptoms

By default, there are some problems:

  • The “Displays” section of the config tool says only “Could not get screen information”. This prevents GUI access to a lot of functionality, including arrangement of multiple monitors.

  • The brightness adjustment keys do not work.

The root of the problem is that the DBus interface org.gnome.Mutter.DisplayConfig changed between GNOME 3.20 and GNOME 3.21; the GetResources method produces an additional result in 3.21 that is not present in 3.20’s version of the same interface.[1]

If you see messages similar to the following from stderr of gnome-settings-daemon at startup, you are suffering from this problem too:

(gnome-settings-daemon:7149): power-plugin-WARNING **: Could not create GnomeRRScreen: Method 'GetResources' returned type '(ua(uxiiiiiuaua{sv})a(uxiausauaua{sv})a(uxuud)ii)', but expected '(ua(uxiiiiiuaua{sv})a(uxiausauaua{sv})a(uxuudu)ii)'
(gnome-settings-daemon:7149): wacom-plugin-WARNING **: Failed to create GnomeRRScreen: Method 'GetResources' returned type '(ua(uxiiiiiuaua{sv})a(uxiausauaua{sv})a(uxuud)ii)', but expected '(ua(uxiiiiiuaua{sv})a(uxiausauaua{sv})a(uxuudu)ii)'
(gnome-settings-daemon:7149): color-plugin-WARNING **: failed to get screens: Method 'GetResources' returned type '(ua(uxiiiiiuaua{sv})a(uxiausauaua{sv})a(uxuud)ii)', but expected '(ua(uxiiiiiuaua{sv})a(uxiausauaua{sv})a(uxuudu)ii)'
(gnome-settings-daemon:7149): common-plugin-WARNING **: Failed to construct RR screen: Method 'GetResources' returned type '(ua(uxiiiiiuaua{sv})a(uxiausauaua{sv})a(uxuud)ii)', but expected '(ua(uxiiiiiuaua{sv})a(uxiausauaua{sv})a(uxuudu)ii)'
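
If you want to check the interface your system actually exposes, something like the following should print the GetResources signature (the bus name and object path here are my best guess at mutter’s, so treat this as a sketch):

gdbus introspect --session \
      --dest org.gnome.Mutter.DisplayConfig \
      --object-path /org/gnome/Mutter/DisplayConfig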

Fixing the problem

To fix the problem, I patched and rebuilt gnome-flashback with these commands:

sudo apt-get build-dep gnome-flashback
apt-get source gnome-flashback
wget "https://eighty-twenty.org/files/gnome-flashback-hack-20160918.patch"
patch -p0 < gnome-flashback-hack-20160918.patch
(cd gnome-flashback-3.20.2; fakeroot ./debian/rules binary)

After these steps, you should have a new gnome-flashback binary, gnome-flashback-3.20.2/gnome-flashback/gnome-flashback. You can now move the existing one out of the way and install it:

sudo mv /usr/bin/gnome-flashback /usr/bin/gnome-flashback-AS-INSTALLED
sudo cp gnome-flashback-3.20.2/gnome-flashback/gnome-flashback /usr/bin
sudo chown root:root /usr/bin/gnome-flashback

Now restart both gnome-flashback and gnome-settings-daemon. (You should arrange for gnome-flashback to be started after gnome-settings-daemon.)

You should no longer have the stderr output complaining about the RR screen DBus signatures, and both the “Displays” section of the config tool and the brightness keys should work.

  [1] It’s a shame that there’s no apparent protocol versioning in the GNOME system, or at least none that applies here. If DBus had a serialization language a little more like protobufs, it’d be possible to smoothly add fields in a backwards-compatible way.

Extensible Double Dispatch for Racket

Both Racket’s object system and its (separate!) generic interface system offer single-dispatch object-oriented programming: the choice of method body to execute depends on the type of just one of the arguments given to the method, usually the first one.

In some cases, the first thing that a method will do is to decide what to do next based on the type of a second argument. This is called double dispatch, and it has a long history in object-oriented programming languages—at least as far back as the original Smalltalk.

As an example, consider implementing addition for classes representing numbers. A different method body would be needed for each pair of representations of numbers.

I stumbled across the need for something like this when implementing Operational Transformation (OT) for Racket. The macro operation-transformer in that code base is almost the double-dispatch macro from this post; the difference is that for operational transformation, the method concerned yields two results, and if the arguments are switched on the way in, they must be switched on the way out.

Basic Double Dispatch

Here’s a basic double-dispatch macro:

(define-syntax-rule (double-dispatch op (arg1 arg2) [pred? body ...] ...)
  (cond
    [(pred? arg2) body ...] ...
    [else (error 'op "Unimplemented for ~v and ~v" arg1 arg2)]))

It assumes that it will be used in a method where dispatch has already been done on arg1, and that the next step is to inspect arg2. It applies the pred?s in sequence until one of them answers true, and then evaluates the corresponding body. If none of the pred?s hold, it signals an error.
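
For instance, used directly in a hypothetical numeric method, where dispatch on the first argument has already established that x is an integer, and integer-add-float and integer-add-rational are assumed helpers:

(define (integer-add x y)
  (double-dispatch integer-add (x y)
    [exact-integer? (+ x y)]
    ;; flonum? must precede rational?: Racket flonums like 1.5 satisfy rational? too
    [flonum? (integer-add-float x y)]
    [rational? (integer-add-rational x y)]))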

It’s often convenient to use it inside a class definition or generic interface implementation with the following macros, which simply define op to delegate immediately to double-dispatch. The first is to be used with Racket’s object system, where the first argument is bound implicitly to this and where predicates should use Racket’s is-a? function. The second is to be used with Racket’s generic interface system, where both arguments are explicitly specified and predicates are more general.

(define-syntax-rule (define/public/double-dispatch (op arg2) [class body ...] ...)
  (define/public (op arg2)
    (double-dispatch (lambda (a b) (send a op b)) (this arg2)
      [(lambda (v) (is-a? v class)) body ...] ...)))

(define-syntax-rule (define/double-dispatch (op arg1 arg2) [pred? body ...] ...)
  (define (op arg1 arg2)
    (double-dispatch op (arg1 arg2) [pred? body ...] ...)))

Commutative Double Dispatch

For commutative operations like addition, it’s common to see the same code appear for adding an A to a B as for adding a B to an A.

If it can’t find support for B within A’s method, the next macro automatically flips the arguments and tries again, to see whether B’s method has support for A. That way, code for combining B with A need only be supplied in one place. It uses a parameter to keep track of whether it’s currently trying out a flipped pair of arguments.

(define trying-flipped? (make-parameter #f))

(define-syntax-rule (commutative-double-dispatch op (arg1 arg2) [pred? body ...] ...)
  (cond
    [(pred? arg2) (parameterize ((trying-flipped? #f)) body ...)] ...
    [(trying-flipped?) (error 'op "Unimplemented for ~v and ~v" arg2 arg1)]
    [else (parameterize ((trying-flipped? #t)) (op arg2 arg1))]))

Writing a simple wrapper works well for using commutative-double-dispatch in a class definition:

(define-syntax-rule (define/public/commutative-double-dispatch (op arg2) [class body ...] ...)
  (define/public (op arg2)
    (commutative-double-dispatch (lambda (a b) (send a op b)) (this arg2)
      [(lambda (v) (is-a? v class)) body ...] ...)))

but a wrapper for use with the generic interface system needs to take care not to accidentally shadow the outer dispatch mechanism. This macro uses define/generic to make op* an alias of op that always does a full dispatch on its arguments:

(define-syntax-rule (define/commutative-double-dispatch (op arg1 arg2) [pred? body ...] ...)
  (begin (define/generic op* op)
         (define (op arg1 arg2)
           (commutative-double-dispatch op* (arg1 arg2) [pred? body ...] ...))))

Examples

Let’s see the system in operation! First, using Racket’s object system, and then using Racket’s generic interfaces.

Example Scenario

We will first define two types of value, foo and bar, each responding to a single doubly-dispatched method, operator, which produces results according to the following table:

     | foo | bar |
-----|-----|-----|
 foo | foo | bar |
 bar | bar | foo |
-----|-----|-----|

Then, we’ll extend the system to include a third type, zot, which yields a zot when combined with any of the three types.

Double Dispatch with Classes

(define foo%
  (class object%
    (super-new)
    (define/public/commutative-double-dispatch (operator other)
      [foo% (new foo%)]
      [bar% (new bar%)])))

(define bar%
  (class object%
    (super-new)
    (define/public/commutative-double-dispatch (operator other)
      [bar% (new foo%)])))

Some tests show that this is doing what we expect. Notice that we get the right result when the first operand is a bar% and the second a foo%, even though bar% only explicitly specified the case for when the second operand is also a bar%. This shows the automatic argument-flipping in operation.

(module+ test
  (require rackunit)
  (check-true (is-a? (send (new foo%) operator (new foo%)) foo%))
  (check-true (is-a? (send (new foo%) operator (new bar%)) bar%))
  (check-true (is-a? (send (new bar%) operator (new foo%)) bar%))
  (check-true (is-a? (send (new bar%) operator (new bar%)) foo%)))

Double Dispatch with Generic Interfaces

(define-generics operand
  (operator operand other))

(struct foo ()
  #:methods gen:operand
  [(define/commutative-double-dispatch (operator this other)
     [foo? (foo)]
     [bar? (bar)])])

(struct bar ()
  #:methods gen:operand
  [(define/commutative-double-dispatch (operator this other)
     [bar? (foo)])])

The tests show the same argument-flipping behavior as for the object system above.

(module+ test
  (require rackunit)
  (check-true (foo? (operator (foo) (foo))))
  (check-true (bar? (operator (foo) (bar))))
  (check-true (bar? (operator (bar) (foo))))
  (check-true (foo? (operator (bar) (bar)))))

Extending The Example

First, we implement and test class zot%

(define zot%
  (class object%
    (super-new)
    (define/public/commutative-double-dispatch (operator other)
      [foo% (new zot%)]
      [bar% (new zot%)]
      [zot% (new zot%)])))

(module+ test
  (require rackunit)
  (check-true (is-a? (send (new foo%) operator (new zot%)) zot%))
  (check-true (is-a? (send (new bar%) operator (new zot%)) zot%))
  (check-true (is-a? (send (new zot%) operator (new foo%)) zot%))
  (check-true (is-a? (send (new zot%) operator (new bar%)) zot%))
  (check-true (is-a? (send (new zot%) operator (new zot%)) zot%)))

… and then implement and test struct zot.

(struct zot ()
  #:methods gen:operand
  [(define/commutative-double-dispatch (operator this other)
     [foo? (zot)]
     [bar? (zot)]
     [zot? (zot)])])

(module+ test
  (require rackunit)
  (check-true (zot? (operator (foo) (zot))))
  (check-true (zot? (operator (bar) (zot))))
  (check-true (zot? (operator (zot) (foo))))
  (check-true (zot? (operator (zot) (bar))))
  (check-true (zot? (operator (zot) (zot)))))

Conclusion

Double dispatch is a useful addition to the object-oriented programmer’s toolkit, and can be straightforwardly added to both of Racket’s object systems using its macro facility.


This post was written as executable, literate Racket. You can download the program from here.

Our operating systems are incorrectly factored

Unix famously represents all content as byte sequences. This was a great step forward, offering a way of representing arbitrary information without forcing an interpretation on it.

However, it is not enough. Unix is an incomplete design. Supporting only byte sequences, and nothing else, has caused wasted effort, code duplication, and bugs.

Text is an obvious example of the problem

Consider just one data type: text. It has a zillion character sets and encoding schemes. Each application must decide, on its own, which encoding of which character set is being used for a given file.

When applications get this wrong, both obvious bugs like Mojibake and subtler flaws like the IDN homograph attack result.
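
The failure mode is easy to demonstrate. This tiny snippet (Python purely for illustration; any language that hands applications raw bytes has the same problem) decodes one byte sequence under two different guesses:

data = "déjà vu".encode("utf-8")   # some application writes UTF-8 bytes

print(data.decode("utf-8"))    # déjà vu    -- the intended text
print(data.decode("latin-1"))  # dÃ©jÃ  vu  -- mojibake: same bytes, wrong guess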

Massive duplication of code and effort

Lack of system support for text yields massive code duplication. Rather than having a system-wide, comprehensive model of text representation, encoding, display, input, collation, and comparison, each programming language and application must fend for itself.

Because it is difficult and time consuming to properly handle text, developers tend to skimp on text support. Where a weakness is identified, it must be repaired in each application individually rather than at the system level. This is itself difficult and time consuming.

Inconsistent treatment

Finally, dealing only with byte sequences precludes consistent user interface design.

Consider a recent enhancement to Thunderbird, landing in version 45.0. Previously, when exporting an address book as CSV, only the “system character set” was supported. Now, the user must specify which character set and encoding is to be used:

[Illustration from the Thunderbird 45.0 release notes: the CSV export dialog now asks the user to choose a character set and encoding]

The user cannot simply work with a file containing text; they must make a decision about which encoding to use. Woe betide them if they choose incorrectly.

A consistent approach would separate the question of text encodings entirely from application-specific UIs. System UI for transcoding would exist in one place, common to all applications.

User frustration

A tiny fraction of the frustration this kind of thing causes is recorded in Thunderbird’s bug 117236.

Notice that it took fourteen years to be fixed.

Ubiquitous problem

This Thunderbird change is just one example. Each and every application suffers the same problems, and must have its text support repaired, upgraded, and enhanced independently.

It’s not only a Unix problem. Windows and OS X are just as bad. They, too, offer no higher-level model than byte sequences to their applications. Even Android is a missed opportunity.

Learn from the past

Systems like Smalltalk, for all their flaws, offer a higher-level model to programmers and users alike. In many cases, the user never need learn about text encoding variations.

Instead, the system can separate text from its encoding.

Where encoding is relevant to the user, there can be a single place to work with it. Contrast this with the many places encoding leaks into application UIs today, just one of which is shown in the Thunderbird example above.

It’s not just text

Text is just one example. Pictures are another. You can probably think of more.

Our operating systems do not support sharing of high-level abstractions of data between documents or applications.

An operating system with a mechanism for doing so would take a great burden off both programmers and users.

Let’s start thinking about what a better modern operating system would look like.

Javascript syntax extensions using Ohm

Programming language designers often need to experiment with syntax for their new language features. When it comes to Javascript, we rely on language preprocessors, since altering a Javascript engine directly is out of the question if we want our experiments to escape the lab.

Ohm is “a library and domain-specific language for parsing and pattern matching.” In this post, I’m going to use it as a Javascript language preprocessor. I’ll build a simple compiler for ES5 extended with a new kind of for loop, using Ohm and the ES5 grammar included with it.

All the code in this post is available in a GitHub repo.

Our toy extension: “for five”

We will add a “for five” statement to ES5, which will let us write programs like this:

for five as x { console.log("We have had", x, "iterations so far"); }

The new construct simply runs its body five times in a row, binding a loop variable in the body. Running the program above through our compiler produces:

for (var x = 0; x < 5; x++) { console.log("We have had", x, "iterations so far"); }

Extending the ES5 grammar

We write our extension to the ES5 grammar in a new file for5.ohm as follows:

For5 <: ES5 {
  IterationStatement += for five as identifier Statement  -- for5_named

  five = "five" ~identifierPart
  as = "as" ~identifierPart

  keyword += five
           | as
}

Let’s take this a piece at a time. First of all, the declaration For5 <: ES5 tells Ohm that the new grammar should be called For5, and that it inherits from a grammar called ES5. Next,

  IterationStatement += for five as identifier Statement  -- for5_named

extends the existing ES5 grammar’s IterationStatement nonterminal with a new production that will be called IterationStatement_for5_named.

Finally, we define two new nonterminals as convenient shorthands for parsing the two new keywords, and augment the existing keyword definition:

five = "five" ~identifierPart
as = "as" ~identifierPart

keyword += five
         | as

There are three interesting points to be made about keywords:

  • First of all, making something a keyword rules it out as an identifier. In our extended language, writing var five = 5 is a syntax error. Define new keywords with care!

  • We make sure to reject input tokens that have our new keywords as a prefix by defining them as their literal text followed by anything that cannot be parsed as a part of an identifier, ~identifierPart. That way, the compiler doesn’t get confused by, say, fivetimes or five_more, which remain valid identifiers.

  • By making sure to extend keyword, tooling such as syntax highlighters can automatically take advantage of our extension, if they are given our extended grammar.

Translating source code using the new grammar

First, require the modules we need: the ohm-js NPM module and its included ES5 grammar, plus Node’s fs and path for reading our grammar definition:

var fs = require('fs');
var path = require('path');
var ohm = require('ohm-js');
var ES5 = require('ohm-js/examples/ecmascript/es5.js');

Next, load our extended grammar from its definition in for5.ohm, and compile it. When we compile the grammar, we pass in a namespace that makes the ES5 grammar available under the name our grammar expects, ES5:

var grammarSource = fs.readFileSync(path.join(__dirname, 'for5.ohm')).toString();
var grammar = ohm.grammar(grammarSource, { ES5: ES5.grammar });
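
With the extended grammar compiled, we can already verify the keyword behaviour discussed above (results shown in comments):

// "five" is now a keyword, so it no longer parses as an identifier...
grammar.match('var five = 5;').succeeded();       //=> false
// ...but identifiers that merely begin with "five" are unaffected.
grammar.match('var fivetimes = 5;').succeeded();  //=> true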

Finally, we define the translation from our extended language to plain ES5. To do this, we extend a semantic function, modifiedSource, adding a method for each new production rule. Ohm automatically uses defaults for rules not mentioned in our extension.

var semantics = grammar.extendSemantics(ES5.semantics);
semantics.extendAttribute('modifiedSource', {
  IterationStatement_for5_named: function(_for, _five, _as, id, body) {
    var c = id.asES5;
    return 'for (var '+c+' = 0; '+c+' < 5; '+c+'++) ' + body.asES5;
  }
});

Each parameter to the IterationStatement_for5_named method is a syntax tree node corresponding positionally to one of the tokens in the definition of the parsing rule. Accessing the asES5 attribute of a syntax tree node computes its translated source code. This is done with recursive calls to the modifiedSource attribute where required.

Our compiler is, at this point, complete. To use it, we need code to feed it input and print the results:

function compileExtendedSource(inputSource) {
  var parseResult = grammar.match(inputSource);
  // On a parse failure, report the error; the function then returns false.
  if (parseResult.failed()) console.error(parseResult.message);
  return parseResult.succeeded() && semantics(parseResult).asES5;
}

That’s it!

> compileExtendedSource("for five as x { console.log(x); }");
'for (var x = 0; x < 5; x++) { console.log(x); }'

Discussion

This style of syntactic extension is quite coarse-grained: we must translate whole compilation units at once, and must specify our extensions separately from the code making use of them. There is no way of adding a local syntax extension scoped precisely to a block of code that needs it (known to Schemers as let-syntax). For Javascript, sweet.js offers a more Schemely style of syntax extension than the one explored in this post.

Mention of sweet.js leads me to the thorny topic of hygiene. Ohm is a parsing toolkit. It lets you define new concrete syntax, but doesn’t know anything about scope, or about how you intend to use identifiers. After all, it can be used for languages that don’t necessarily even have identifiers. So when we write extensions in the style I’ve presented here, we must write our translations carefully to avoid unwanted capture of identifiers. This is a tradeoff: the broad generality of Ohm’s parsing in exchange for less automation in identifier handling.
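
To make the hazard concrete, imagine a hypothetical repeat-statement extension, not part of the compiler above, whose translation invents a counter variable of its own:

// Hypothetical extension:   repeat 5 { total += step; }
// A naive translation might produce:
for (var i = 0; i < 5; i++) { total += step; }
// If the surrounding user code already uses a variable named "i", the
// expansion silently captures it. With Ohm we must avoid such collisions
// by hand, for example by generating fresh, unlikely-to-clash names.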

Ohm’s extensible grammars let us extend any part of the language, not just statements or expressions. We can specify new comment syntax, new string syntax, new formal argument list syntax, and so on. Because Ohm is based on parsing expression grammars, it offers scannerless parsing. Altering or extending a language’s lexical syntax is just as easy as altering its grammar.
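
As an illustration, here is an untested sketch of a grammar adding shell-style line comments; it assumes the bundled ES5 grammar’s comment, lineTerminator and sourceCharacter rules:

HashComments <: ES5 {
  comment += hashComment

  hashComment = "#" (~lineTerminator sourceCharacter)*
}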

Conclusion

We have defined an Ohm-based compiler for an extension to ES5 syntax, using only a few lines of code. Each new production rule requires, roughly, one line of grammar definition, and a short method defining its translation into simpler constructs.

You can try out this little compiler, and maybe experiment with your own extensions, by cloning its Github repo.

Racket alists vs. hashtables: which is faster when?

Joe Marshall recently measured alist vs hashtable lookup time for MIT/GNU Scheme. I thought I’d do the same for Racket.

I used a 64-bit build of Racket, version 6.3.0.3.

I ran some very quick-and-dirty informal experiments on my Acer C720 laptop, which is running a 64-bit Debian system and has 2GB RAM and a two-core Celeron 2955U at 1.40GHz.

I measured approximate alist and hash table performance for two kinds of key (a sketch of the measurement style follows the list):

  • fixnum keys, using eq? as the lookup predicate (so assq, hasheq) (fixnum program)
  • length-64 byte vector keys, using equal? as the lookup predicate (so assoc, hash) (byte-vector program)
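
The actual measurement programs are the ones linked above; purely to illustrate the style of measurement, a minimal Racket sketch of the fixnum/eq? case might look something like this:

#lang racket
;; Sketch only: time probes of an n-key alist vs. an n-key hasheq table.
(define n 14)
(define keys (build-list n values))
(define al (for/list ([k keys]) (cons k k)))
(define ht (for/hasheq ([k keys]) (values k k)))
(time (for ([i (in-range 1000000)]) (assq (random n) al)))
(time (for ([i (in-range 1000000)]) (hash-ref ht (random n) #f)))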

Each chart below has four data series:

  • probe/alist, average time taken to search for a key that is present in an alist
  • probe/alist/missing, average time taken to search for a key that is not present in an alist
  • probe/hasheq or probe/hash, average time taken to search for a key that is present in a hash table
  • probe/hasheq/missing or probe/hash/missing, average time taken to search for a key that is not present in a hash table

Fixnum keys

Here are average timings for fixnum keys:

Results for fixnums and assq/hasheq

Things to note:

  • Here, alists are always faster than hasheq tables when there are 7 keys or fewer, whether the key is present or not.

  • When the key is present in the table, alists are on average faster up to around 14 keys.

Length-64 byte vector keys

Here are average timings for length-64 random byte vector keys:

Results for length-64 byte vectors and assoc/hash

Things to note:

  • Here, alists are always faster than hash tables when there are 4 keys or fewer, whether the key is present or not.

  • When the key is present in the table, alists are on average faster up to around 16 keys.

Conclusions

Alists will be faster when you have very few keys: for eq?, around seven or fewer; for equal?, perhaps four or fewer, depending on the size of each key.

If you expect with high probability that a given key will be present in the table, the picture changes slightly: alists may then be faster on average up to around fifteen keys. The insertion order of your keys will naturally matter a great deal in this case, since a key near the front of an alist is found much sooner than one near the end.

Resources

The programs I wrote:

The data I collected:

gRPC.io is interestingly different from CORBA

gRPC looks very interesting. From a quick browse of the site, it looks like it differs from CORBA primarily in that

  • It is first-order.
  • It eschews exceptions.
  • It supports streaming requests and/or responses.

(That’s setting aside differences between protobufs and GIOP.)

It’s the first point that I think is likely to be the big win. Much of the complexity I saw with CORBA was to do with trying to pass object (i.e. service endpoint) references back and forth in a transparent way. Drop that misfeature, and everything from the IDL to the protocol to the frameworks to the error handling to the implementations of services themselves will be much simpler.

The way streaming is integrated is interesting too. There’s a clear separation between (finite) data, including lists/arrays, in the protobuf message-definition language, and (possibly non-finite) behavior in the gRPC service-definition language. Streams, being coinductive, fit naturally in the service-definition part.
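
Concretely, the separation shows up in the IDL. Here’s an illustrative sketch (my own, not taken from the gRPC documentation):

syntax = "proto3";

// Finite data: protobuf messages.
message WatchRequest { string symbol = 1; }
message Tick { string symbol = 1; double price = 2; }

// Possibly non-finite behaviour: the service definition, where a single
// request may yield an unbounded stream of responses.
service Ticker {
  rpc Watch(WatchRequest) returns (stream Tick);
}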

Saving Images Despite Unfriendly Websites

From time to time, one stumbles across a website that has gone out of its way to make it difficult to save images displayed on the page. A common tactic is to disable right-click for the elements concerned.

The following bookmarklet is a simple workaround.
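
A minimal sketch of such a bookmarklet, assuming all we want is for a click on any image to navigate the browser to that image’s URL, might look like this:

javascript:(function () {
  // Intercept clicks on every image and go straight to the image URL.
  [].forEach.call(document.querySelectorAll('img'), function (img) {
    img.addEventListener('click', function (e) {
      e.preventDefault();
      e.stopPropagation();
      window.location.href = img.src;
    }, true);
  });
})();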

To use it,

  1. Drag it to your bookmarks bar.
  2. When you’re on a page that won’t let you right-click to save images, click the bookmarklet.
  3. Now, click on any image in the page: your browser will go directly to that image.
  4. From there, you can use the browser’s “save” functionality directly.