Cross-compiling Rust for Alpine Linux on Raspberry Pi

I’ve just figured out (with the assistance of pages like japaric’s rust-cross guide) how to cross-build a small Rust project for FRμITOS, an Alpine Linux-based distribution (using musl libc) for Raspberry Pi, from a Debian x86_64 host.

In the end, it was pleasingly straightforward.

  1. Install the Debian binutils-arm-linux-gnueabihf package.
  2. Use rustup to install support for the armv7-unknown-linux-musleabihf target.
  3. Set the CARGO_TARGET_ARMV7_UNKNOWN_LINUX_MUSLEABIHF_LINKER environment variable appropriately.
  4. Run cargo build --target=armv7-unknown-linux-musleabihf.

That’s it!

In shell script form:

sudo apt install binutils-arm-linux-gnueabihf
rustup target add armv7-unknown-linux-musleabihf
export CARGO_TARGET_ARMV7_UNKNOWN_LINUX_MUSLEABIHF_LINKER=arm-linux-gnueabihf-ld
cargo build --target=armv7-unknown-linux-musleabihf
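The linker setting can equivalently be checked into the project’s .cargo/config.toml, avoiding the environment variable. A sketch, assuming the linker shipped by the Debian binutils package above:

```toml
# .cargo/config.toml
# Assumes arm-linux-gnueabihf-ld from Debian's binutils-arm-linux-gnueabihf.
[target.armv7-unknown-linux-musleabihf]
linker = "arm-linux-gnueabihf-ld"
```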

#lang something: An alternative syntax for Racket

Recent discussions (e.g. 1, 2) about potentially revising Racket syntax for Racket2 have reminded me I never properly announced #lang something, an experiment from back in 2016.

The main idea is S-expressions, but with usually-implicit parentheses and support for prefix/infix/postfix operators. Indentation for grouping is explicitly represented in the S-expression returned from the reader.

  • (+) keeps a semi-structured input format: reader yields ordinary syntax
  • (+) macros Just Work
  • (+) you can do “if … then … else …”: an example
  • (+) you can do “… where …”: an example
  • (-) uses indentation (though it doesn’t have to; see for example this module)
  • (-) the function syntax isn’t function(arg, ...)

(More links at the bottom of this post.)

In addition to the reader, #lang something provides a small selection of special forms that take advantage of the new syntax, and #lang something/shell adds Unix-shell-like behaviour and a few associated utilities.

This program:

#lang something
for { x: 1 .. 10 }
  def y: x + 1
  printf "x ~a y ~a\n" x y

… reads as this S-expression:

(module something-module something/base
   (for (block (x (block (1 .. 10))))
        (block (def y (block (x + 1)))
               (printf "x ~a y ~a\n" x y))))

The #%rewrite-body macro, together with its companion #%rewrite-infix, consults an operator table, extendable via the def-operator macro, to rewrite infix syntax into standard prefix S-expressions using a Pratt parser.

The block syntax has many different interpretations. It has a macro binding that turns it into a Racket match-lambda*, and it is used as literal syntax as input to other macro definitions.

For example, here’s one possible implementation of that for syntax:

#lang something

require
  for-syntax something/base
  prefix-in base_ racket/base

def-syntax for stx
  syntax-case stx (block)
    _ (block (v (block exp)) ...) (block body ...)
      (syntax (base_for ((v exp) ...) body ...))

def-operator .. 10 nonassoc in-range

Notice how the block S-expressions are rewritten into a normal S-expression compatible with the underlying for from racket/base.

Generally, all of these forms are equivalent

x y z          x y z:          x y z { a; b }
  a              a
  b              b

and they are read as

(x y z (block a b))

and are then made available to the normal macro-expansion process (which involves a new infix-rewriting semi-phase).

A colon at the end of an indentation-sensitive line, indicating a following suite, is optional. Indentation-sensitivity is disabled inside parentheses, but can be re-enabled within a parenthesised expression by ending a line with a colon:

a b (c d:
  e
  f)

= (a b (c d (block e f)))

a b (c d
  e
  f)

= (a b (c d e f))

Conversely, long lines may be split up and logically continued over subsequent physical lines with a trailing \:

a b c \
  d \
  e

= (a b c d e)

Semicolons may also appear in vertically-laid-out suites; these two are equivalent:

x y z
  a
  b; c
  d

x y z { a; b; c; d }

Suites may begin on the same line as their colon. Any indented subsequent lines become children of the portion after the colon, rather than the portion before.

This example:

x y z: a b
  c d
  e

reads as

(x y z (block (a b (block (c d) e))))

Square brackets are syntactic sugar for a #%seq macro:

[a; b; c; d e f]    →        (#%seq a b c (d e f))

[                   →        (#%seq a (b (block c)) (d e f))
  a
  b: c
  d e f
]

Forms starting with block in expression context expand into match-lambda* like this:

block
  pat1a pat1b
    exp1a
    exp1b
  pat2a
    exp2

=

(match-lambda*
  [(list pat1a pat1b) exp1a exp1b]
  [(list pat2a) exp2])

The map* function exported from something/base differs from map in racket/base in that it takes its arguments in the opposite order, permitting maps to be written

map* [1; 2; 3; 4]
  item:
    item + 1

map* [1; 2; 3; 4]
  item: item + 1

map* [1; 2; 3; 4]: item: item + 1

map* [1; 2; 3; 4] { item: item + 1 }

A nice consequence of all of the above is that curried functions have an interesting appearance:

def curried x:: y:: z:
  [x; y; z]

require rackunit
check-equal? (((curried 1) 2) 3) [1; 2; 3]

A few more links:

Another small example “shell script”, which I used while working on the module today:

#lang something/shell

def in-hashlang? re :: source-filename
  zero? (cat source-filename | grep -Eq (format "^#lang.*(~a).*" re))

(find "." -iname "*.rkt"
  | port->lines
  |> filter (in-hashlang? "something")
  |> ag -F "parameterize")

How I keep notes

A few years back, I decided to try to put a little bit of structure on how I kept records of such things as

  • ideas and thoughts I have, related to my work
  • procedures I performed in setting up machines and software
  • phone calls I’d had for arranging real-life things
  • important identifiers and numbers and so on
  • meeting notes
  • to-do lists

I’ve ended up with a loose collection of journal-like documents, each with a different feel.

  • A master org-mode document, which is always open in a buffer in my Emacs session.

    It contains

    • a plain-text time-stamped journal of thoughts, ideas, workings-through of proofs and formalisms, book and paper reviews, talk notes, meeting notes, phone call notes, etc.
    • detailed step-by-step records of how I’ve installed and configured various pieces of software for specific tasks; “how-tos”, essentially, for when I have to do the same kind of thing again
    • detailed step-by-step records of how I’ve set up various servers
    • to-do lists

    I use org-mode sections, with one top-level heading called Journal containing the bulk of the entries.

    To-do items each get a top-level heading of their own and an org-mode TODO tag. Completed to-do items are demoted to second-level and moved into a “done items” top-level heading.

    When I’m doing paid work, I use org-mode’s timesheet-management commands org-clock-in and org-clock-out under a Timesheet top-level heading; a couple of simple scripts help me prepare summaries for invoicing.

    Here’s a sample of just a few headings - each entry in the real document also has a bunch of text contained within it.

    * Journal
      :VISIBILITY: children
    ** (2011-05-08 14:31:13 tonyg) Vertical interpretation vs Horizontal interpretation :STUDY:
    ** (2011-05-19 00:00:00 tonyg) Inter-network routing: should be *tunnelling* not *chaining*
    ** (2011-05-19 11:49:40 tonyg) Memory Pool System - API for virtual machine interface? :STUDY:
    ** (2011-05-19 18:30:50 tonyg) Scripting languages integrate with system languages. :STUDY:
    ** (2011-05-23 00:00:00 tonyg) Origins of Credit-based Flow Control and Acks, functional this time
    ** (2011-05-24 10:13:33 tonyg) Message buffer size should be determined by arrival-time jitter  :IDEA:
    ** (2011-05-24 10:14:08 tonyg) Fine-grain scheduling in a distributed system using PLLs :IDEA:
    * TODO Build an imperative workalike os.rkt that uses real threads  :PROJECT:
    * TODO Upload ~/src/racket-kademlia
    * TODO racket-rc4: RSA, DSA, DH, ECDH, ECDSA, etc
    * TODO Thank-you notes for xmas gifts!
    * Old, done to-do items
    ** DONE Configure Flashbake and git-syncing for Uni
    ** DONE Write presentation
    ** DONE View mini-DVD and write up evaluation

    Some of the journal entries have gone on to become blog posts here.

    The document lives in a git repository, and a git commit is executed by cron every five minutes. I have checkouts of the repository on the two or three machines I use most frequently. I make extensive use of a variant of Emacs’ time-stamp feature:

    (require 'time-stamp)   ; time-stamp-string lives in time-stamp.el

    (defun stamp ()
      (interactive)         ; needed so the command is available via M-x
      (insert (time-stamp-string)))

    … which inserts text like 2019-05-19 13:15:43 tonyg when I run M-x stamp.
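    In crontab form, the five-minute commit can be as simple as this sketch (~/notes is a hypothetical stand-in for wherever the checkout actually lives):

    ```shell
    # Every five minutes: stage everything, commit only if something changed.
    # "$HOME/notes" is a hypothetical path; substitute your own checkout.
    */5 * * * * cd "$HOME/notes" && git add -A && (git diff --cached --quiet || git commit -q -m autosave)
    ```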

  • A second such document, very similar, that also includes slightly more in the way of private or personal information, that I don’t have checkouts of on as many machines.

  • A paper journal, which I use when I need the different style of thinking it affords. The freedom to draw sketches and rough lines connecting thoughts is useful from time to time. I use a nice fountain pen that a friend gave me, even though it smudges horribly because I’m left-handed.

    A journal entry

  • A Google doc that is a project-specific journal for one of my main projects, Syndicate. It’s a Google doc because I share it with various collaborators, and because I occasionally want to put pictures in the file. Otherwise it’s similar in feel to the main org-mode document I use: a record of thoughts, ideas and workings-through.

In every case, the documents function as append-only logs of thoughts and workings-through. I draw on them when digesting and summarising in later write-ups and in writing actual software. They’re mostly useful as an audit log, for finding out what I was thinking or what I did in the past, or for trying to reconstruct an argument.

The electronic documents are searchable, of course. It’s inconvenient that they are in different places, and I sometimes don’t know which of the small number of documents has the item I am looking for, but it’s not too unmanageable.

The paper documents are a different story. I intend eventually to photograph each page of my paper journal as a kind of backup, though I haven’t yet started doing so. Indexing that archive will require a bit of work too.

Actors for Smalltalk

About two years ago I wrote an Erlang-style Actors implementation for Squeak Smalltalk, based on subclassing Process, using Smalltalk’s Message class to represent inter-actor messages, and using Promise for RPC. Roughly a year ago, I finally dusted it off, documented it, and released it.

It draws on my experience of Erlang programming in a few ways: it has links and monitors for between-actor failure signalling; it has library actors representing sockets; it has a simple tracing facility. There’s crude and no doubt heavily problematic support for basic Morphic interaction.

Installation instructions, comprehensive documentation and tutorials can be found at

It’s by no means as ambitious as other Smalltalk Actor systems: it only deals with single-image in-image messaging between actors, and doesn’t have the E-style ability to refer to objects within a vat. Instead it follows Erlang in having references denote actors (i.e. vats, roughly), rather than anything more fine-grained.

Next steps could be:

  • a Workspace that was actor aware, i.e. each Workspace an actor.
  • better Supervisors.
  • tools for visualizing the current constellation of actors, perhaps based on Ned Konz’s Connectors?
  • an ActorEventTrace subclass that is able to draw message interaction diagrams as a Morph.
  • a screencast of building an IRC client maybe?

To give it a try:

  1. Download and run a recent version of Squeak. For example, I just downloaded

  2. Update your image. Click the Squeak icon in the top left of the window, and choose “Update Squeak”, or execute the following in a workspace:

    MCMcmUpdater updateFromServer
  3. Install the Actors project into your Squeak:

    (Installer squeaksource project: 'Actors') install: 'ConfigurationOfActors'
  4. Follow the documentation, which includes tutorials of various levels of complexity and a detailed user manual.

It could in principle work in Pharo, as well. I did try to port it to Pharo, but found two main obstacles. First, Pharo doesn’t have an integrated Promise implementation; the Actors project makes heavy use of promises. Second, I couldn’t get Pharo’s sockets to behave as reliably as Squeak’s. I don’t remember details, but I’d be very pleased if a Pharo expert were to have a try at porting the code across.

The project has recently been discussed on HN; please feel free to get in touch if you have any questions, comments or feedback.

Why learn Smalltalk?

Smalltalk is three things in one. It is a language; it embodies a language design idea; and it is an operating system.

Learning Smalltalk gives you three things:

  1. An understanding of Smalltalk, the language. This is least essential. It’s also the kind of thing you can pick up in a single 30-minute session.1 The language is tiny and simple.

  2. An understanding of the design idea, uniformly object-oriented programming. This is crucial and will connect with other styles of programming including Actor-based and pure-functional. This is something you will never get from Java, which is not a uniformly object-oriented language.

  3. An understanding of a completely different and (these days) unusual way of designing an operating system. An object-oriented operating system, to boot. The IDE, the debugger, and the other tooling all integrate to give a level of manageability in many ways far superior to e.g. Unix or Windows.

After a while, you may find yourself discovering subtle things about the interplay between the three aspects of Smalltalk that you can then apply in your own designs. For example, Smalltalk is a “dynamic” language, but the way classes and methods are arranged is very rigid, giving a lot of static structure to the system organisation. This static structure is what unlocks the powerful tooling, and is what simplifies the programmer’s mental model to make the system understandable. Comparable OO and almost-OO languages, such as Python, have a more “dynamic” system organisation, making it harder to write tools for automatically manipulating Python systems and making it harder for programmers to understand how Python code, data, state, and processes come together to form a running program image.

No royal road to optics: Functor, Applicative, Monoidal

Yesterday, I became inspired to learn about the kinds of generalized, pure-functional getters and setters that the FP community calls “optics”: Lenses, Prisms, Folds, Traversals, and so on.

I quickly realised that I needed to brush up on some of the core typeclasses involved, namely Functor and Applicative. I spent the rest of the day reading up on the laws and definitions involved.

Brent Yorgey’s Typeclassopedia and Miran Lipovača’s Learn You A Haskell were both great starting points: the latter for the basics, some good examples, and lots of context-setting and motivation, and the former for completeness, concision, and connections to the other foundational ideas in the Haskell standard library.

Applicative laws vs. Monoidal laws

The laws for Applicative functors are complicated, and have obscure, largely unhelpful1 names like “homomorphism” and “interchange”.

The laws for Monoidal functors are simple and familiar: just left and right identity, and an associativity law.

However, each set of laws implies the other!

The two formulations are equivalent. I spent most of yesterday evening proving this, as suggested by the exercises in the Typeclassopedia section dealing with Monoidal functors.
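For concreteness, here is a sketch of the Monoidal interface as the Typeclassopedia presents it (it is not a standard-library class, and I’ve renamed its (**) to (**.) to avoid the clash with exponentiation; the helper names below are mine), together with the interdefinitions the equivalence rests on:

```haskell
-- Monoidal is not a standard library class; this sketch follows the
-- Typeclassopedia's presentation, with (**) renamed to (**.).
class Functor f => Monoidal f where
  unit  :: f ()
  (**.) :: f a -> f b -> f (a, b)

-- Lists form a Monoidal functor via the Cartesian product.
instance Monoidal [] where
  unit      = [()]
  xs **. ys = [ (x, y) | x <- xs, y <- ys ]

-- Each interface is definable from the other:
unitA :: Applicative f => f ()
unitA = pure ()

pairA :: Applicative f => f a -> f b -> f (a, b)
pairA fa fb = (,) <$> fa <*> fb

apM :: Monoidal f => f (a -> b) -> f a -> f b
apM ff fa = fmap (\(g, a) -> g a) (ff **. fa)

pureM :: Monoidal f => a -> f a
pureM a = fmap (const a) unit
```

These interdefinitions are exactly the bridge one crosses repeatedly when proving the two sets of laws equivalent.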

The main thing I learned from this exercise is that the Applicative laws are self-contained. They imply the Functor laws, the Monoidal laws, and the “naturality” law all by themselves. By contrast, the Monoidal laws are not self-contained: in order to derive the Applicative laws, you need to appeal to the Functor and “naturality” laws from time to time.

On the one hand, the Applicative laws are ugly, but self-contained. On the other hand, the Monoidal laws are cleanly factored and separate from the Functor laws. But on the gripping hand, programming with Applicative is reported to be much less of a pain than programming with Monoidal. I believe it!

All this suggests that choosing the Applicative formulation over the Monoidal one makes sense when designing a language’s standard library.

After all, proving that an instance of Applicative respects the laws can be done either in terms of the Applicative laws or the Functor+Monoidal laws, meaning that not only does the programmer have the better API, but the implementor has a free choice of “proof API” when discharging their proof obligations.

A handy lemma

Another result of working through the proofs was discovery of this handy lemma:

pure f <*> u <*> pure h = pure (flip f) <*> pure h <*> u

It’s intuitively obvious, and something that I think many users of Applicative will rely on, but it took a bit of thinking to realise I needed it, and a little more thinking to get the proof to go through.
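The lemma is easy to spot-check at a concrete instance. Here it is at lists (f, u, and h are arbitrary choices for illustration; any lawful Applicative would do):

```haskell
-- Arbitrary concrete choices, for illustration only:
f :: Int -> Int -> Int
f = (-)

u :: [Int]
u = [1, 2, 3]

h :: Int
h = 10

lhs, rhs :: [Int]
lhs = pure f <*> u <*> pure h           -- [-9,-8,-7]
rhs = pure (flip f) <*> pure h <*> u    -- [-9,-8,-7]
```

Both sides evaluate to the same list, as the lemma requires; the general proof, of course, needs the laws rather than an example.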

(Exercise: Imagine replacing pure h with v. Why does the new statement not hold?)

Onward to Foldable, Traversable, Lens and Prism

I think today I’ll read the rest of the Typeclassopedia, with particular focus on Foldable and Traversable. If I have time, I’ll get started on Lens.

  1. The names make sense in light of their category-theoretic origins. I’m not well-versed in this stuff, so for me they are only vaguely suggestive. 

Mendeley encrypts your data so you can't access it

Recently, the Mendeley bibliography-management and paper-archive tool started encrypting users’ databases of bibliographic data.

Each such database includes annotations and is the product of a user’s careful, hard manual work.

Previously, users were able to access their valuable data using standard tools. Now, that access has been removed.

I have used Mendeley for almost a decade, now, and I value my database and my data very highly.

Therefore, I looked into what exactly Mendeley has done to prevent me accessing my data, and figured out an awkward manual technique for rescuing enough of it to let me switch to a new bibliography management system.


The reasons for the change are unclear, but it’s possible that this user-hostile encryption was put into the product in retaliation for a competing product, Zotero, adding support for importing records from a Mendeley database. Zotero discusses the problem here, stating that

“Mendeley 1.19 and later have begun encrypting the local database, making it unreadable by Zotero and other standard database tools.”

and the project has also tweeted that

“The latest version of Mendeley prevents you from getting all of your data out of the app.”

Certainly, the old technique for accessing the database no longer works:

$ sqlite3 <your-database>.sqlite
SQLite version 3.23.1 2018-04-10 17:39:29
Enter ".help" for usage hints.
sqlite> .tables
Error: file is not a database

The new encryption definitely prevents me from getting all of my data from the database I have so carefully constructed.

(Update: Mendeley appears to be claiming that they are required by the GDPR to encrypt local user files! This claim seems bizarre, not just to me but to many, many other people.)

The encryption Mendeley is using

Mendeley is using the SQLite Encryption Extension (“SEE”) with a hidden key.

The SEE library is closed-source and very proprietary. Its API is documented, but its on-disk structures are not (publicly) documented, and the source code is not publicly available. Applications using SEE are required to make it impossible to access SEE functionality from outside the application.

For a while, I had hoped that they might have used the open-source sqlcipher library. Unfortunately, they did not. Even more unfortunately, while tooling exists for sqlcipher, and while SEE and sqlcipher are API-compatible, the resulting files are neither binary- nor tooling-compatible.

It is not clear where the key material is coming from—whether the key is per-database, or whether it is a hard-coded key that is part of the Mendeley application binary, or whether it is a key that is retrieved from the Mendeley online API.

The question is ultimately moot, given the closedness of the SEE format and library. Even if I had the key, there is no tooling for making use of it in any other context than the Mendeley application itself.

How to rescue your data

Here’s the technique I used to rescue my data for long enough to allow a Zotero import to work.

Unfortunately, it’s not easy. It works only on Linux (but see below if you are using a Mac), and relies on being able to run gdb to call an internal SQLite SEE routine, sqlite3_rekey_v2.

In the instructions below, modify file paths etc. to line up with your own Mendeley and Unix usernames etc.

  1. Quit Mendeley. You don’t want it running while you’re fiddling with its database.

  2. BACK UP YOUR DATABASE. You will want to put things back the way they were after you’re done so you can use Mendeley again. THE REST OF THIS PROCEDURE MODIFIES THE DATABASE FILE ON DISK in ways that I do not know whether the Mendeley application can handle.

    cd ~/.local/share/data/Mendeley\ Ltd./Mendeley\ Desktop/
    cp <your-database>.sqlite ~/backup-encrypted.sqlite
  3. Start Mendeley under the control of gdb.

    mendeleydesktop --debug
  4. Add a breakpoint that captures the moment a SQLite database is opened.

    (gdb) b sqlite3_open_v2
  5. Start the program.

    (gdb) run
  6. The program will stop at the breakpoint several times. Keep continuing the program until the string pointed to by $rdi names the file you backed up in the step above.

    Thread 1 "mendeleydesktop" hit Breakpoint 1, 0x000000000101b1b0 in sqlite3_open_v2 ()
    (gdb) x/s $rdi
    0x1dca928:	"/home/tonyg/.local/share/data/Mendeley Ltd./Mendeley Desktop/Settings.sqlite"
    (gdb) c
    Thread 1 "mendeleydesktop" hit Breakpoint 1, 0x000000000101b1b0 in sqlite3_open_v2 ()
    (gdb) x/s $rdi
    0x1dcb318:	"/home/tonyg/.local/share/data/Mendeley Ltd./Mendeley Desktop/Settings.sqlite"
    (gdb) c

    (… repeats a few times …)

    Thread 1 "mendeleydesktop" hit Breakpoint 1, 0x000000000101b1b0 in sqlite3_open_v2 ()
    (gdb) x/s $rdi
    0x25f1818:	"/home/tonyg/.local/share/data/Mendeley Ltd./Mendeley Desktop/"
  7. Now, set a breakpoint for the moment the key is supplied to SEE. We don’t care about the key itself (for reasons discussed above), but we do care to find the moment just after sqlite3_key has returned.

    (gdb) b sqlite3_key
    Breakpoint 2 at 0x101b2c0
    (gdb) c
    Thread 1 "mendeleydesktop" hit Breakpoint 2, 0x000000000101b2c0 in sqlite3_key ()
    (gdb) info registers 
    rax            0x7fffffffc6b0	140737488340656
    rbx            0x25f0590	39781776
    rcx            0x7fffea9a0c40	140737129352256
    rdx            0x20	32
    rsi            0x260fd68	39910760
    rdi            0x25ef4e8	39777512
    rbp            0x7fffffffc730	0x7fffffffc730
    rsp            0x7fffffffc688	0x7fffffffc688
    r8             0xc1	193
    r9             0x7fffea9a0cc0	140737129352384
    r10            0x0	0
    r11            0x1	1
    r12            0x7fffffffc6b0	140737488340656
    r13            0x7fffffffc6a0	140737488340640
    r14            0x7fffffffc790	140737488340880
    r15            0x7fffffffc790	140737488340880
    rip            0x101b2c0	0x101b2c0 <sqlite3_key>
    eflags         0x202	[ IF ]
    cs             0x33	51
    ss             0x2b	43
    ds             0x0	0
    es             0x0	0
    fs             0x0	0
    gs             0x0	0
  8. Copy down the value of $rdi from the info registers output. It is the pointer to the open SQLite database handle. Then, finish execution of sqlite3_key.

    (gdb) fin
    Run till exit from #0  0x000000000101b2c0 in sqlite3_key ()
    0x0000000000f94e54 in SqliteDatabase::openInternal(QString const&, SqlDatabaseKey*) ()
  9. Use gdb’s ability to call C functions to rekey the database to the null key, thereby decrypting it in-place and allowing Zotero import to do its work.

    Use the value for $rdi you noted down in the previous step as the first argument to sqlite3_rekey_v2, and zero for the other three arguments.

    (gdb) p (int) sqlite3_rekey_v2(0x25ef4e8, 0, 0, 0)
    $1 = 0
  10. If you see $1 = 0 from the rekey command, all is well, and you may now use Zotero to import your Mendeley database. (Update: See below if you don’t see $1 = 0; for example, you might see $1 = 8.)

    While this is happening, leave gdb running and don’t touch it! DO NOT QUIT GDB OR RUN MENDELEY while the import is proceeding. Who knows what might happen to your carefully decrypted database if you do!

    In fact, before you start Zotero, you might like to copy your decrypted database to somewhere safe, so you don’t have to do this again:

    cd ~/.local/share/data/Mendeley\ Ltd./Mendeley\ Desktop/
    cp <your-database>.sqlite ~/backup-decrypted.sqlite
  11. Once the import is complete, quit gdb and terminate the associated partially-initialised Mendeley process.

    (gdb) quit
    A debugging session is active.
        Inferior 1 [process 32750] will be killed.
    Quit anyway? (y or n) y
    [322:322:0100/] Invalid node channel message
  12. Finally, restore your backed-up copy of the encrypted database, so that Mendeley will continue to run OK.

    cd ~/.local/share/data/Mendeley\ Ltd./Mendeley\ Desktop/
    cp ~/backup-encrypted.sqlite <your-database>.sqlite

You can now, if you like, in addition to using your new Zotero database as normal, access the raw contents of your decrypted Mendeley database using the standard SQLite tools, like you could in previous Mendeley versions:

$ sqlite3 ~/backup-decrypted.sqlite 
SQLite version 3.23.1 2018-04-10 17:39:29
Enter ".help" for usage hints.
sqlite> .tables
CanonicalDocuments       DocumentZotero           NotDuplicates          
DataCleaner              Documents                Profiles               
DocumentCanonicalIds     EventAttributes          RemoteDocumentNotes    
DocumentContributors     EventLog                 RemoteDocuments        
DocumentDetailsBase      FileHighlightRects       RemoteFileHighlights   
DocumentFields           FileHighlights           RemoteFileNotes        
DocumentFiles            FileNotes                RemoteFolders          
DocumentFolders          FileReferenceCountsView  Resources              
DocumentFoldersBase      FileViewStates           RunsSinceLastCleanup   
DocumentKeywords         Files                    SchemaVersion          
DocumentNotes            Folders                  Settings               
DocumentReferences       Groups                   Stats                  
DocumentTags             HtmlLocalStorage         SyncTokens             
DocumentUrls             ImportHistory            ZoteroLastSync         
DocumentVersion          LastReadStates         


No user should have to do this to access their data. I’m lucky I have the skills to do it at all.

I’m sad that Mendeley violated my trust this way, but glad I have an exit strategy now.

Update: what to do if you see $1 = 8

If your p sqlite3_rekey_v2(...) attempt fails, with (say) $1 = 8 as the outcome, then you may have been the victim of an unfortunate thread interleaving, or you might have caught a “spurious” opening of the database. It seems that the program sometimes opens the main database at least once in some odd way, before opening it properly for long-term use.

If you think it’s threading, you could try abandoning the procedure and restarting from the beginning: just quit gdb and restart the procedure from mendeleydesktop --debug.

To deal with the “spurious” openings, experiment to see if the program opens the main database a second time. Run the procedure all the way up to the p sqlite3_rekey_v2(...) step, but do not run sqlite3_rekey_v2. Instead, just type c to continue, returning to the step where you inspect each call to sqlite3_open_v2, waiting for one with $rdi pointing to a string with the right database filename. When you see it come round again, then try the sqlite3_rekey_v2 step. If you see $1 = 0 this time, you’re all set, and can proceed as described above for a successful call to sqlite3_rekey_v2.

If you’re still having problems with this procedure, do feel free to email me, and I’ll try to help if I can.

Update: Using a Mac

Steve Laskaridis reports success using this procedure on a Mac. He says that the necessary changes to the procedure above are:

  1. Use lldb instead of gdb;
  2. Use the register read command instead of info registers; and
  3. The database file directory is ~/Library/Application Support/Mendeley Desktop/.

He also published this tweet, which includes a screenshot of the procedure running on a Mac.

Squeak Smalltalk TiledMaps package

Today I released a thing I’ve been hacking on called TiledMaps.

It’s a package for Squeak Smalltalk. It can load and cache static, prerendered map tiles from a variety of sources including OpenStreetMaps, Bing Maps, and so on.

It includes a geocoder query service which maps free-text queries to regions of the planet’s surface. The service can be backed by the Nominatim service (associated with OpenStreetMaps), the Bing geocoder, the Google geocoder, and so on.

Selection of tilesets is independent of selection of geocoder, so you can mix and match.

The package includes a “slippy map” morph called TiledMapMorph, that allows interaction with a map using the mouse. It includes a few hooks for EToys, too, so EToys scripting of the map is possible.

To install it in your Squeak 5.3alpha image, first update to the latest trunk version:

MCMcmUpdater updateFromServer

Then load the TiledMaps package from SqueakSource:

(Installer repository: '')
    install: 'ConfigurationOfTiledMaps'

Once it has loaded, open a new TiledMapMorph with

TiledMapMorph new openInWorld

I’ve recorded a short (< 10 min) demo video showing the package in action:

PS. All the Bing services require an API key; you can get one of your own from the Bing Maps developer site. Included in the package are a few other tile sources and geocoders that need API keys as well - you’ll need to check the websites and terms & conditions for each service you want to use.

PPS. In particular, Google’s terms & conditions explicitly forbid downloading map tiles without going through their Javascript API. The package honours this restriction.

Lying to TCP makes it a best-effort streaming protocol

In this All Things Linguistic post, Gretchen and Lauren introduce a model dialogue:

G: “Would you like some coffee?”
L: “Coffee would keep me awake.”

This might mean Lauren wants coffee—or that she doesn’t want coffee! Her reply on its own is ambiguous. It must be interpreted in a wider context. The post digs into the dialogue more deeply.

In linguistics, the Cooperative Principle can be used to set up a frame within which utterances like these can be interpreted.

Network protocols are just the same. Utterances—packets sent along the wire—must be interpreted using a similar assumption of cooperation. And, just as in human conversation, an utterance can have a deeper, more ambiguous meaning than it seems on its face.

Acknowledgements in TCP

In TCP, the “ack” field contains a sequence number that is supposed to be used by a receiving peer to signal the sending peer that bytes up-to-but-not-including that sequence number have been safely received.

If it’s used that way, it makes TCP a reliable, in-order protocol.

Lying to the sending TCP

But imagine a receiver that didn’t very much care about reliability beyond that offered by the underlying transport. For example, a VOIP receiver, where delayed or lost voice packets are useless, and in fact harmful if carried over TCP because they delay packets travelling behind them, causing the whole conversation to get more and more delayed.

That receiver could lie. It could use the “ack” sequence number field to indicate which bytes it no longer cared about.

In effect, it would ignore lost packets, and use the “ack” field to tell the sender not to bother retransmitting them.1

That decision would make TCP a best-effort, in-order protocol.
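Concretely, the sender’s reaction to an ack is identical under both readings, which is what makes the trick work: it discards everything below the acked sequence number from its retransmission buffer, and has no way to tell why those bytes are no longer wanted. A sketch (the types and names are illustrative, not taken from any real TCP implementation):

```haskell
type SeqNo = Int

-- Bytes sent but not yet acknowledged, keyed by sequence number.
type RetransmitBuffer = [(SeqNo, Char)]

-- The sender's entire response to an ack carrying sequence number ackNo:
-- stop retransmitting everything below it. Whether those bytes were
-- "safely received" or merely "no longer cared about" is invisible here.
handleAck :: SeqNo -> RetransmitBuffer -> RetransmitBuffer
handleAck ackNo = filter (\(sn, _) -> sn >= ackNo)
```

An honest receiver and a lying one drive exactly the same code path in the sender.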

Is it really a lie?

Seen one way, this abuse of the “ack” field isn’t a lie. In both cases, the receiver doesn’t care about the bytes anymore: on the one hand, because they’ve been received successfully; on the other, because they’re irrelevant now.

It’s similar to the ambiguity about the coffee in the example above.

The surface meaning is clear. The deeper meaning depends on knowledge about the wider context that isn’t carried in the utterance itself.

We can assume cooperation without assuming motivation

The statement “Don’t send me bytes numbered lower than X” can be interpreted either as “I have received all bytes numbered lower than X” or “I no longer care about bytes numbered lower than X”. Only context can tell them apart.



PS. My dissertation proposes Syndicate, a design for new programming language features for concurrency. Syndicate incorporates ideas about conversation and cooperativity like those discussed here. (See, in particular, chapter 2 of the dissertation.) It’s still early days, but I’ve put up a resource page that has the dissertation itself, a video of the defense talk I gave, and a few other resources.

  1. Of course, I’m ignoring fiddly details like segment boundaries, interactions between this strategy and flow- and congestion-control, and so on. It’d take a lot of work to really do this! 

PhD Dissertation: Conversational Concurrency

I’ve put up a webpage that gathers together a number of resources related to my recently-completed PhD project:

  • The dissertation itself, in both PDF and online HTML;
  • A video recording of my dissertation defense talk;
  • The slides I used for my talk;
  • A source code snapshot of the Syndicate implementations and examples; and
  • The proof scripts accompanying the dissertation.

Syndicate is a programming language design taking a new approach to communication and coordination in a concurrent setting.

It can be seen as a hybrid between Actor-style message-passing and tuplespace-style shared-blackboard communication, with some characteristics of each plus a few unique ones of its own.

Click through for the abstract from the dissertation.