Monthly Archives: September 2011

A little while ago, I saw @jdanbrown tweet the following: “OH: ‘So I did a search for “node.js lock”, and then I was like, “I’m dumb”.’” Now that’s funny: everyone knows that JavaScript’s event-loop concurrency model means you don’t have the problems of threads with shared memory (locks, deadlocks, data races, and so on) that you have in languages like C or Java.

Except that’s not right. JavaScript’s concurrency model is cooperative, which means that you don’t get descheduled if you don’t want to be, but everything else about shared-memory concurrency is still there. As a demonstration, here’s some shared-memory concurrency in JS:

var x = 1;
setTimeout(function() { x = 2; }, 2);          // write x after ~2 ms
setTimeout(function() { console.log(x); }, 0); // read x after ~0 ms

What value of x gets logged? Well, that depends on when the two callbacks get inserted into the event queue (browsers clamp short timeouts, so 0 ms is not really 0 ms, and either order is possible), just like the semantics of a shared-memory program with threads.
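
To push the analogy further, here’s a sketch (mine, not from anything above) of a classic lost update: a read-modify-write split across an asynchronous boundary.

var counter = 0;
function increment() {
    var tmp = counter;         // read
    setTimeout(function() {
        counter = tmp + 1;     // write, based on a possibly-stale read
    }, 0);
}
increment();
increment();
// both callbacks saw counter === 0, so counter ends up 1, not 2:
// a lost update, exactly the bug locks exist to prevent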

For a more involved example, here’s a lock in JavaScript:

// a lock with exponential backoff

function Lock() {
    this.state = 0; // 0 = free, 1 = held
    return this;
}

// acquire: poll until the lock is free, doubling the delay between
// polls; run cb once the lock has been taken
Lock.prototype.acquire = function(cb) {
    var self = this;
    var check = function(n) {
        if (self.state === 0) {
            self.state = 1;
            cb();
        } else {
            console.log("backing off to", 2*n);
            setTimeout(check, n, 2*n); // retry after n ms; next backoff is 2n
        }
    };
    setTimeout(check, 1, 1);
};

// tryAcquire: take the lock if it's free, but never wait
Lock.prototype.tryAcquire = function() {
    if (this.state === 0) {
        this.state = 1;
        return true;
    }
    return false;
};

Lock.prototype.release = function() { this.state = 0; };
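
As a quick usage sketch (mine, not from the original code): acquire takes a callback to run once the lock is held, while tryAcquire returns immediately instead of waiting.

var l = new Lock();
l.acquire(function() {
    console.log("acquired");
    console.log(l.tryAcquire()); // false: the lock is already held
    l.release();
});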

This code isn’t perfect (release doesn’t check ownership, so you can release someone else’s lock, for example), but it demonstrates all the relevant issues.
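
If the stolen-release hole bothers you, one standard fix is to hand out a fresh token on acquisition and demand it back on release. Here’s a minimal sketch of that variant (my own addition, not part of the original code):

function OwnedLock() {
    this.owner = null; // null = free; otherwise the current holder's token
}

OwnedLock.prototype.acquire = function(cb) {
    var self = this;
    var check = function(n) {
        if (self.owner === null) {
            var token = {};            // fresh object identifies this holder
            self.owner = token;
            cb(token);                 // the holder must present it to release
        } else {
            setTimeout(check, n, 2*n); // same exponential backoff as above
        }
    };
    setTimeout(check, 1, 1);
};

OwnedLock.prototype.release = function(token) {
    if (this.owner !== token) throw new Error("release by non-owner");
    this.owner = null;
};

Now we can code up dining philosophers: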

var N = 5;
var forks = [];
for (var i = 0; i < N; i++) {
    forks.push(new Lock());
}

function deadlock() {
    for (var i = 0; i < N; i++) {
        console.log("starting philosopher", i);
        // the IIFE captures this iteration's i (var is function-scoped)
        forks[i].acquire((function(i) {
            return function() {
                console.log("philosopher", i, "picked up fork", i);
                forks[(i+1)%N].acquire(function() {
                    console.log("philosopher", i, "eating");
                    forks[i].release();
                    forks[(i+1)%N].release();
                });
            };
        })(i));
    }
}

Running deadlock() deadlocks in my version of Firefox, just like you’d expect. We can adopt a lock-ordering solution, as described in the Wikipedia article on the problem. Each philosopher picks up the lower-numbered fork first, so no cycle of waiting can form:

function work() {
    for (var i = 0; i < N; i++) {
        console.log("starting philosopher", i);
        var j1 = Math.min(i, (i+1)%N); // lower-numbered fork first
        var j2 = Math.max(i, (i+1)%N);
        forks[j1].acquire((function(i, j1, j2) {
            return function() {
                console.log("philosopher", i, "picked up fork", j1);
                forks[j2].acquire(function() {
                    console.log("philosopher", i, "eating");
                    forks[j2].release();
                    forks[j1].release();
                });
            };
        })(i, j1, j2));
    }
}

Most of the time in JavaScript, you don’t need to write locks explicitly like this. However, you shouldn’t think that event-based concurrency eliminates synchronization, or shared memory, or anything other than preemption.
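
One place an explicit lock can earn its keep is a critical section that spans an asynchronous call. Here’s a hypothetical sketch using the Lock above; fetchValue stands in for any callback-taking async operation and is not a real API:

var cacheLock = new Lock();
var cache = {};

function refresh(key, fetchValue) {
    cacheLock.acquire(function() {
        // other callbacks may run while fetchValue is pending,
        // but none of them can enter this critical section
        fetchValue(key, function(value) {
            cache[key] = value;
            cacheLock.release();
        });
    });
}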

Of course, JavaScript is in good company here: even Erlang (message sending is arbitrarily-ordered mutation of process inboxes, see also Mnesia) and Haskell (the M in MVar stands for “mutable”) are languages with concurrency and shared mutable state.

The Computer Language Shootout is a popular, if not-so-informative, way to compare the speed of various language implementations. Racket does pretty well on their benchmarks, thanks to a lot of effort from various people, especially Eli. They run the benchmarks on both 1-core and 4-core machines, so languages with support for parallelism can take advantage of the extra cores in many cases. However, up until this past week there were no parallel versions of the Racket programs, and therefore Racket didn’t even show up on the 4-core benchmarks. I set out to fix this, in order to advertise Racket’s up-and-coming parallelism constructs.

There are now two new Racket versions of the benchmarks, one using futures and one using places. The mandelbrot benchmark uses futures, getting a speedup of approximately 3.2x on 4 cores, and the binary-trees benchmark uses places, with a speedup of almost exactly 2x.

I learned a few things writing these programs:

  1. Racket’s parallelism constructs, though new, are quite performant, at least on microbenchmarks. With only two parallel programs, Racket is right now competitive with Erlang on 4 cores.
  2. Futures are really easy to use; places take a little more getting used to. Both are quite simple once you get the hang of them, especially if you’ve previously written concurrent Racket programs using Racket’s threads.
  3. It can be very surprising which languages are easiest to translate to Racket. F# and OCaml were the easiest, with Scala similar. Programs written in Common Lisp, though fast, were much harder to convert to Racket.
  4. My quick rule of thumb for choosing between places and futures: if your program does much allocation in parallel, or needs to synchronize, use places; otherwise, futures are probably easier. I think this is roughly in line with the original design, and there are more applications where synchronization is unnecessary than you might think.

There are a bunch more programs that could have parallel implementations; feel free to hack on them, or to improve mine.
