Friday, September 10, 2010

Another Answer to "Why Interpreters?": Microsoft Does Continuations

Note: The parts of this post in parentheses may make no sense (fleegly-dee floo).  You can skip those parts and still get the point!

Yesterday, I chatted about 311 with my friend Dutch Meyer, a systems grad student at UBC and a very sharp person.  He pointed me at a systems/software engineering article about closures and continuations—two powerful concepts we'll run into this term—that he's returned to many times when thinking about concurrent system design.  (The article is "Cooperative Task Management Without Manual Stack Management" by Adya, Howell, Theimer, Bolosky, and Douceur of Microsoft Research.)



If you're not a heavy-duty systems person, this may not be the article to clarify closures and continuations for you, but the article's high-level approach does parallel our approach in 311.

The software design team is faced with an existential challenge.  One part of the group knows that automatically managing their call stack is the right thing to do (via, effectively, built-in continuations).  The other part of the group knows equally well that manually managing their call stack is the right thing to do (via manual conversion to continuation-passing style).  To function, the group needs to find a compromise.
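
For concreteness, here's a tiny sketch of the two styles in TypeScript (just a neutral notation here, not the paper's code; the function names are made up):

    // Automatic stack management: the language's call stack quietly remembers
    // "what to do after the lookup", so we write straight-line code.
    function totalDirect(prices: Map<string, number>, item: string): number {
      const base = prices.get(item) ?? 0;   // the lookup
      return base + 1;                      // plus a flat shipping fee
    }

    // Manual stack management: the same computation in continuation-passing
    // style.  "What to do next" is passed in explicitly as k instead of
    // living on the call stack.
    function totalCPS(prices: Map<string, number>, item: string,
                      k: (total: number) => void): void {
      const base = prices.get(item) ?? 0;
      k(base + 1);
    }

    // Same answer either way, different calling conventions:
    const prices = new Map([["tea", 2]]);
    console.log(totalDirect(prices, "tea"));               // 3
    totalCPS(prices, "tea", total => console.log(total));  // 3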

And they do, by building what we might think of as little interpreter fragments, which they call adaptors.  These are standardized interfaces between the two halves of the system: each adaptor presents the semantics the calling side expects while implementing them in terms of the other side's semantics.  The simulation works well enough that code can call back and forth between the two styles arbitrarily and still behave consistently.
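
Here's a toy sketch of the adaptor idea, again in TypeScript and far simpler than the paper's adaptors (which also have to cooperate with a scheduler and real I/O); the names are invented.  Each wrapper presents the calling convention one side expects while running code written in the other:

    type Cont<T> = (result: T) => void;

    // Adaptor 1: let direct-style code call a CPS-style function.  Hand the
    // CPS function a continuation that just captures its result, then return
    // that result.  (Assumes the CPS function invokes k before returning.)
    function callCpsFromDirect<T>(cpsFn: (k: Cont<T>) => void): T {
      let result!: T;
      cpsFn(r => { result = r; });
      return result;
    }

    // Adaptor 2: let CPS-style code call a direct-style function.  Run it and
    // feed its return value to the waiting continuation.
    function callDirectFromCps<T>(directFn: () => T, k: Cont<T>): void {
      k(directFn());
    }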

So, why build interpreters in CPSC 311?  Not necessarily because you'll ever build a whole interpreter for a whole language of your own, even a "little language" like Bentley's.  Rather, because when you find that the semantics of the language you're using don't offer the tools you need, you'll be ready to custom-build your own semantics to fit.

Cheers,

Steve

P.S. Here's my single favourite quote from the paper, which may make no sense out of context.  Enjoy!  "...with automatic stack management, [changing a function so that it may yield for I/O] is syntactically invisible and yet it affects the semantics of every function that calls the evolved function, either directly or transitively."
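
To make the ripple concrete, here's a hypothetical sketch (TypeScript again; getQuota, readQuotaFromDisk, and the rest are invented names).  Under manual stack management the evolution is syntactically loud: every function on the call path gets rewritten into CPS.  The quote's point is that under automatic stack management the same evolution changes none of the callers' text, yet still changes when they can be suspended and interleaved.

    // Before: getQuota answers from an in-memory cache, so its callers stay
    // in ordinary direct style.
    const cache = new Map<string, number>([["alice", 10]]);
    function getQuota(user: string): number {
      return cache.get(user) ?? 0;
    }
    function report(user: string): string {
      return `${user}: ${getQuota(user)}`;
    }

    // After: getQuota evolves to yield for I/O.  Now it takes a continuation,
    // and so must report, and so must anything that calls report.
    function readQuotaFromDisk(user: string, k: (quota: number) => void): void {
      setTimeout(() => k(cache.get(user) ?? 0), 0);   // stand-in for real I/O
    }
    function getQuotaCPS(user: string, k: (quota: number) => void): void {
      readQuotaFromDisk(user, k);
    }
    function reportCPS(user: string, k: (line: string) => void): void {
      getQuotaCPS(user, quota => k(`${user}: ${quota}`));
    }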

1 comment:

  1. Automatic stack management (in serial operations) vs. manual stack management (in cooperative operations), and finding the sweet spot between them, is something I believe most people will come across when programming (probably when hammering out a high-throughput server for the first time). I've actually found F# to be nice in this regard, as the built-in async workflow handles this perfectly: i.e., you write traditional 'serial' code and the async operations pass control off to other asynchronous operations behind the scenes. This results in code that looks like it's serial, when it is in fact non-blocking (with respect to other operations)!

    - Paul H.
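
Paul's point, sketched with TypeScript's async/await standing in for F#'s async workflows (the names below are invented): the code reads like the serial, direct-style version, but awaiting the I/O hands control to other work instead of blocking.

    // Looks like straight-line, direct-style code...
    async function getQuotaAsync(user: string): Promise<number> {
      // stand-in for a real non-blocking read
      return new Promise(resolve => setTimeout(() => resolve(42), 10));
    }

    async function reportAsync(user: string): Promise<string> {
      const quota = await getQuotaAsync(user);   // yields here without blocking
      return `${user}: ${quota}`;
    }

    reportAsync("alice").then(line => console.log(line));   // "alice: 42"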
