William John Bert

Loves to write words and code.

Using a Node Repl in Emacs With Nvm and Npm

Running a REPL inside Emacs is often convenient for evaluating code, checking syntax, and myriad other tasks. When I wanted to run a Node REPL, I found that I needed to do a little setup to get everything working the way I wanted.

My first question was: which Node? With nvm, I’ve installed multiple versions on my machine. So I needed a way to specify which one to execute.

Another question was: where to run Node? Since Node’s require() resolves modules by looking in node_modules directories starting with the current directory and working up the file system hierarchy, the current working directory is important. If I want access to the npm modules installed for project A, I need to start my REPL’s Node process from path/to/projectA.

But that raises another question: what happens when I want to switch to project B? Do I need to use process.chdir() to switch the Node REPL’s current working directory to path/to/projectB? That’s clumsy and annoying.

Here’s how I answered these questions:

nvm.el gives you nvm-use to activate a version of Node within Emacs. It’s basically a nice wrapper around setting the environment variables NVM_BIN and NVM_PATH and adding the directory of the Node version you want to use to your PATH. Great!

Except for one problem: nvm-use isn’t interactive. It’s meant to be used programmatically. So I needed to write a small do-nvm-use wrapper that lets me specify a version and then activates it:

(require-package 'nvm)

(defun do-nvm-use (version)
  "Prompt for a Node VERSION, activate it via nvm, and sync PATH."
  (interactive "sVersion: ")
  (nvm-use version)
  (exec-path-from-shell-copy-env "PATH"))

To specify where to run Node, I wrote another small defun, named run-node, that prompts for a directory in which to start Node. Before it does this, though, it checks whether a program named node is in the exec-path, and if not, it runs do-nvm-use first. Once we have a Node to execute and a directory to execute it in, we can make a new comint buffer bound to the REPL process.

To address the issue of different REPLs being needed for different projects, run-node adds the cwd to the buffer name. REPLs for project A and project B will live in buffers named *node-repl-path/to/projectA* and *node-repl-path/to/projectB*, respectively—making switching to the right buffer with ido trivial.

(defun run-node (cwd)
  "Start a Node REPL in directory CWD, activating a Node version if needed."
  (interactive "DDirectory: ")
  (unless (executable-find "node")
    (call-interactively 'do-nvm-use))
  (let ((default-directory cwd))
    (pop-to-buffer (make-comint (format "node-repl-%s" cwd) "node" nil "--interactive"))))

Now, to start a Node REPL, I just call run-node and I’m all set!

How Legit HTTP (With an Async Io Assist) Massacred My Node Workers

An uncaught exception in our Node app was causing not only one, but two and then three workers to die. (Fortunately, we hardly ever encounter uncaught exceptions. Really, just this one since launch a few months ago. We’re Node studs! Right?)

The funny thing is that we’re using Express, which (via Connect) wraps each request / response in a try / catch. And we use Express’s error handler, which returns 500 on unhandled errors.

Another funny thing is we use cluster, which isolates workers from each other. They live in separate, solipsistic processes.

But instead of returning 500, our worker simply died. And, as if in sympathy, the rest immediately followed.

Time to get to the bottom of this. A Node stud like me can figure it out. No sweat. Right?

For a sanity check, I went to Chrome and Firefox’s network inspectors. Only one POST, the bad request that triggered the exception. Everything else looks normal. Sanity: verified.

Then it was on to the cluster module. That magical “OS load balancing” seemed highly suspicious. But nope, I asked in #nodejs and they said that only applies at the TCP connection level. Once a connection is assigned to a worker, it never goes to another worker. Meaning that the bad request was isolated—only the worker who received the initial connection could encounter it.

But the workers kept on dying.

These workers morted out fast. They didn’t even return 500, or any kind of response. The more I thought about it, that didn’t really seem right. Not right at all. Why no 500?

But I can only tackle one mystery at a time. I wanted to understand: why did so many workers die?

Furious googling ensued. My efforts were rewarded with this nugget:

If an HTTP/1.1 client sends a request which includes a request body, but which does not include an Expect request-header field with the “100-continue” expectation, and if the client is not directly connected to an HTTP/1.1 origin server, and if the client sees the connection close before receiving any status from the server, the client SHOULD retry the request.

(From the HTTP 1.1 spec, RFC 2616. Original hat tip, which links to this informative post about double HTTP requests.)

My mind was somewhat blown. The browsers were right after all. They were just following HTTP. And—helpfully!—hiding the resent POSTs from the network inspector.

But POSTs are dangerous. They mutate resources! I must only click the Order button once or I may get charged multiple times!

I had a thought. One I have often, yet each time, it seems new again: I have much to learn.

Back to the 500s. Or lack thereof. Which got funnier still when I realized that other errors in our controllers that threw exceptions did return 500s. Being a hands-on kind of guy, I added one right at the top of a route controller: throw new Error("uh-oh"). My dev server spat back: 500 Error: uh-oh.

So why did that one particular error never, ever return a 500, or any response of any kind?

It’s my fault, really. I’m still a Node newbie (I must never forget this). I had missed that async IO callbacks run on a different call stack from the request / response cycle, one that originates in the event loop, so Express’s try / catch never sees exceptions thrown inside them.
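A stripped-down sketch makes this concrete (a hand-rolled queue stands in for the real event loop; none of this is Express code):

```javascript
// The try/catch wraps only the code that *schedules* the async work.
// By the time the callback actually runs, that stack frame is gone.
var queue = [];

function fakeAsyncOperation(callback) {
  queue.push(callback); // queued for later, not invoked now
}

var caughtAtScheduleTime = false;
try {
  fakeAsyncOperation(function () {
    throw new Error("thrown from the callback");
  });
} catch (e) {
  caughtAtScheduleTime = true; // never runs: nothing has thrown yet
}

// Later, the "event loop" invokes the callback on a fresh call stack,
// far outside the original try/catch:
var caughtAtRunTime = false;
try {
  while (queue.length) {
    queue.shift()();
  }
} catch (e) {
  caughtAtRunTime = true; // the error only surfaces here
}
```

Express’s wrapper is in the first position; the exception shows up in the second.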

It makes total sense. I have much to learn.

So what to do? require('domain') to the rescue. I can write some middleware (a bit of this, a dash of that) to wrap the request / response in a domain.

But how do I get this domain into my controller? My solution was to attach it to res.locals._domain. Good solution? I don’t know. I suspect there’s a better way. Good enough? It solved my immediate problem:

Model.find({key: value}, res.locals._domain.bind(function(err, docs) {
  // This callback can throw all it wants. My domain will catch it.
}));

Sweet. Now, armed with a reference to res in the domain error handler, I can return a 500. Voila, the browser gets its response. No more helpful resent POSTs. The silent gratitude of the spared workers is its own reward.

Except, do I need to bind every mongoose and other kind of async IO operation in my app? There are many.


I have much to learn.

Allow CORS With Localhost in Chrome

Today I spent some time wrestling with the notorious same-origin policy in order to get CORS (cross-origin resource sharing) working in Chrome for development work I was doing between two applications running on localhost. Setting the Access-Control-Allow-Origin header to * seemed to have no effect, and this bug report nearly led me to believe that a bug in Chrome made CORS with localhost impossible. It doesn’t. It turned out that I also needed some other CORS-related headers: Access-Control-Allow-Headers and Access-Control-Allow-Methods.

This (slightly generalized) snippet of Express.js middleware is what ended up working for me:

app.all("/api/*", function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "Cache-Control, Pragma, Origin, Authorization, Content-Type, X-Requested-With");
  res.header("Access-Control-Allow-Methods", "GET, PUT, POST");
  return next();
});

With that, Chrome started making OPTIONS requests when I wanted to POST from localhost:3001 to localhost:2002. It seems that POSTing with contentType: application/json forces a CORS preflight, since application/json isn’t one of the content types allowed for “simple” cross-origin requests. That surprised me, since it seems like a common case for APIs, but no matter:

app.all("/api/*", function(req, res, next) {
  if (req.method.toLowerCase() !== "options") {
    return next();
  }
  return res.send(204);
});

Emacs Cl-lib Madness

Emacs 24.3 renamed the Common Lisp emulation package from cl to cl-lib. The release notes say that cl in 24.3 is now “a bunch of aliases that provide the old, non-prefixed names”, but I ran into problems with certain packages looking for, as best I can determine, function names that changed at some point and were not kept around as aliases. This was particularly problematic when trying to run 24.3 on OS X 10.6.8.

In case anyone else runs into this problem, here’s my solution:

;; Require Common Lisp. (cl in <=24.2, cl-lib in >=24.3.)
(if (require 'cl-lib nil t)
    ;; Define aliases so packages that expect the old names still work.
    (progn
      (defalias 'cl-block-wrapper 'identity)
      (defalias 'member* 'cl-member)
      (defalias 'adjoin 'cl-adjoin))
  ;; Else we're on an older version, so require cl.
  (require 'cl))

We try to require cl-lib, and when that succeeds, define some aliases so that packages don’t complain about missing cl-block-wrapper, member*, and adjoin. If it doesn’t succeed, we’re on an older Emacs, so require the old cl.


A few days ago, I happened by chance to read these two articles one after the other:

The first is about how good Unix is at scaling the scheduling and distribution of work among processes. The second is about how Unix is the problem when it comes to the scheduling and distribution of work at scale.

The question, of course, is “what scale?” As with a cure and a poison, the difference is sometimes the dosage.

Zero to Node, Again

At NodeDC’s January meetup, I’ll be giving a reprise of my Zero to Node talk, about designing, coding, and launching my first web service using Node.js. The meetup is Wednesday, Jan 23, at Stetson’s (1610 U St NW). Hope to see you there!

Review of Requests 1.0

Author’s note: This piece was originally published in the excellent literary journal DIAGRAM, Issue 12.6. I’m re-publishing here for formatting reasons.

Identification with another is addictive: some of my life’s most profound, memorable experiences have come when something bridged the gap between me and another human. Because I’m a reader, this can occur across the distance of space and time. It’s happened with minor Chekhov characters, and at the end of Katherine Mansfield stories. It happens again and again with Norman Rush and George Saunders. The author has pushed a character through the page and connected with me on a deep level: identification.

Identification happens with computer programming, too.

I say this as a reader, writer, and programmer: I experience identification when reading and programming, and I strive to create it when writing and programming.

Though they deal with the messiness of reality differently, several techniques common to both disciplines enable them to achieve this mental intimacy: navigating complexity; avoiding pitfalls that inhibit communication; choosing structure wisely; harnessing expressive power; and inhabiting other minds. The Requests library, a work of computer programming by Kenneth Reitz, illustrates this.

A Case Study of Node.js in Production

I’m giving a talk about my experience developing and deploying a Node.js web service in production at the next Nova-Node meetup, October 30 at 6:30 p.m. Below is the writeup. If it sounds interesting to you, come by!

SpanishDict recently deployed a new text-to-speech service powered by Node. This service can generate audio files on the fly for arbitrary Spanish and English texts with rapid response times. The presentation will walk through the design, development, testing, monitoring, and deployment process for the new application. We will cover topics like how to structure an Express app, testing and debugging, learning to think in streams and pipes, writing a Chef cookbook to deploy to AWS, and monitoring the application for high performance. The lead engineer on the project, William Bert, will also talk about his experiences transitioning from a Python background to Node and some of the key insights he had about writing in Node while developing the application.

Update: here are the slides from the talk.