Nadeesha Cabral


Javascript Promises as a poor person's Either Type

October 17, 2020

    I’ve been thinking about elegantly handling exception scenarios in synchronous JavaScript code for a while. I’m a fan of Either types and Future types, and my views about what’s “right” are heavily influenced by using them in production code-bases.

    The Problem

    const getDogBreedColors = (breed: string) => {
      let validBreed
      try {
        validBreed = isValidBreed(breed)
      } catch (e) {
        throw new Error(`The breed ${breed} is not valid`)
      }
      return breedColorsDictionary[validBreed]
    }

    I don’t like this code for two reasons.

    1. The let: it implies that the value of validBreed will mutate over time. Essentially, it creates state within the function context, and it leaves room for someone else to extend the function by mutating the let further.
    2. The try-catch: it essentially creates branching logic. It also reduces the readability of the code, because the function can exit at the catch.

    An extreme case

    Taken to an extreme, these two can make code look like this:

    // doSomethingSync.ts
    const doSomething = (param: string): string => {
      let interim = "value_in_case_of_error"
      try {
        const validated = validate(param)
        interim = doInterimOperation(validated)
      } catch (e) {
        // log error
      }
      return toOutputResultShape(interim)
    }

    We may end up with n variables to hold state and up to n+1 exit paths.

    We can bundle the try-catches together, but can we do one better?
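    Here is a sketch of what bundling could look like, with placeholder implementations for validate, doInterimOperation, and toOutputResultShape (the post doesn’t define them). The let is gone, but the exit path is now duplicated:

```typescript
// doSomethingBundled.ts — placeholder helpers; the originals aren't shown in the post.
const validate = (param: string): string => {
  if (param.length === 0) throw new Error("empty param");
  return param.trim();
};
const doInterimOperation = (validated: string): string => validated.toUpperCase();
const toOutputResultShape = (interim: string): string => `result: ${interim}`;

const doSomething = (param: string): string => {
  try {
    // Bundled: no let, and one try-catch around the whole happy path...
    return toOutputResultShape(doInterimOperation(validate(param)));
  } catch (e) {
    // log error
    // ...but the output-shaping step now appears in two places.
    return toOutputResultShape("value_in_case_of_error");
  }
};
```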

    Enter Promise

    I know promises are asynchronous, but humor me for a moment:

    // doSomethingAsync.ts
    const doSomething = (param: string): Promise<string> => {
      return Promise.resolve(param)
        .then(initial => validate(initial))
        .then(validated => doInterimOperation(validated))
        .catch(error => {
          // log error
          return "value_in_case_of_error"
        })
        .then(interim => toOutputResultShape(interim))
    }

    Admittedly, constantly deferring to the event loop makes this run slower (the context-switching cost of asynchronous code is higher). How slow? Very. Here’s the benchmarking test you can run yourself. You will notice that it’s almost twice as slow.
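    The linked benchmark isn’t reproduced here, but a rough sketch of the kind of comparison it makes might look like this (absolute numbers will vary by runtime):

```typescript
// benchSketch.ts — a rough sketch of the comparison, not the linked benchmark.
const syncInc = (n: number): number => n + 1;
const asyncInc = (n: number): Promise<number> => Promise.resolve(n).then(x => x + 1);

const bench = async () => {
  const iterations = 200_000;

  let t0 = Date.now();
  let acc = 0;
  for (let i = 0; i < iterations; i++) acc = syncInc(acc);
  const syncMs = Date.now() - t0;

  t0 = Date.now();
  acc = 0;
  for (let i = 0; i < iterations; i++) acc = await asyncInc(acc);
  const asyncMs = Date.now() - t0;

  // asyncMs is typically several times larger than syncMs: every .then
  // defers its continuation to the microtask queue.
  console.log({ syncMs, asyncMs });
};

bench();
```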

    Something about this feels right to me

    I’ll be the first to admit that intentionally adding context-switching overhead to synchronous code sounds bad.

    But I’d also argue that this approach has some merits:

    1. Using chained promise syntax avoids shared mutable state. (Getting rid of state in my code is a win in my book.)
    2. It improves the readability of the code. It’s easy to reason about code that doesn’t have lets.
    3. The code is written the same way whether it’s synchronous or asynchronous.

      • For example, let’s change doInterimOperation to a promise-returning async function.
      • doSomethingSync.ts would change a lot.
      • doSomethingAsync.ts would stay the same.
    4. The code is simpler to write and extend.

      1. If you need additional error handling, you just slot an extra catch into the promise chain.
      2. In the synchronous version, you’d have to introduce another let and another try-catch block.
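    Point 3 can be illustrated with a sketch (again with placeholder helpers, since the originals aren’t shown): if doInterimOperation becomes an async function, the chain from doSomethingAsync.ts is unchanged, because .then flattens returned promises:

```typescript
// Placeholder helpers: doInterimOperation is now async, the others are unchanged.
const validate = (param: string): string => param.trim();
const doInterimOperation = async (validated: string): Promise<string> =>
  validated.toUpperCase(); // imagine a network call here
const toOutputResultShape = (interim: string): string => `result: ${interim}`;

// Identical to the doSomethingAsync.ts chain above — no edits needed.
const doSomething = (param: string): Promise<string> => {
  return Promise.resolve(param)
    .then(initial => validate(initial))
    .then(validated => doInterimOperation(validated)) // .then flattens the returned promise
    .catch(error => {
      // log error
      return "value_in_case_of_error";
    })
    .then(interim => toOutputResultShape(interim));
};
```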

    Also, the context-switching overhead might not be a problem in real-life applications:

    1. If you use JavaScript for computationally intensive tasks, you’re probably doing something wrong. (read: Don’t block the event loop)
    2. The main thread of a JavaScript program spends most of its time idling.
    3. Whether you do 50,000,000 ops/sec or 25,000,000 ops/sec rarely makes a difference for IO-heavy use-cases.

    Should you write code this way?

    This definitely feels like a trade-off between efficiency and readability. So, I’m not sure.

    Ideally, it’d be nice to have a native promise-like construct that doesn’t defer when a function returns a non-Promise value.
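    As a sketch of what that could look like in userland — this Sync wrapper is hypothetical, not an existing API — here is a minimal synchronous Either-style type with a promise-like chaining surface:

```typescript
// sync.ts — a hypothetical synchronous Either-like wrapper (a sketch, not a library API).
class Sync<T> {
  private constructor(
    private readonly value: T | undefined,
    private readonly error: unknown, // undefined means "no error" in this sketch
  ) {}

  static of<T>(value: T): Sync<T> {
    return new Sync<T>(value, undefined);
  }

  // Like .then, but runs immediately — no trip through the event loop.
  map<U>(fn: (value: T) => U): Sync<U> {
    if (this.error !== undefined) return new Sync<U>(undefined, this.error);
    try {
      return Sync.of(fn(this.value as T));
    } catch (e) {
      return new Sync<U>(undefined, e);
    }
  }

  // Like .catch: recover from an error with a fallback value.
  recover(fn: (error: unknown) => T): Sync<T> {
    if (this.error === undefined) return this;
    return Sync.of(fn(this.error));
  }

  get(): T {
    if (this.error !== undefined) throw this.error;
    return this.value as T;
  }
}

// Usage: the same chain shape as the promise version, but fully synchronous.
const result = Sync.of(" woof ")
  .map(s => s.trim())
  .map(s => s.toUpperCase())
  .recover(() => "value_in_case_of_error")
  .get();
```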