Javascript: Require and Import Found Harmful

For the moment, let’s go ahead and make an assumption: automated tests (unit tests, integration tests, etc.) make code safer to write, update and change. Even when tests break, the breakage says something about how the code was written and wired together. I will address this concern in another post at a later date. Nevertheless, I am going to rely on this assumption throughout this post, so if you disagree with it, you might be better served to drop out now.

Now that the grumpy anti-testers are gone, let’s talk, just you and I.

I don’t actually believe that require or import — from the freshly minted ES module system — are inherently bad; somewhere, someone needs to be in charge of loading stuff from the filesystem, after all. Nevertheless, require and import tie us directly to the filesystem, which makes our code brittle and tightly coupled. This coupling makes all of the following things harder to accomplish:

  • Module Isolation
  • Extracting Dependencies
  • Moving Files
  • Testing
  • Creating Test Doubles
  • General Project Refactoring

The Setup

Let’s take a look at an example which will probably make things clearer:
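A sketch in the spirit of the original example: a small service wired straight to the filesystem and an external logger through require.

```javascript
// reportService.js -- a stand-in example; note the hard-wired require calls
const fs = require('fs');
const logger = require('../util/logger');

function saveReport(path, report) {
    fs.writeFile(path, JSON.stringify(report), function (error) {
        if (error) {
            logger.error('Failed to write report: ' + path);
        }
    });
}

module.exports = {
    saveReport: saveReport
};
```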

To get a sense of what we have to do to isolate this code, let’s talk about a very popular library for introducing test doubles into Node tests: Mockery. This package manipulates the node cache, inserting a module into the runtime to break dependencies for a module. Particularly worrisome is the fact that you must copy the path for your module dependencies into your test, tightening the noose and deeply seating this dependence on the actual filesystem.
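Concretely, a Mockery-based setup for the module above might look something like this sketch; notice the dependency paths copied straight into the test:

```javascript
const mockery = require('mockery');

describe('reportService', function () {
    let reportService;

    beforeEach(function () {
        mockery.enable({ useCleanCache: true, warnOnUnregistered: false });

        // these paths must match the require calls inside reportService.js
        mockery.registerMock('fs', { writeFile: function () {} });
        mockery.registerMock('../util/logger', { error: function () {} });

        reportService = require('../src/reportService');
    });

    afterEach(function () {
        mockery.deregisterAll();
        mockery.disable();
    });
});
```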

When we try to test this, we either have to use Mockery to jam fakes into the node module cache or we have to interact directly with the external systems: the filesystem and the external logging service. I would lean — and have leaned — toward using Mockery, but it leads us down another dangerous road: what happens if the dependencies change location? Now we are interacting with the live system whether we want to or not.

This actually happened on a project I was on. At one point, all of our tests were real unit tests (i.e. they tested only the local unit we cared about), but something moved, a module changed, and all of a sudden we were interacting with real databases and cloud services. Our tests slowed to a crawl and we noticed unexpected spikes on systems which should have been under relatively low load.

Mind you, this is not an indictment of test tooling. Mockery is great at what it does. Instead, the tool highlights the pitfalls built into the system. I offer an alternative question: is there a better tool we could build which breaks the localized dependence on the filesystem altogether?

It’s worthwhile to consider a couple of design patterns which could lead us away from the filesystem and toward something which could fully decouple our code: Inversion of Control (a close relative of SOLID’s Dependency Inversion principle) and the Factory pattern.

Breaking it Down

To get a sense of how the factory pattern helps us, let’s isolate our modules and see what it looks like when we break all the pieces up.
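Using the factory pattern, the sketch from earlier might be reshaped like so, with the dependencies handed in rather than loaded:

```javascript
// reportServiceFactory.js -- the same behavior, now declared as a factory
// which receives its dependencies instead of requiring them
function reportServiceFactory(fs, logger) {
    function saveReport(path, report) {
        fs.writeFile(path, JSON.stringify(report), function (error) {
            if (error) {
                logger.error('Failed to write report: ' + path);
            }
        });
    }

    return {
        saveReport: saveReport
    };
}

module.exports = reportServiceFactory;
```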

With this refactoring, some really nice things happen: our abstractions are cleaner, our code becomes more declarative, and all of the explicit module references simply disappear. When modules no longer need to be concerned with the filesystem, we gain much more freedom to move files around and decouple concerns. Of course, it’s unclear who is actually in charge of loading the files into memory…

Whether it be in your tests or in your production code, the ideal solution would be some sort of filesystem-aware module which knows which name is associated with which module. The classic name for something like this is either a Dependency Injection (DI) system or an Inversion of Control (IoC) container.

My team has been using the Dject library to manage our dependencies. Dject abstracts away all of the filesystem concerns which allows us to write code exactly how we have it above. Here’s what the configuration would look like:
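The sketch below shows the shape of the idea; the option names (cwd, modulePaths, allowNodeModules) are assumptions rather than a verbatim reference, so check the Dject documentation for the exact configuration API:

```javascript
// container.js -- a hypothetical container configuration
const dject = require('dject');

module.exports = dject.new({
    cwd: __dirname,
    modulePaths: [
        'src/services',
        'src/core'
    ],
    allowNodeModules: true
});
```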

Module Loading With Our Container

Now our main application file can load dependencies with a container and everything can be loosely referenced by name alone. If we only use our main module for loading core application modules, it allows us to isolate our entire application module structure!
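A minimal sketch of that main file, assuming a build method which resolves modules by registered name:

```javascript
// main.js -- the only file that knows about the container
const container = require('./container');

// everything below is referenced by name alone; no relative paths in sight
const app = container.build('app');

app.run();
```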

Containers, Tests and A Better Life

Let’s have a look at what a test might look like using the same application modules. A few things will jump out. First, faking system modules becomes a trivial affair. Since everything is declared by name, we don’t have to worry about the module cache. In the same vein, any of our application internals are also easy to fake. Since we don’t have to worry about file paths and file-relative references, simply reorganizing our files doesn’t impact our tests which continue to be valid and useful. Lastly, our module entry point location is also managed externally, so we don’t have to run around updating tests if the module under test moves. Who really wants to test whether the node file module loading system works in their application tests?
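Here is a sketch of such a test; the register and build method names are assumptions about the container API rather than verbatim Dject calls:

```javascript
const assert = require('chai').assert;
const container = require('../container');

describe('reportService', function () {
    let reportService;
    let writtenPaths;

    beforeEach(function () {
        writtenPaths = [];

        // fakes are registered by name alone -- no file paths, no module cache
        container.register(function fs() {
            return {
                writeFile: function (path, data, callback) {
                    writtenPaths.push(path);
                    callback(null);
                }
            };
        });

        container.register(function logger() {
            return { error: function () {} };
        });

        reportService = container.build('reportService');
    });

    it('writes the report to the requested path', function () {
        reportService.saveReport('reports/summary.json', { total: 42 });

        assert.equal(writtenPaths[0], 'reports/summary.json');
    });
});
```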

Wrapping it All Up

With all of the overhead around filesystem management removed, it becomes much easier to think about what our code is doing in isolation. This means our application is far easier to manage and our tests are easier to write and maintain. Now, isn’t that really what we all want in the end?

For examples of full applications written using DI/IoC and Dject in specific, I encourage you to check out the code for JS Refactor (the application that birthed Dject in the first place) and Stubcontractor (a test helper for automatically generating fakes).


Coder Joy – Exploring Joyfulness In Creating Software

A couple weeks ago, I attended Agile Open Northwest. As with every other Agile Open conference I have attended, there were plenty of eye-opening experiences. One experience which surprised me was actually a session I facilitated. I went in with a concept of where it would land and I was dead, flat wrong.

I anticipated my session about creating joy for coders would be small and filled primarily with technical folk who wanted to figure out how they could make their experience incrementally better while they worked daily at their company. Instead I had a large crowd of people who performed a variety of duties at their company and they all wanted to get a perspective on how to make things more joyful. This, in itself, brought me a ton of joy.

The Inspiration

Before I dive into the experiment I ran, I want to share a little background on how I arrived at the idea that there could be joy while coding. I know a lot of developers get a fair amount of joy out of creating something new. That first line of code in a fresh project can create a near euphoric experience for some developers. Later, as the code base ages, it can seem as though the joy has drained completely from the project and all that is left is drudgery and obligation to keep this project on life support.

I felt this very same thing with one of my own open source projects. I was shocked: I had started developing something I very much wanted, only to find myself a couple years later feeling totally defeated every time I opened my editor to do a little bug fixing.

I decided I would make one last effort to combat this emotional sinkhole which used to be an exciting project for me. I went to work capturing information at the edges of my code, clarifying language which was used throughout and, ultimately, reformulating the domain language I would use to describe what I was doing. After some time and a little digital sweat, the project came back to life!

I realized I was actually more excited about my project NOW than I had been when I first started writing it. I was experiencing that same joy I used to feel when starting something afresh. Once I found this new dharma in code, I felt it only made sense to try to share it with my team at work. They loved the notion of joyfulness.

An Experiment In Joy

Having found this new, lasting sense of joy, I wanted to share it with the agile community. I didn’t have a deep plan, carefully constructed with layers of meaning and a precise process with which I could lead people to the promised land. I was just hoping someone would show up and be willing to share. With that in mind, here’s what my basic plan was:

  1. Drain the wound
  2. Make a Joy Wishlist
  3. Pave the Path
  4. Take Joy Back to the Team

I hoped this would be enough to get people talking about joy, what it meant and how they could claim it in their team. It seemed to pay off. Just having a list of pithy step names isn’t really enough to make sense of how to run this experiment in another environment, however. Let’s have a look at each step and what it means.

Running the experiment

Before starting the experiment, the facilitator must come in with certain assumptions and be willing to defuse any emotional outbursts that might begin to arise. It’s important to note that defusing is not simply squashing someone’s feelings; it is accepting that they have a feeling and helping to reframe whatever might be leading them to feel that way.

In severe cases, the feelings that arise while experimenting might actually warrant deeper exploration before continuing to dig into building joy within your team. These explorations are okay, and should be welcomed, since they will help people start understanding where others on the team are coming from.

Drain The Wound

If we consider how medicine works, if someone is sick because they have a wound which has become infected, the first step to regaining health is to drain the wound in order to get the toxins out of the body. In much the same way, we will never become healthy as a team if we don’t drain the wounds which have accumulated throughout our lifetimes.

The exercise of draining the wound is simple: give people post-its, pens and a place to put everything and have them write down every last negative thing they feel about their work, the process or anything else that has them struggling with emotion coming into the experiment. It is important to discourage using names or pointing fingers since this simply spreads toxins around. We want to capture the toxic feelings and quarantine them.

The most important thing after performing this draining is to take all of the collected post-its and place them somewhere people can see them. Make sure to note: we are not ignoring the pain; we all know it is here and it has been exposed.

This is important.

If something new bubbles up during the course of the experiment, people should be welcome to continue adding to the quarantine. It’s not off-limits to update; we are just setting all of the infection in a place where it can’t hurt us.

Sometimes wounds need more than one draining. Don’t be afraid of coming back around and doing this multiple times. Always be vigilant and identify new wounds that may be growing. Emotions can be fragile and wounds can be inflicted over time. We’re human, after all.

Make a Joy Wishlist

Everyone has things they wish they had. More time off, more support, faster builds, cleaner code, better communication, etc. Encourage people to simply write these things down. As a facilitator, you can also log ideas as people generate them. The important thing is to avoid filtering. The goal is to identify all the things which would make people happy regardless of how big, small or outlandish they are.

One important note: don’t log (or allow) anything like “Bob needs to deliver stuff on time” or “make code suck less.” These kinds of negative statements are not things to aim for. Instead, encourage people to think about what it means for code to suck less. Perhaps what Bob needs in order to deliver on time is support, so try to capture something like “help Bob when he feels overloaded.”

Pave the Path

Once people have come to a natural close on what their joy might look like, it’s time to start looking for ways to take action. It’s not particularly useful to identify things which could be joyful if they are little more than simply a dream never to be realized. Aim to start paving a path people can walk to approach the things which help them feel greater joy.

Once again, our goal is to seek positive actions, which means we want to limit the kind of negativity which might bubble up. The negativity is likely rooted in the toxins we purged at the beginning, so help people to reframe into something positive.

In the session at Agile Open, someone mentioned they believed developers are lazy. Instead of dwelling on it or letting it become a toxin in the middle of our joy seeking, I tried to encourage them to understand the developers and their position, while also pointing out these same developers might not understand that person’s position either. From here, we can see joy might take the form of more open, honest communication. Once we understand the tension, we can seek a problem and pose solutions.

Take Joy Back to the Team

To close the session, I encouraged people to think about ways they could take the message of joy, and what we discovered during our exploration, back to their respective teams. The goal here was not to be prescriptive. Instead, each team needs to find their own joy and their own way to walk the path.

Within your own team there are several directions you can go from here; any of the following would be reasonable to try out:

  • Dot vote on a point of joy to focus on and develop a path toward joy
  • Pick an action at random, try it for a week and reflect on whether things were better or worse
  • Leave each person with the idea that they must travel their own path to find joy and simply leave a single rule: one person’s path cannot stomp on another’s
  • Devise your own brilliant plan on how to start moving toward joyful coding given what you’ve found

The most important part of this last bit is to ensure people feel empowered to drive toward joy on their own and together. It is up to the team to support each other and elevate the environment they are in.

Closing, Joy and My Approach

I will say, there is a fair amount of time I spend seeking joy on my own. I typically mix a bunch of different ingredients including music, time for recharge, time to vent, and poking the system until it moves in a way that feels better to me.

It’s really hard to say what might bring joy in a team or even for an individual, but I find joy often comes to me as clear communication — both in code and person to person, as a sense of directed purpose, as automation, and as opportunity to share, with others, the things which make my life a little better.

Perhaps you’ll find your team values some of these things as well. In all likelihood, your team will value things I wouldn’t have thought to include in this post; this too is completely acceptable. Unfortunately, seeking joy is not a clear path with a process to force it to emerge. Joy is a lot like love: you just know it when you feel it.


Creating Programmer Joy with Type Enforcement

It’s extremely common for developers who work in statically typed languages to talk about how much easier their code is to maintain and how the code is self-documenting because of the type system. However, these same programmers often talk about the amount of “ceremony” they have to overcome to work within the type system and language of their choice. The ceremony issue seems to be reduced to zero within the Javascript community because of the dynamic type system. On the other hand, it is common to hear JS developers complain about how difficult it is to maintain a codebase which brought them so much joy while they were creating it.

In a project I have been maintaining for the last couple of years, I started off with the joy of creation, only to find myself fearing the idea of jumping back in and making updates to resolve bugs logged by users. I started considering options. Should I rewrite the entire codebase from scratch? Should I write it in a different language altogether? It is a plugin for VS Code, so Typescript is the preferred option, though everything I wrote was in vanilla Javascript.

Between the creation of JS Refactor and this past November, I started creating a suite of different libraries all of which were employed to solve the exact kinds of problems I had in my plugin: tight coupling, undocumented code, uncertainty, and the requirement of having to go back and reread my code to rebuild context so I could start working again.

The last two issues were my greatest hurdle. I felt completely uncertain about what the code looked like which I had written to create the plugin in the first place and the only way to understand it was to go back and read it again.

Ultimately, for all of the effort I made to keep my code clean, I had still created write-only code and I was miserable about it.  Nevertheless, I started with the first bug that made sense for me to tackle and dove in.  I plodded along and my dread quickly turned into joy.  Something had happened which actually made me want to throw myself back into this (tested) legacy project.

Somewhere in the process I discovered real programmer joy.

Set aside the fact that I created a dependency injection library and integrated it a while back (this was not the source of my joy). Let’s even set aside the tool I created for turning automated tests written in Mocha, Jasmine and Jest into human readable documentation.  The thing that made my life easy and joyful were the types!

No, I didn’t make the switch to Typescript.  For all the good Typescript offers to the user, the issue of being constrained by the type system was still too much for me to bear.  Instead I stayed within plain old Javascript and started really leaning hard on the Signet type system.

First things first, I started identifying the types of objects and data I was going to interact with and I created just enough type information to say something meaningful about it all.  Here’s a sample of what I created:
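Here’s the flavor of it; these particular type names are illustrative stand-ins rather than the originals:

```javascript
// illustrative domain types in the spirit of the originals
const signet = require('signet')();

// an AST node is an object carrying a string type property
signet.subtype('object')('astNode', function (node) {
    return typeof node.type === 'string';
});

// editor positions are non-negative integers
signet.subtype('int')('position', function (value) {
    return value >= 0;
});

// a coordinate pair within the editor
signet.defineDuckType('editorCoords', {
    line: 'position',
    column: 'position'
});
```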

After I got my type information lined up, I started working. As I slung code and discovered new information about the data I was working with, I tweaked my types to tell future me, or another developer, what kind of information really was lurking in that data with which I was interacting.

As I worked I would forget what a specific API called for, or how it worked. I would open the source code and, instead of trying to interpret the functions I had created, I simply referred to the signatures at the bottom and my context was instantly rebuilt.

What made this such a revelatory experience was not that I simply had type information encoded into my files, but that it was always accurate and, if I got something wrong, I would get real, useful information about how I could fix it. The types are evaluated at run time and can verify things like bounded values and in-bound function behaviors. The more code I wrote, the faster I got. The more I introduced types and encoded real, domain-specific information into my files, the better my program became.

My code came to look like the kind of code I always wanted to write: strict and safe at the edges and dynamic in the middle. As long as I know what is coming in and what is going out I am safe to trust myself, or anyone else, to behave as they should in the middle of their function, because they can’t get it wrong.

All of a sudden typed variables became irrelevant and creating something from what existed became an exercise in joy. The game went from dynamic or static to dynamic and dependent. I could encode logical notions into my software and they always led to something better. An example of what this looks like is as follows:
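A sketch of the idea, continuing with the illustrative types from above; the contract encodes a domain fact rather than scattered defensive checks:

```javascript
// a selection knows its start and end coordinates
signet.defineDuckType('selectionRange', {
    start: 'editorCoords',
    end: 'editorCoords'
});

// given a selection and the source lines, return only the covered lines
const getSelectionLines = signet.enforce(
    'selectionRange => array<string> => array<string>',
    function getSelectionLines(selection) {
        return function (sourceLines) {
            return sourceLines.slice(selection.start.line, selection.end.line + 1);
        };
    });
```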

With all of this information encoded in my program, I could start writing tests which actually described what is really happening under the covers. Creating example data could be done relatively effortlessly by simply fulfilling the type contract. Even creating and interacting with automated tests brought me joy:
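A sketch of such a test, reusing the illustrative function above; building example data is just a matter of fulfilling the type contract:

```javascript
const assert = require('chai').assert;

describe('getSelectionLines', function () {
    it('returns the lines covered by the selection', function () {
        const selection = {
            start: { line: 1, column: 0 },
            end: { line: 2, column: 3 }
        };

        const result = getSelectionLines(selection)(['a', 'b', 'c', 'd']);

        assert.deepEqual(result, ['b', 'c']);
    });
});
```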

This meant that all of the code would lead back around to the start again, and each piece (type definitions, type annotations and tests) told the story of how the program worked as a whole. For the small amount of extra work at the beginning of a given thread of thought, the payout was tremendous at the end.

Now, does this mean that types ARE joy? No.

All this really says is a good, rich type system can, and should, help tell the story of your program. It is worth noting my code reflects the domain I work in, not the types of data living within objects and values. Arguably, if a programmer goes type crazy and codes something obscure into types (like some of the atrocities committed by overzealous Scala programmers) the types can bring pain. Instead we should aim to speak the same language as other humans who work around us. Never too clever, never too obscure, just abstraction in simple language which helps form immediate context.

At the end of the day, anything could be used to build beautiful abstractions, but why not use a tool that helps you fall into the pit of success?


A Case for Quickspec

Any software community has a contingent which agrees that tests are a good thing and testing first leads us to a place of stable, predictable software. With this in mind, the biggest complaint I’ve heard from people is “testing takes too long!”

This blog post covers the testing library Quickspec, which can be installed from NPM, so you can follow along!

All tests in this post will be written to test the following code:
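The code under test, reconstructed as a minimal sketch:

```javascript
// a hedged reconstruction of the composition under test
function multiply(a, b) {
    return a * b;
}

function divide(a, b) {
    return a / b;
}

function multiplyThenDivide(a, b, divisor) {
    return divide(multiply(a, b), divisor);
}
```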

Testing a Composition With Mocha and Chai

Coming from a background of testing first, I am used to writing tests quickly, but even I have to concede, there are times I just don’t want to write all of the noise that comes with multiple use cases. This is especially true when I am writing tests around a pure function and I just want to verify multiple different edge cases in a computation.

If we were going to test several cases for the composition multiplyThenDivide, the output would look a lot like the following:
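A hedged sketch of those tests, one nearly identical block per case:

```javascript
const assert = require('chai').assert;

describe('multiplyThenDivide', function () {
    it('multiplies 4 by 6 and divides by 2', function () {
        assert.equal(multiplyThenDivide(4, 6, 2), 12);
    });

    it('multiplies 3 by 5 and divides by 1', function () {
        assert.equal(multiplyThenDivide(3, 5, 1), 15);
    });

    it('handles negative values', function () {
        assert.equal(multiplyThenDivide(-4, 6, 2), -12);
    });

    // ...and so on, one block per interesting case
});
```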

Testing a Composition with Mocha and Quickspec

Although testing a simple composition like this is not particularly hard, we can see that representing all interesting cases actually requires a whole bunch of individual tests. Arguably, there are enough cases that we’d get bored testing before the testing is actually done (and we did!).

What if we could retool this test to be self contained and eliminate the testing waste?

This question is precisely what Quickspec aims to answer. Let’s have a look at what a simple Quickspec test might look like:
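The sketch below infers the spec-set shape and verify call from the discussion that follows; the real Quickspec API may differ in its particulars:

```javascript
const quickspec = require('quickspec');
const assert = require('chai').assert;

// setup: the cases we care about, declared up front as data
const specSet = [
    { name: 'whole values',     a: 4,  b: 6, divisor: 2, expected: 12 },
    { name: 'identity divisor', a: 3,  b: 5, divisor: 1, expected: 15 },
    { name: 'negative values',  a: -4, b: 6, divisor: 2, expected: -12 }
];

// execution: a single verify call shared by every case
quickspec.verify(specSet, function verify(testCase) {
    const result = multiplyThenDivide(testCase.a, testCase.b, testCase.divisor);
    assert.equal(result, testCase.expected);
});
```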

Why Is This Better?

Separation of Setup and Execution

The first benefit we get from using Quickspec is that we can see, up front, all of the cases we are going to test. This makes it much easier to see whether we have tested all of the cases we care about, and it gives us a clear picture of what our tests verify, completely separated from the execution of the code under test.

Deduplication of Test Execution

The second benefit is that we eliminate duplicate code. In this example, the duplication is simply a call to multiplyThenDivide and a call to verify, but it could have represented a full setup of modules or classes, dependency injection and so on. It’s common to have this kind of setup in a beforeEach function, but that introduces the possibility of shared state, which can make tests flaky or unstable.

Instead, we actually perform our setup and tie it directly into our test. This means we have a clear path of test execution so we can see how our specifications link to our test code. Moreover, we only have to write any test boilerplate once, which means we reduce the amount of copy-paste code which gets inserted into our test file.

Declarative Test Writing

Finally, if we discover a test is missing, all it takes is the definition of a new test case and we’re done. There is no extra test code which needs to be written, nor extra boilerplate to be introduced. Each test is self-contained and all specifications are clearly defined, which means our tests are more declarative and their purpose is clear.

Other Testing Capabilities

Async Testing

Quickspec is written around the idea that code is pure, thus deterministic, but it is also built to be usable in asynchronous contexts, which is a reality in Javascript. This means things like native Javascript promises and other async libraries won’t make it impossible to test what would, otherwise, be deterministic code.

Testing with Theorems

Writing theorem tests with Quickspec could be its own blog post, so I won’t cover the entirety here, though this is an important point.

Instead of hand-computing each expected value, Quickspec allows you to write tests where the outcome is computed just in time. This applies especially well where outcomes are easily computable, but hand-writing every special case could lead to extra, winding code, or code which might actually need an external process to collect values in the interim.

What this all means

In the end, traditional unit tests are great for behaviors which introduce side effects like object mutation, state modification or UI updates. When tests are deterministic, however, Quickspec streamlines the process of identifying and testing all of the appropriate cases and verifying the outcomes in a single, well-defined test.

Install Quickspec from NPM and try it out!


Why Should I Write Tests?

There has been an ongoing discussion for quite some time about whether automated tests are or are not a good idea. This is especially common when talking about testing web applications, where the argument is often made that it is faster to simply hack in a solution and immediately refresh the browser.

Rather than trying to argue that tests are worthwhile and save time in the long run, I would rather take a look at what it means to start with no tests and build from there, following the progression of testing as it grows from nothing.

Without further ado, let’s take a look at a small, simple application which computes a couple of statistical values from a set of sample numbers. Following are the source files for our stats app.
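A plausible reconstruction of the app’s logic; the element ids and function names are assumptions:

```javascript
// stats.js -- a hedged reconstruction, not the original source
function parseValues(input) {
    return input.split(',').map(parseFloat);
}

function mean(values) {
    return values.reduce(function (sum, value) {
        return sum + value;
    }, 0) / values.length;
}

function standardDeviation(values) {
    const sampleMean = mean(values);
    const squaredDiffs = values.map(function (value) {
        return Math.pow(value - sampleMean, 2);
    });

    return Math.sqrt(mean(squaredDiffs));
}

// ui wiring; element ids are assumptions
function computeStats() {
    const values = parseValues(document.getElementById('values').value);

    document.getElementById('mean').innerText = String(mean(values));
    document.getElementById('stdev').innerText = String(standardDeviation(values));
}
```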



Now, this application is simple enough we could easily write it and refresh the browser to test it. We can just add a few numbers to the input field and then get results back. The hard part, from there, is to perform the computation by hand and then verify the values we get back.

[Image: example of manual test output]



Doing several of these tests should be sufficient to prove we are seeing the results we expect. This gives us a reasonable first pass at whether our application is working as it should or not. Beyond testing successful cases, we can try things like putting in numbers and letters, leaving the field blank or adding unicode or other strange input. In each of these cases we will get NaN as a result. Let’s simply accept the NaN result as an expected value and codify that as part of our test suite. My test script contains the following values:

Input               Mean   Standard Deviation

Success Cases
1, 2, 1, 2          1.5    0.5
1, 2, 3, 1, 2, 3    2      0.816496580927726

Failure Cases
(blank)             NaN    NaN
a, b                NaN    NaN

Obviously, this is not a complete test of the system, but I think it’s enough to show the kinds of tests we might throw at our application. Now that we have explored our application, we have a simple test script we can follow each time we modify our code.

This is, of course, a setup. Each time we modify our code, we will have to copy and paste each input and manually check the output to ensure all previous behavior is preserved, all while adding to our script. This is the way manual QA is typically done, and it takes a long time.

Because we are programmers, we can do better. Let’s, instead, automate some of this and see if we can speed up the testing process a little. Below is the source code for a simple test behavior for our single-screen application.
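A sketch of that test behavior, reusing the assumed element ids from the reconstruction above:

```javascript
// a single-case test helper, scripted against the UI from the console
function testStats(input, expectedMean, expectedStdev) {
    document.getElementById('values').value = input;
    computeStats();

    const meanOk = document.getElementById('mean').innerText === String(expectedMean);
    const stdevOk = document.getElementById('stdev').innerText === String(expectedStdev);

    console.log((meanOk && stdevOk ? 'PASSED' : 'FAILED') + ': "' + input + '"');
}

// example: testStats('1, 2, 1, 2', 1.5, 0.5);
```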

With our new test runner, we can simply call the function and pass in our test values to verify the behavior of the application. This small step helps us to speed the process of checking the behaviors we already have tests for and we only need to explore the app for any new functionality we have added.

Once the app is loaded into our browser, we can open the console and start running test cases against the UI with a small amount of effort. The output would look something like the image below.

[Image: single-run tests scripted against the UI]



We can see this adds a certain amount of value to our development effort by automating away the “copy, paste, click, check” tests we would be doing again and again to ensure our app continued to work the way we wanted it. Of course, there is still a fair amount of manual work we are doing to type or paste our values into the console. Fortunately we have a testing API ready to use for more scripting. Let’s extend our API a little bit and add some test cases.
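A sketch of the extended version, folding the test value table into data:

```javascript
// the test value table, captured as data, plus a suite runner
const testCases = [
    ['1, 2, 1, 2', 1.5, 0.5],
    ['1, 2, 3, 1, 2, 3', 2, 0.816496580927726],
    ['', NaN, NaN],
    ['a, b', NaN, NaN]
];

function runSuite() {
    testCases.forEach(function (testCase) {
        testStats.apply(null, testCase);
    });
}

runSuite();
```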

Now, this is the kind of automated power we can get from being programmers. We have combined our test value table and our UI tests into a single script which will allow us to simply run the test suite over and over. We are still close enough to the original edit/refresh cycle that our development feels fast, but we also have test suites we can run without having to constantly refer back to our test value table.

As we write new code, we can guarantee in milliseconds whether our app is still working as designed or not. Moreover, we are able to perform exploratory tests on the app and add the new-found results to our test suite, ensuring our app is quick to test from one run to the next. Let’s have a look at running the test suite from the console.

[Image: test suite run from the browser console]



Being able to rerun our tests from the console helps to speed the manual write/refresh/check loop significantly. Of course, this is also no longer just a manual process. We have started relying on automated tests to speed our job.

This is exactly where I expected we would end up. Although this is a far cry from using a full framework to test our code, we can see how this walks us much closer to the act of writing automated tests to remove manual hurdles from our development process.

With this simple framework, it would even be possible to anticipate the results we would want and code them into the tests before writing our code so we can simply modify the code, refresh and run our tests to see if the new results appear as we expected. This kind of behavior would allow us to prove we are inserting required functionality into our programs as we work.

Though I don’t expect this single post to convince anyone they should completely change the way they develop, hopefully this gives anyone who reads it something to think about when they start working on their next project, or when they go back to revisit existing code. In the next post, we’ll take a look at separating the UI and business logic so we can test the logic in greater depth. Until then, go build software that makes the world awesome!


Domain Modeling For Better Contracts

In the post about enforcing endpoint contracts, we took a look at some of the basic types available in Signet. Today we are going to talk about how to add more information to your types by creating your own data types.

Last week we took a look at how to build types as sets with characteristic functions. This week we will apply that information in order to add extra information to our types.

By this point I’m certain there are plenty of people who are thinking I’ve gone completely off the rails. Javascript, after all, is a dynamically typed language. Don’t burden yourself with all of this type stuff and just write some code!

Although this is true, most people view types as a constraint which only causes pain. That might be fair if you are coming from a language like Java, which imposes lots of artificial constraints around type creation and, after all is said and done, still leaves you with types that are weak and restrictive.

On the other hand, if we consider types as a way to add a layer of correctness checking and a tool for communicating with others, types become less a restriction and more a tool we can use to make our programs better. Good types will make a program transparent and predictable. These are traits we definitely want in our programs.

Just as a refresher, let’s have a look at where we left off with our purchase API from the enforcing endpoints blog post. This way we have a common position to understand where we started and where we’re going.
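Here is a hedged reconstruction of that API with nothing but basic types; the function names are stand-ins:

```javascript
const signet = require('signet')();

// hedged reconstructions of the underlying behaviors
function getSubtotalImpl(purchase) {
    return purchase.price * purchase.quantity;
}

function getTaxAmountImpl(tax) {
    return function (subtotal) {
        return subtotal * tax;
    };
}

function getTotalImpl(tax) {
    return function (purchases) {
        const subtotal = purchases.reduce(function (sum, purchase) {
            return sum + getSubtotalImpl(purchase);
        }, 0);

        return subtotal + getTaxAmountImpl(tax)(subtotal);
    };
}

// the API, enforced with nothing but basic types
const getSubtotal = signet.enforce('object => number', getSubtotalImpl);
const getTaxAmount = signet.enforce('number => number => number', getTaxAmountImpl);
const getTotal = signet.enforce('number => array => number', getTotalImpl);
```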

Now that we have our API defined with regard to basic types, we can start to ask more meaningful questions. Instead of asking things like “what does this function do,” we can ask directed questions to inform our programming better: What kind of numbers are they? What is in the array? What kind of argument must the function take?

The last two questions are easiest to answer since we don’t have to look any farther than higher-kinded types. This sounds scary the first time you hear it; I had no clue what a higher-kinded type was the first time I heard the term. Fortunately, many of you may already be familiar with the idea even if you don’t know the word: Java and C# both expose a flavor of it through generics.

Higher-Kinded Types and You

First and foremost, let’s discuss what a higher-kinded type actually is. (It’s a type.) Once we have a better grasp on that, we will use it in code to make everything a little more clear.

A higher-kinded type is simply a type which takes a type as an argument and returns a type. I know, that sounds weird. How does a number take a string and return a type? I asked the same thing.

It turns out, however, that it’s not nearly as foreign as it might seem. One very common type we rarely think about in Javascript which could easily be handled as a higher-kinded type is an array. An array is, itself, a type, but it contains values which are also typed. This means, if we had a language to express it, we could declare an array which contains only a single type.

As it turns out, there are potentially infinite different types which are, or could be, higher-kinded. In this post we are going to look at just two: array and function. With the type signature language available with signet, we can explicitly declare an array type as needed. This means we can do things like the following.
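For example, assuming Signet’s isTypeOf behaves as documented:

```javascript
signet.isTypeOf('array<number>')([1, 2, 3]);      // true
signet.isTypeOf('array<number>')([1, 'two', 3]); // false
```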

We can see both of the tested arrays are completely valid Javascript arrays, but the second is not an array of exclusively numbers. There are ways we could create an array which would support numbers and strings, but that’s beyond the scope of this post.

Just like we can declare information about arrays, we can also say something about the expectations around a function. Instead of simply saying a value is “function,” we can actually say a value is a “function which takes a number.” In much the same way we declare the contained type for arrays, we can parameterize the function type with what it expects (something like “function<number>”).

Now that we have an expanded type language to draw upon, let’s update our API and clarify the communication of our domain model.
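A sketch of the update, leaving the implementations above untouched; the array type now declares its contents:

```javascript
const getSubtotal = signet.enforce('object => number', getSubtotalImpl);
const getTaxAmount = signet.enforce('number => number => number', getTaxAmountImpl);
const getTotal = signet.enforce('number => array<object> => number', getTotalImpl);
```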

Subtyping With Characteristics

Now we just have the ‘number’ type scattered everywhere throughout our code. Although this is better than nothing, it would be SO MUCH better if we actually knew something about those numbers. What do they mean? How are they used? What are the constraints?

It turns out we have just the thing to remedy this pain, it’s called characteristic functions. As we know from our earlier discussions on characteristics, we can add richness to our type system through set-describing predicate functions. (Protip: all predicates describe sets)

Before we dive into creating new types willy-nilly, let’s take a moment to account for the different number types we have. By properly identifying the actual domain language we care about, we can create better types which will allow us to clearly describe our application to people who might know nothing about it.

Ultimately, we care about tax, price, amount of tax to pay (tax amount) and purchase total. If we were to simplify this list a bit, we can identify a couple of distinct bits of information.

First let’s consider tax. Tax is a percentage amount. Since, where I live, taxes are always greater than or equal to 0%, but always less than 100%, I am going to say tax is a percent value which will always fall between 0 and 1. For example, in San Diego, sales tax is currently around 8% or 0.08.

Now, we can take a look at price, tax amount and purchase total. Each of these is related to an amount our customer will be paying. This means we can roll these all into some aspect of price. We will say a price value will be greater than or equal to 0. This describes our data pretty accurately for the moment, so let’s go with that.

With our base types sorted out in a way we can jump off from, we can start building characteristics. By clearly defining our characteristics, we give our new types programmatic meaning. Let’s see what our basic characteristic functions will look like for price and percent.
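A minimal sketch of those characteristic functions:

```javascript
// prices are never negative
function isPrice(value) {
    return value >= 0;
}

// tax is a percent value between 0 and 1, inclusive
function isPercent(value) {
    return value >= 0 && value <= 1;
}
```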

The other piece of this puzzle is that we need to register our types with Signet. Fortunately, this is a simple process. We know that each of these types is actually a number, so we can simply use the subtype function and declare these two functions as new types, inheriting from number. This is also why we didn’t need to test each value to see if it is a number; our subtyping guarantees we only ever verify numbers.
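Assuming Signet’s subtype registration API, that looks something like:

```javascript
signet.subtype('number')('price', isPrice);
signet.subtype('number')('percent', isPercent);
```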

We can use our price type to create our other two types by simply aliasing them and using the price definition to ensure our data constraints are clear.
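Assuming Signet’s alias mechanism, something like:

```javascript
signet.alias('taxAmount', 'price');
signet.alias('total', 'price');
```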

Let’s have a look at our updated API and see how our types are coming along!
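Again leaving the implementations untouched, a sketch of the updated signatures, now speaking the domain language:

```javascript
const getSubtotal = signet.enforce('object => price', getSubtotalImpl);
const getTaxAmount = signet.enforce('percent => price => taxAmount', getTaxAmountImpl);
const getTotal = signet.enforce('percent => array<object> => total', getTotalImpl);
```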

Duck Typing our Object

At this point, our API is pretty clear, but there is still one last type which just doesn’t quite convey the information we want to know. Our array of purchases is still described, simply, as an array of objects. This could be much better, if only there were a way to check it.

As it turns out, the Go language has popularized the notion of object similarity through duck typing and that is precisely what we are going to do here. If we know enough information, we can tell whether our object satisfies the Liskov substitution principle, and can be used in place of our intended object.

Signet provides a means to perform duck typing as well, so we don’t have to build our characteristic function from the ground up every time, because that could end up being a LOT of repeated code. Let’s build a duck typing characteristic and finish up our API types.
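A sketch using Signet’s duck typing support; the purchase shape here is an assumption:

```javascript
// a duck type gives our purchase objects a name...
signet.defineDuckType('purchase', {
    price: 'price',
    quantity: 'int'
});

// ...and the final signature can now describe the whole domain
const getTotal = signet.enforce('percent => array<purchase> => total', getTotalImpl);
```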

Now we have a name for our purchase object type, which means we can easily check whether our array of purchases actually adheres to our expectations. Plus this will provide a way for others to understand what we intended when we wrote the code, making it much easier to write new code against the existing API.

Wrapping Things Up

Although this just scratches the surface of using types in your program, hopefully this exercise helps you communicate intent and define a clear domain model. By taking core types we already know and applying a small amount of predicate logic, we surface a new way to talk about our program and the data we use.

Instead of simply using old code as a reference for what it does, add a little annotation, a little bit of logic and get a lot more bang for your buck. In the end, types don’t make everything correct all the time, but they do a lot to make you and others like you a lot more awesome!


Types, Sets and Characteristics

A couple weeks ago, we looked at using Signet and some of the core types to add type information to function calls. Although it is handy to have a variety of base types available to provide signatures for your functions, sometimes you want more control and finer-grained behavior.

At the most foundational level, applied types can be viewed as sets of values. This means, for any type, we can easily construct a set which will describe that type. For instance, the type ‘string’ can be written as the set of all values which are strings. Although this may seem like a trivial way to perform a conversion of a type to a set, it gives us a way to start rethinking the way we interact with type information.

Sets as Types

We can, somewhat informally, say that sets are types. Although this doesn’t capture the nuance of types, it allows us to capture a lot of power in a simple idea. We looked at defining the string type as a set of values. What this really means is that strings are a certain set of values within the set of all assignable values.

If we begin our sets by considering all values assignable and available within Javascript, we can refer to that set as our “universe.” Within that universe, we could choose a variety of different sets, but regardless of which set we choose, the new set will be contained completely within the original universe.

Using this universal definition, we can consider our strings again and think about how we might describe our set of all strings. First of all, we can ask if a value is contained within our universe. A good example of a value which is distinctly NOT in the Javascript universe is 1000! (1000 factorial). Although this is an integer which actually exists, Javascript will simply evaluate it as infinity. This is not something we will need to test for; it is simply an indication of an upper bound in Javascript.

We could, however, define a set of numbers we declare as our domain set. We can call this an explicit set definition. In turn, we can define a set by way of excluding items which are not in the set. This inverse set can, equally, be declared explicitly, or we can define a function which will simply tell us whether something is in the set or not. A set created by this implicit method can be referred to as an induced set.

Let’s take a look at a meaningful question we could ask: is a value a number? This means we are going to call a function which accepts a value of any type (*) and returns a boolean. This kind of function is called a characteristic function.
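A minimal characteristic function for numbers looks like this:

```javascript
// induces the set of all values which are Javascript numbers
function isNumber(value) {
    return typeof value === 'number';
}

isNumber(42);   // true
isNumber('42'); // false
```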

Although this function, by itself, is not a big step away from what we already know, it lays the groundwork for defining a richer set of types we can interact with. From the function isNumber, we get an induced set of all values which are numbers, rather than defining the type set explicitly.

Propositions and Predicates

It has been shown in the academic community that propositions are types. What this means is we can actually consider propositions such as A and B in the expression A ∨ B as types unto themselves.

Any proposition can either resolve to a theorem (⊤) or an antitheorem (⊥) which roughly equates to the idea of true or false. In other words, we can ask a question and the answer will either be “yes” or “no.” Although this seems non-obvious, at its face, this is the foundation we will use to construct a richer set of types within Javascript.

We have already seen type construction where we use a predicate to manage inclusion in a type set. We are stating that a value α must be in the set of numbers if α is a number. Even stronger than that, we can say, since our set contains ONLY values which are numbers, that if α is in the set of numbers it has the number type. This relationship between sets and type implementations is important for capturing greater amounts of information about a value as we construct subsets from sets we have defined. Let’s have a look at the logical notation for these predicates, so we can tie it back to our characteristic functions.

Njs = the set of all values which are Javascript numbers
A: α is a Javascript number
B: α ∈ Njs

A → B
B → A

Of course, in our implementation we really only worry about the second relation, i.e. that if a value is in the set of values which conform to the number type, the value must also conform to the number type. Given the definition of B, α ∈ Njs, we can actually conclude that the first relation is true given the definition of Njs.

We can actually reformulate this to express the type-set relation more generally. If we simply replace our specific number set with a set of any type Τ we get a new, very useful formulation we can use to extend our type reach well beyond the specifics of the language.

Τjs = the set of all values expressible in Javascript of type τ
A: ατ is a Javascript value of type τ
B: ατ ∈ Τjs

A → B
B → A

That’s a lot of symbols, words and relations. What it really means is we can identify and define any arbitrary type logically and, in turn, define a set containing all values of that type, which will induce an “if and only if” (iff) relationship. Let’s take a look at how we might use this to implement our own type.

Defining a New Javascript Type

Clearly we won’t be able to build our type into the core language without getting on TC39, issuing a new standard and waiting for several years while everyone adopts it, but we can induce our type through a new predicate function. Let’s suppose we want to define a new type, Integer. We could express our type in the following way:

Intjs = the set of all values expressible in Javascript of type number which are integers
αint ∈ Intjs

With this, we can define a function expressing this relationship, which we can use to verify whether a value is in our set Intjs or not. With regard to the relation between types, expressible values in Javascript and our integer set, we can guarantee the stability of our type and the correctness of our verification.
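A direct sketch of that function:

```javascript
// induces the set of all Javascript integers
function isInt(value) {
    return typeof value === 'number' && Math.floor(value) === value;
}
```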

Although this function is sufficient for verifying whether a value is an integer, we are actually duplicating our efforts. Moreover, it lacks a certain expressiveness which we might like to see. Let’s use our original isNumber function to say a little more about the meaning of our int type.
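Rewritten in terms of isNumber:

```javascript
// the same check, now expressed through the relationship with numbers
function isInt(value) {
    return isNumber(value) && Math.floor(value) === value;
}
```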

This new function performs the same check as the original, but it reflects a deeper relationship between our number set, Njs, and our integer set, Intjs. In other words, what we can see expressed here is the typical inheritance property of the is-a relationship.

The Is-A Relationship of Types

As is true for objects in classical object oriented programming, types can also have an inheritance relationship where one type is a subtype of another. This is what we mean by is-a relationship. We can say an integer is a number, or a name is a string. Although an integer can be a type in its own right, we know the number type is the foundation type in Javascript for any numeric representation. This means, for any function which requires a number, an integer is an acceptable value.

Our isInt function demonstrates the is-a relationship by using the number set definition as the first requirement of our check for set inclusion. Let’s continue the chain and create a characteristic function to define our natural numbers. Our natural number set will be a strict subset of our integer set.
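Following the same pattern:

```javascript
// a natural number is an integer which is at least zero
function isNatural(value) {
    return isInt(value) && value >= 0;
}
```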

Now we can see that a natural number is an integer which is a number. This, of course, is similar to OO subtyping with regard to relationship, but is compositional in nature. In fact we can actually describe this type relationship as a relationship of sets, like so:

Naturaljs ⊂ Intjs ⊂ Numberjs

With the repeated behavior of including a function call from the superset, we can start looking for a way to uniformly describe our sets and their relationships. Let’s create a new function, subtype, to help us create set relationships in order to streamline the process of defining type relationships.
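A minimal sketch of subtype as a composition of characteristic functions:

```javascript
// composes a superset's characteristic function with a new restriction
function subtype(isSuperset) {
    return function (restriction) {
        return function (value) {
            return isSuperset(value) && restriction(value);
        };
    };
}
```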

Subtype allows us to define our types with functional composition and define our new characteristics with the assumption that we are already working from within a specific type. Let’s rewrite our isNatural check using subtype.
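Rewriting isNatural with subtype:

```javascript
const isNatural = subtype(isInt)(function (value) {
    return value >= 0;
});
```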

Now the body of our characteristic function is expressed with an implicit relation to the superset of natural numbers, integers. This kind of higher-order function usage for expressing set relations is extremely powerful for defining and describing the value types we use in our development.

Wrapping Up

This was a somewhat dense tour of how we can construct types in Javascript, so don’t worry if it takes a little while to pull the pieces together. The important take-away is that we can construct our own types with meaningful names and clear relationships in order to better understand the way our programs work.

At the end of the day, we are human, so expecting us to actively deal in generalized abstractions such as strings and numbers may not be a reasonable request. Instead, we can reclaim the reins and define our own type language which speaks to future developers in the language of our intent. Go make types and make your programs better!


Currying Matters: Clarifying Contracts

Function contracts are a tricky thing. Ultimately, what they define is an API for your application, but they also define how you write your internal behaviors. This balancing act can either lead to clear, well-written code, or it can quickly devolve into a ball of tangled string.

Walking the clean code, clean API line can seem a daunting task. It’s common to hear people say this is precisely what Classical OOD is built for: by maintaining state, methods can accept partial requirements, allowing the developer to build behavior over time. I argue this kind of state management leads to extra cognitive load, as the developer is required to keep track of the managed state. With currying and clearly exposed intent, incremental behavior building becomes a trivial task done in a reliable set of steps. It also leads to better program design and behavior determinism, making testing much easier to reason about.

A Small Example

Let’s have a look at a slice function as an example. It’s common to want to call slice on arguments objects and arrays alike, which means we have to vary our behavior for each kind of input. Instead of showing examples of the different usages, I’m going to jump to a general slice implementation similar to what was written in JFP v2.x.
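A sketch in the spirit of that implementation:

```javascript
// a general slice: start is required, end is optional
function slice(start, values, end) {
    const endIndex = pickEnd(end, values.length);
    return Array.prototype.slice.call(values, start, endIndex);
}
```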

This function makes use of a convenience function which we will continue to use throughout this post. Here is the implementation of pickEnd.
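A minimal sketch of pickEnd:

```javascript
// choose the explicit end index when provided, defaulting to length
function pickEnd(end, length) {
    return typeof end === 'undefined' ? length : end;
}
```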

Let’s have a look at how we might accomplish a few simple tasks using our original slice function. We will create a function which will slice an arguments object or copy an array, a function which will drop the first three elements in an array and, finally, a function which will capture the elements from an array from indices 1 to 3.
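Sketched with bind and a wrapper function, those three tasks come out like this:

```javascript
// partial application via bind works while values is the trailing argument...
const argumentsToArray = slice.bind(null, 0);
const dropFirstThree = slice.bind(null, 3);

// ...but the end index forces a full wrapper function
function takeFrom1to3(values) {
    return slice(1, values, 3);
}
```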

As we can see, using our slice function forces us to either bind arguments, or actually wrap the entire behavior in a function. This kind of inconsistency makes our slice implementation difficult to use. There must be a better way!

Currying The Slice

The application inconsistencies in our new code leads me to believe we need a better solution. When dealing with a single function API, currying can, often, be illuminating regarding argument order and function implementation. At the very least we might land on a first uniform, stable application. Let’s have a look at what currying is and how we can apply it to our existing function.

Formal currying is defined as converting a function of multiple arguments into a series of one-argument functions. This means currying a function of three arguments would go a little like this:
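For example, with a three-argument add:

```javascript
// a function of three arguments...
function add3(a, b, c) {
    return a + b + c;
}

// ...curried into a series of one-argument functions
function curriedAdd3(a) {
    return function (b) {
        return function (c) {
            return a + b + c;
        };
    };
}

curriedAdd3(1)(2)(3); // 6
```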

If we apply this formal definition to our function, it will produce a new series of functions which are called in order. This means each function progressively accumulates information about execution state without needing any external management system or object. Let’s take a look at a formal currying of slice.
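A formal currying of our slice sketch, keeping the original argument order:

```javascript
function slice(start) {
    return function (values) {
        return function (end) {
            return Array.prototype.slice.call(values, start, pickEnd(end, values.length));
        };
    };
}
```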

If we use this new definition of slice, we will need to revise the implementation of our functions. Let’s dig in and see how currying makes application more uniform.
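The revised derivative functions:

```javascript
// every application now follows the same shape
function argumentsToArray(values) {
    return slice(0)(values)();
}

function dropFirstThree(values) {
    return slice(3)(values)();
}

function takeFrom1to3(values) {
    return slice(1)(values)(3);
}
```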

Although we have to wrap each new function in an anonymous wrapper, we now have complete uniformity in how we apply slice. With this uniformity, we can now, safely, reorganize code and guarantee code depending on our API won’t break.

Three Arg Monte

Since each of our derivative functions only take an array as an argument, we can fiddle with the inner workings so long as we don’t alter the output. Suppose we swap the order of the output functions, capturing values at different stages of execution. It is, in fact, no different than if we had reordered the parameter list in the original function, but without the pesky .bind() bit.

Let’s take our curried function and move our end parameter up the chain, next to the start parameter. This means our function series will always take a start and end value, which makes our values parameter the last argument. Let’s see the resulting reorganization.
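The reordered slice, with the derivative functions rebuilt on top of it:

```javascript
function slice(start) {
    return function (end) {
        return function (values) {
            return Array.prototype.slice.call(values, start, pickEnd(end, values.length));
        };
    };
}

// no anonymous wrappers needed any longer
const argumentsToArray = slice(0)();
const dropFirstThree = slice(3)();
const takeFrom1to3 = slice(1)(3);
```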

With this new parameter order, we actually move the values parameter to the correct position; in other words, we can apply all of our indices first and take the values argument last, leaving us in a position which is correct for creating a variety of novel behaviors directly.

Our application of slice has remained uniform, while allowing us to exclude the anonymous function wrapper for all three applications. This is definitely closer to what we really meant to say at the beginning. If only we could get rid of that required second call.

Collapsing The Calls

Now that we have found a parameter order which serves us the best, let’s get rid of the extra function call. It is useful for takeFrom1to3, but it actually makes the application of slice for argumentsToArray and dropFirstThree unnecessarily complicated since we call a function with no argument. We want to eliminate confusion where possible.

Since curried functions can be expanded from multiple argument functions, what’s to stop us from reversing the process? Moreover, it is reasonable to collapse only the parameters we want at a given level. Let’s reverse the currying process for start and end and see what we get.
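Collapsing start and end into a single call:

```javascript
// start is required, end is optional; values stays lazily applied
function slice(start, end) {
    return function (values) {
        return Array.prototype.slice.call(values, start, pickEnd(end, values.length));
    };
}

const argumentsToArray = slice(0);
const dropFirstThree = slice(3);
const takeFrom1to3 = slice(1, 3);
```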

Slice has been collapsed back into a function which captures our indices early and applies them, lazily, to the final argument as we need execution to complete. This means we can actually get the uniformity we saw in earlier reorganizations, with the API sanity of a well-defined function, which happens to optionally take an extra argument after a start index. This is probably best viewed in an example.
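For example:

```javascript
slice(1)([1, 2, 3, 4, 5]);    // [2, 3, 4, 5]
slice(1, 3)([1, 2, 3, 4, 5]); // [2, 3]

dropFirstThree([1, 2, 3, 4, 5]); // [4, 5]
takeFrom1to3([1, 2, 3, 4, 5]);   // [2, 3]
```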

Throughout the process, all three of our derivative functions have only ever taken a values argument, but the application of our deeper-level function has brought us to a point where the contract is most sensible for flexible application and reuse. Better yet, each of the slice applications expresses the intent more closely, since only the indices we intend to use appear at application time.

API Clarity, At Last

Someone said, recently, on Twitter that what most people call currying is actually partial application. As it turns out this is only partially true. The line between currying and partial application is so blurred, I am inclined to argue that partial application is merely a special case of the more general form of function currying.

Moreover, currying is not a one-way street. Instead, it is a tool to help us identify better ways to express our programs through expanding and collapsing arguments. By better understanding how currying works, we can actually experiment with different configurations of our functions, ideally, without overhauling contracts, types and the like.

When used with intent and care, currying enables us to create functions which have sane, meaningful and expressive contracts while also providing the flexibility to fluidly apply a general-purpose function in a variety of different situations. In other words, if your function contract stinks, maybe it’s time to apply a little currying and make your code awesome!


Enforcing Endpoints: Types and Signet

What a ride! I spent the last month preparing a talk for and presenting at Lambdaconf. If you haven’t been, you should. Of the conferences and coding-related events I have been to, this was probably the coolest, toughest, mind-bendiest one. It was awesome. I learned a lot about myself while I was there and a lot about the world beyond the horizon of what we consider “conventional production development.” More than that, it’s all coming to a developer shop near you sooner than you think.

You should go.

I have been talking about types in Javascript lately and this post continues the tradition. As I have been working on, and with, Signet, it becomes more and more obvious why types and signatures are fantastic in simple, raw Javascript code.

There is a lot of discussion about languages which compile to Javascript and support types, including Elm, TypeScript and PureScript, though there are more out there. Although these languages may bring something interesting to the table, I feel they are largely akin to writing a language which compiles to C. If there is a flaw in the underlying language, compiling to the flawed language without actually addressing the problem is a band-aid, not a fix. We actually need real types in real Javascript.

I am not a die-hard type convert who wants everything to be typed to a ridiculous degree. Instead, I actually believe that blended dynamic and static typing can lead to an amazing, joyful programming experience. Imagine a world where you can bolt down the contract on things the world touches while leaving the internals to move fluidly through refactorings without having to worry about whether you’re violating a contract.

Programming in a Dynamic World

Let’s imagine you have a bit of code which takes a single purchase record and computes the final total for that record including tax. The code might look a little like what I have outlined below:
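
Something along these lines, where the record shape, the function names and the 8.25% tax rate are all illustrative stand-ins:

```javascript
// Curried multiplier; applying the tax rate yields a one-argument function.
function multiplyBy(multiplicand) {
    return function (value) {
        return value * multiplicand;
    };
}

var applyTax = multiplyBy(1.0825);

function computeSubtotal(purchase) {
    return purchase.price * purchase.quantity;
}

function computeTotal(purchase) {
    return applyTax(computeSubtotal(purchase));
}
```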

We’re not going to dive into why some of the functions are curried; let’s just accept that’s the way they are for this post.

Everything in this code has a clear name and, ultimately, speaks to the intent of the behavior. I’ll assume you are in an agile shop where your code is not thoroughly documented. Instead, you are relying on tribal knowledge to ensure people understand what this code does and how to interact with it.

The likelihood is someone is going to do it wrong.

This brings us to the way Javascript behaves. Javascript will, with all the best intent in mind, try to do the “right” thing. This means passing numbers instead of objects, strings instead of numbers, or even NaN could all result in a running, though wholly incorrect, program.
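
Taking the hypothetical sketch from above, every one of these calls runs to completion, and every one of them is wrong:

```javascript
computeTotal(42);                             // NaN -- a number has no price or quantity
computeTotal({ price: 'free', quantity: 2 }); // NaN -- the string coerces and fails
computeTotal({ cost: 9.99, count: 2 });       // NaN -- wrong property names, no complaint
computeTotal(NaN);                            // NaN -- garbage in, garbage out
```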

The internals of this small module might not need to be protected since anyone working in the file will be compelled to read the code and make sense of the words on the screen, but people who have never seen this code, and perhaps never will, still need to understand what correctness means. Do they know the functions are curried? No. Do they know the names of the variables? Probably not.

The fluid awesomeness of Javascript’s dynamic nature just bit us. Hard.

If You Liked It You Should’a Put A Type On It

One of the greatest failings of assuming clear names will make things manageable is that the names are rarely, if ever, seen outside of the code they are used in. Some editors, like WebStorm and Visual Studio Code, will pick up the names within modules, provided the programmer is working with node imports and everything is properly exported, named and referenced.

Even TypeScript can’t save us from this kind of problem since the types are only supported at transpile time, so type erasure eats our one standing bastion of truth. What if we added a little signature and type help to tell others what we are expecting and what they can expect in return?

This is where Signet comes in. By using a modified Hindley-Milner type notation we can actually read what the API does and how we can interact with it. On top of that, we get real, fast type checking at runtime, which means type erasure is a thing of the past. Let’s have a look at our API definition with type enforcement.
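
Assuming signet’s enforce API, a typed version of our sketch might read like this; the signature strings carry the modified Hindley-Milner notation:

```javascript
var signet = require('signet')();

// 'number => number => number' reads: given a number, return a function
// which takes a number and returns a number.
var multiplyBy = signet.enforce('number => number => number',
    function multiplyBy(multiplicand) {
        return function (value) {
            return value * multiplicand;
        };
    });

var applyTax = multiplyBy(1.0825); // illustrative tax rate, as before

var computeTotal = signet.enforce('object => number',
    function computeTotal(purchase) {
        return applyTax(purchase.price * purchase.quantity);
    });
```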

The signature annotation not only tells us the kinds of values our function expects, it actually tells us that after the first execution we can expect a function back again. This means we can gain a tremendous amount of insight about our function without knowing anything about the internal workings of the function. Instead of having a true black box, we now have a black box with instructions on the side telling us how to use the thing. We don’t know how it is implemented, but we know it works the same way every time.

With this new enforcement, we get the following behavior:
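
Roughly, and with the caveat that the error messages below are paraphrased rather than quoted from signet:

```javascript
computeTotal({ price: 9.99, quantity: 2 }); // 21.62835 -- a real number, as promised
computeTotal(42);  // throws a TypeError: expected object but got number
multiplyBy('2');   // throws a TypeError: expected number but got string
```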

Closing up Shop

In the end, the challenge in any programming project is not about whether you can write simple, maintainable code, or whether you should use types. Really, it is about making sure you are clearly communicating with the people who rely on your code to do what it says on the label. Within the code itself, that means being clear, obvious and intentional. From the outside, any code which is accessible to others, including your future self, should declare what it does, and we should make use of every tool we can to simplify the process of gaining an understanding of what to expect.

By signing and enforcing your API, you get all the benefits of a type checker, plus signature metadata which means you don’t have to go rifling through code that is not immediately related to the task at hand. Meanwhile, under the covers, we can rely on patterns, good naming and clean code to ensure our code continues to deliver value and convey meaning. Now, go add some types to your code and make life better for your team!

Static Methods on Javascript Objects

I’m a big proponent of unit testing. This means that any code I can test, I do. When I work in the browser, however, it becomes more challenging to effectively unit test all of the code I write without spinning up an instance of PhantomJS. On top of that, most of the code I write in the browser, now, uses Angular as the underlying framework, which means my requirements are even more restricted since the go-to testing environment for Angular is Karma, which uses PhantomJS to satisfy Angular’s dependency on a live DOM.

When we consider testing requirements along with the desire to share code between Node and client-side Javascript, it becomes critically important to decouple our core functionality from the framework and environment it runs within. Although some projects can benefit from Browserify and Webpack, it is equally common for developers to fight against the build step which happens before running everything in the browser.

I have spent a fair amount of time, off and on, trying to find a solution which would address all of these problems together. Ultimately, the answer came to me while working with Scala. In Scala, it is possible to define a class and an associated companion object. The object exposes functions as static methods on a namespace, while the class acts as an instantiable object which can be used in Classical OO applications.

A Basic Object

This inspired my thinking and I started looking at ways I could drop this same philosophy into Javascript. Ultimately, I landed on the concept of static functions on an object. In order to get some perspective on where this train of thought will take us, let’s take a look at a simple controller object like we might create in Angular.
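
A hypothetical controller along those lines, with invented names, might look like this:

```javascript
function QuoteController() {
    this.quotes = [];
    this.total = 0;
}

QuoteController.prototype = {
    // Mutates controller state and recomputes the total in one go.
    addQuote: function (quote) {
        this.quotes.push(quote);
        this.total = this.quotes.reduce(function (sum, quote) {
            return sum + quote.price;
        }, 0);
    }
};

angular.module('quoteApp').controller('QuoteController', QuoteController);
```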

This controller is actually pretty typical. There is a little bit of functionality which goes through and modifies the controller state. In order to properly test this behavior, we have to modify the controller state, then run each method and test that the mutation was correct. In a world where good functional practices are possible, this seems unnecessarily fiddly.

Moving to Static

If we rewrite this controller just a little bit, we can start separating behaviors and decouple the computational bits from the state mutation. This means the bulk of the work can be bundled up inside a pure function which is easy to test and think about. Once that is complete, the mutation behavior becomes trivial to test and reason about because it is merely setting a variable.
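
Sketching that refactoring against the hypothetical controller above:

```javascript
function QuoteController() {
    this.quotes = [];
    this.total = 0;
}

// Pure: takes an array, returns a number, touches no controller state.
QuoteController.sumPrices = function (quotes) {
    return quotes.reduce(function (sum, quote) {
        return sum + quote.price;
    }, 0);
};

QuoteController.prototype = {
    // The prototype method shrinks to a push and an assignment.
    addQuote: function (quote) {
        this.quotes.push(quote);
        this.total = QuoteController.sumPrices(this.quotes);
    }
};
```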

This change is important because we are actually modifying the base object, introducing functions which are not part of the instantiable object. Due to this, we can start moving the functionality out of the primary object scope altogether and, instead, only reveal the parts of our code which we really want to expose for use.

Full Extraction and Namespacing

Once we have extracted the base functionality, we can actually move all of our logic into a factory function. This will allow us to close over utility functions and reveal just the resulting composite functions which can be attached to our object just in time.
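
As a sketch, the factory closes over its utilities and returns only the composed behavior:

```javascript
function quoteBehaviorFactory() {
    // Closed-over utilities; nothing outside the factory can touch these.
    function getPrice(quote) {
        return quote.price;
    }

    function add(a, b) {
        return a + b;
    }

    // Only the composite behavior escapes.
    return {
        sumPrices: function (quotes) {
            return quotes.map(getPrice).reduce(add, 0);
        }
    };
}
```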

We can actually call our factory function within our tests to ensure the logic is correct; meanwhile, nothing is exposed to the outside world accidentally. This means we can attach these functions to the controller, if desired, just before we use them in our prototypal functions.

By attaching the functions as static methods, we give them a unique namespace, perform safe data hiding and ensure that our controller functions can refer to them without needing to be bound to a local context. This actually frees us quite a bit since much of the complexity related to Classical OO in Javascript is related to context switching depending on whether a function is called within the object scope or not.

I created a small utility to perform a no-frills merge of properties onto an object. This is only for illustration and would probably be best done with a reliable library like lodash or JFP. Let’s take a look at attaching our functions to our object for namespacing purposes.
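
With a hypothetical merge helper, the attachment might read like this:

```javascript
// No-frills property merge, for illustration only; lodash's assign
// would do the same job more robustly.
function merge(target, source) {
    Object.keys(source).forEach(function (key) {
        target[key] = source[key];
    });
    return target;
}

// Namespace the factory output as static methods on the controller.
merge(QuoteController, quoteBehaviorFactory());
```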

We can see here that the attachment of our functions is exclusively for the purpose of namespacing, much the same way we might see it in Scala or another functional language. Now that our functions are separated and declared within a factory function, we can work toward our second goal.

Externalizing Our Code

By separating our functionality, we can actually lift the entire factory function into a conditionally exported node module. On top of that, we get extra security because our factory function closes over our functions, making them completely inaccessible to tampering. This means, once our app is loaded in a browser, we get the same kind of separation from the world we normally see from an IIFE.

Moreover, because our code can be conditionally exported, we can require our behaviors directly and test them outside of the browser context. This means our tests will run faster and we don’t need to rely on as many, potentially flaky, integration tests. Here’s our final behavior code.
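
A sketch of that finished module, using the familiar conditional-export idiom:

```javascript
function quoteBehaviorFactory() {
    function getPrice(quote) {
        return quote.price;
    }

    function add(a, b) {
        return a + b;
    }

    return {
        sumPrices: function (quotes) {
            return quotes.map(getPrice).reduce(add, 0);
        }
    };
}

// In Node, export the factory for direct testing; in the browser,
// quoteBehaviorFactory simply lands in the enclosing scope.
if (typeof module !== 'undefined' && typeof module.exports !== 'undefined') {
    module.exports = quoteBehaviorFactory;
}
```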

Summary

In the face of a new world for Javascript, it is important to capture every advantage we can in order to make our code clean, efficient and stable. By splitting our behaviors out of the strict confines of a framework structure and pulling them into a file for easy testing, we simplify the testing story and make it easier to share behavior between the browser and the server.

We can use simple patterns to build well-defined, pure functions which give us a clear way to write and share code, while keeping it safe from attackers and stable for our users. The next time you find yourself working on a full-stack Javascript application, how are you going to split your app?
