Javascript: Require and Import Found Harmful

For the moment, let’s go ahead and make an assumption: automated tests (unit tests, integration tests, etc.) make code safer to write, update and change. Even when tests break, the failures tell us something about how the code was written and wired together as it runs. I will make the case for this assumption in another post at a later date. Nevertheless, I am going to rely on it throughout this post, so if you disagree, you might be better served to drop out now.

Now that the grumpy anti-testers are gone, let’s talk, just you and I.

I don’t actually believe that require or import — from the freshly minted ES module system — are inherently bad; somewhere, someone needs to be in charge of loading stuff from the filesystem, after all. Nevertheless, require and import tie us directly to the filesystem, which makes our code brittle and tightly coupled. This coupling makes all of the following things harder to accomplish:

  • Module Isolation
  • Extracting Dependencies
  • Moving Files
  • Testing
  • Creating Test Doubles
  • General Project Refactoring

The Setup

Let’s take a look at an example which will probably make things clearer:
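
Something along these lines is typical. The logging client here is a hypothetical stand-in for whatever external logging package a team might use; the shape of the module is what matters:

    // logger.js -- tied directly to the filesystem by its require statements
    'use strict';

    const fs = require('fs');
    const externalLogger = require('./utils/externalLogger'); // hypothetical logging client

    function logMessage(message) {
        const entry = new Date().toISOString() + ' ' + message + '\n';

        fs.appendFileSync('./logs/application.log', entry);
        externalLogger.send(entry);
    }

    module.exports = {
        logMessage: logMessage
    };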

To get a sense of what we have to do to isolate this code, let’s talk about a very popular library for introducing test doubles into Node tests: Mockery. This package manipulates the Node module cache, inserting a module into the runtime to break dependencies for a module. Particularly worrisome is the fact that you must copy the path for your module dependencies into your test, tightening the noose and deeply seating this dependence on the actual filesystem.

When we try to test this, we either have to use Mockery to jam fakes into the Node module cache or we have to interact directly with the external systems: the filesystem and the external logging system. I would lean — and have leaned — toward using Mockery, but it leads us down another dangerous road: what happens if the dependencies change location? The fakes are registered by path, so when a path in the test no longer matches the path being required, the substitution silently fails and the real module loads. Now we are interacting with the live system whether we want to or not.

This actually happened on a project I was on. At one point all of our tests were real unit tests: i.e. they tested only the local unit we cared about, but something moved, a module changed and all of a sudden we were interacting with real databases and cloud services. Our tests slowed to a crawl and we noticed unexpected spikes on systems which should have been relatively low-load.

Mind you, this is not an indictment of test tooling. Mockery is great at what it does. Instead, the tool highlights the pitfalls built into the system. I offer an alternative question: is there a better tool we could build which breaks the localized dependence on the filesystem altogether?

It’s worthwhile to consider a couple of design patterns which could lead us away from the filesystem and toward something which could fully decouple our code: Inversion of Control (a close relative of the Dependency Inversion principle of SOLID fame) and the Factory pattern.

Breaking it Down

To get a sense of how the factory pattern helps us, let’s isolate our modules and see what it looks like when we break all the pieces up.
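
Sticking with our hypothetical logger, a factory-based version might look like this; only the shape matters here:

    // loggerFactory.js -- no require statements anywhere in sight
    'use strict';

    function loggerFactory(fs, externalLogger) {
        function logMessage(message) {
            const entry = new Date().toISOString() + ' ' + message + '\n';

            fs.appendFileSync('./logs/application.log', entry);
            externalLogger.send(entry);
        }

        return {
            logMessage: logMessage
        };
    }

    module.exports = loggerFactory;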

With this refactoring, some really nice things happen: our abstractions are cleaner, our code becomes more declarative, and all of the explicit module references simply disappear. When modules no longer need to be concerned with the filesystem, moving files around and decoupling concerns becomes far easier. Of course, it’s unclear who is actually in charge of loading the files into memory…

Whether it be in your tests or in your production code, the ideal solution would be some sort of filesystem aware module which knows what name is associated with which module. The classic name for something like this is either a Dependency Injection (DI) system or an Inversion of Control (IoC) container.

My team has been using the Dject library to manage our dependencies. Dject abstracts away all of the filesystem concerns which allows us to write code exactly how we have it above. Here’s what the configuration would look like:
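
The option and method names below are illustrative rather than verbatim; the Dject README documents the real configuration. The idea is a single place which knows where modules live:

    // container.js -- the one place that knows about the filesystem
    // (option and method names are illustrative; see the Dject docs)
    const dject = require('dject');

    const container = dject.new({
        cwd: './src',
        modulePaths: [
            'app',
            'services',
            'util'
        ]
    });

    module.exports = container;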

Module Loading With Our Container

Now our main application file can load dependencies with a container and everything can be loosely referenced by name alone. If we only use our main module for loading core application modules, it allows us to isolate our entire application module structure!
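
A minimal main file might look like this (again, the method names are illustrative):

    // index.js -- the only application file that touches the container
    const container = require('./container');

    // resolved by registered name, not by file path
    const app = container.build('app');

    app.run();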

Containers, Tests and A Better Life

Let’s have a look at what a test might look like using the same application modules. A few things will jump out. First, faking system modules becomes a trivial affair. Since everything is declared by name, we don’t have to worry about the module cache. In the same vein, any of our application internals are also easy to fake. Since we don’t have to worry about file paths and file-relative references, simply reorganizing our files doesn’t impact our tests which continue to be valid and useful. Lastly, our module entry point location is also managed externally, so we don’t have to run around updating tests if the module under test moves. Who really wants to test whether the node file module loading system works in their application tests?
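
Here is a sketch of such a test. The registration calls are illustrative rather than the library’s exact API, but the shape is the point: no file paths, no module cache manipulation, just names:

    // loggerFactory.test.js -- a sketch; registration API names are illustrative
    const assert = require('assert');

    describe('logger', function () {
        let container;
        let fakeFs;

        beforeEach(function () {
            container = require('./testContainer')(); // hypothetical helper building a fresh container
            fakeFs = {
                lastEntry: null,
                appendFileSync: function (path, entry) { fakeFs.lastEntry = entry; }
            };

            container.register('fs', () => fakeFs);
            container.register('externalLogger', () => ({ send: () => null }));
        });

        it('writes log entries to disk', function () {
            const logger = container.build('logger');

            logger.logMessage('a test message');

            assert(fakeFs.lastEntry.includes('a test message'));
        });
    });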

Wrapping it All Up

With all of the overhead around filesystem management removed, it becomes much easier to think about what our code is doing in isolation. This means our application is far easier to manage and our tests are easier to write and maintain. Now, isn’t that really what we all want in the end?

For examples of full applications written using DI/IoC and Dject in specific, I encourage you to check out the code for JS Refactor (the application that birthed Dject in the first place) and Stubcontractor (a test helper for automatically generating fakes).


Coder Joy – Exploring Joyfulness In Creating Software

A couple weeks ago, I attended Agile Open Northwest. As with every other Agile Open conference I attend, there were plenty of eye-opening experiences. One experience which surprised me was actually a session I facilitated. I went in with a concept of where it would land, and I was dead flat wrong.

I anticipated my session about creating joy for coders would be small and filled primarily with technical folk who wanted to figure out how they could make their experience incrementally better while they worked daily at their company. Instead I had a large crowd of people who performed a variety of duties at their company and they all wanted to get a perspective on how to make things more joyful. This, in itself, brought me a ton of joy.

The Inspiration

Before I dive into the experiment I ran, I want to share a little background on how I arrived at the idea that there could be joy while coding. I know a lot of developers get a fair amount of joy out of creating something new. That first line of code in a fresh project can create a near euphoric experience for some developers. Later, as the code base ages, it can seem as though the joy has drained completely from the project and all that is left is drudgery and obligation to keep this project on life support.

I felt this very same thing with one of my own open source projects. It was shocking: I had started developing something I very much wanted, only to find myself a couple of years later feeling totally defeated every time I opened my editor to do a little bug fixing.

I decided I would make one last effort to combat this emotional sinkhole which used to be an exciting project for me. I went to work capturing information at the edges of my code, clarifying language which was used throughout and, ultimately, reformulating the domain language I would use to describe what I was doing. After some time and a little digital sweat, the project came back to life!

I realized I was actually more excited about my project NOW than I had been when I first started writing it. I was experiencing that same joy I used to feel when starting something afresh. Once I found this new dharma in code, I felt it only made sense to try to share it with my team at work. They loved the notion of joyfulness.

An Experiment In Joy

Having found this new, lasting sense of joy, I wanted to share it with the agile community. I didn’t have a deep plan, carefully constructed with layers of meaning and a precise process with which I could lead people to the promised land. I was just hoping someone would show up and be willing to share. With that in mind, here’s what my basic plan was:

  1. Drain the wound
  2. Make a Joy Wishlist
  3. Pave the Path
  4. Take Joy Back to the Team

I hoped this would be enough to get people talking about joy, what it meant and how they could claim it in their team. It seemed to pay off. Just having a list of pithy step names isn’t really enough to make sense of how to run this experiment in another environment, however. Let’s have a look at each step and what it means.

Running the experiment

Before starting the experiment, the facilitator must come in with certain assumptions and be willing to defuse any emotional outbursts that might arise. It’s important to note that defusing is not simply squashing someone’s feelings; it is accepting that they have a feeling and helping to reframe whatever might be leading them to feel that way.

In severe cases, the feelings that arise while experimenting might actually warrant deeper exploration before continuing to dig into building joy within your team. These explorations are okay, and should be welcomed, since they will help people start to understand where others on the team are coming from.

Drain The Wound

Consider how medicine works: if someone is sick because a wound has become infected, the first step to regaining health is to drain the wound and get the toxins out of the body. In much the same way, we will never become healthy as a team if we don’t drain the wounds which have accumulated throughout our lifetimes.

The exercise of draining the wound is simple: give people post-its, pens and a place to put everything and have them write down every last negative thing they feel about their work, the process or anything else that has them struggling with emotion coming into the experiment. It is important to discourage using names or pointing fingers since this simply spreads toxins around. We want to capture the toxic feelings and quarantine them.

The most important thing after performing this draining is to take all of the collected post-its and place them somewhere people can see them. Make sure to note: we are not ignoring the pain; we all know it is here and it has been exposed.

This is important.

If something new bubbles up during the course of the experiment, people should be welcome to continue adding to the quarantine. It’s not off-limits to update; we are just setting all of the infection in a place where it can’t hurt us.

Sometimes wounds need more than one draining. Don’t be afraid of coming back around and doing this multiple times. Always be vigilant and identify new wounds that may be growing. Emotions can be fragile and wounds can be inflicted over time. We’re human, after all.

Make a Joy Wishlist

Everyone has things they wish they had. More time off, more support, faster builds, cleaner code, better communication, etc. Encourage people to simply write these things down. As a facilitator, you can also log ideas as people generate them. The important thing is to avoid filtering. The goal is to identify all the things which would make people happy regardless of how big, small or outlandish they are.

One important note: don’t log or allow anything like “Bob needs to deliver stuff on time” or “make code suck less.” These kinds of negative statements are not things to aim for. Instead, encourage people to think about what it means for code to suck less. Perhaps what Bob really needs in order to deliver on time is support, so try to capture something like “help Bob when he feels overloaded.”

Pave the Path

Once people have come to a natural close on what their joy might look like, it’s time to start looking for ways to take action. It’s not particularly useful to identify things which could be joyful if they are little more than simply a dream never to be realized. Aim to start paving a path people can walk to approach the things which help them feel greater joy.

Once again, our goal is to seek positive actions, which means we want to limit the kind of negativity which might bubble up. The negativity is likely rooted in the toxins we purged at the beginning, so help people to reframe into something positive.

In the session at Agile Open, someone mentioned they believed developers are lazy. Instead of dwelling on it or letting it become a toxin in the middle of our joy seeking, I tried to encourage them to understand the developers and their position, while also pointing out these same developers might not understand that person’s position either. From here, we can see joy might take the form of more open, honest communication. Once we understand the tension, we can seek a problem and pose solutions.

Take Joy Back to the Team

To close the session, I encouraged people to think about ways they could take the message of joy, and what we discovered during our exploration, back to their respective teams. The goal here was not to be prescriptive. Instead, each team needs to find their own joy and their own way to walk the path.

Within your own team there are several directions you can go from here; any of the following would be reasonable to try out:

  • Dot vote on a point of joy to focus on and develop a path toward joy
  • Pick an action at random, try it for a week and reflect on whether things were better or worse
  • Leave each person with the idea that they must travel their own path to find joy and simply leave a single rule: one person’s path cannot stomp on another’s
  • Devise your own brilliant plan on how to start moving toward joyful coding given what you’ve found

The most important part of this last bit is to ensure people feel empowered to drive toward joy on their own and together. It is up to the team to support each other and elevate the environment they are in.

Closing, Joy and My Approach

I will say, there is a fair amount of time I spend seeking joy on my own. I typically mix a bunch of different ingredients including music, time for recharge, time to vent, and poking the system until it moves in a way that feels better to me.

It’s really hard to say what might bring joy in a team or even for an individual, but I find joy often comes to me as clear communication — both in code and person to person, as a sense of directed purpose, as automation, and as opportunity to share, with others, the things which make my life a little better.

Perhaps you’ll find your team values some of these things as well. In all likelihood, your team will value things I wouldn’t have thought to include in this post; this too is completely acceptable. Unfortunately, seeking joy is not a clear path with a process to force it to emerge. Joy is a lot like love: you just know it when you feel it.


A Case for Quickspec

Every software community has a contingent which agrees that tests are a good thing and that testing first leads us to stable, predictable software. Even with this in mind, the biggest complaint I’ve heard from people is “testing takes too long!”

This blog post covers the testing library Quickspec, which can be installed from NPM, so you can follow along!

All tests in this post will be written to test the following code:
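
The code is a pair of small, pure functions and a composition over them. The exact signature is a reconstruction, but multiplyThenDivide is the function all of the tests will exercise:

    // multiplyThenDivide.js -- a small, pure composition (signature reconstructed)
    'use strict';

    function multiply(a, b) {
        return a * b;
    }

    function divide(a, b) {
        return a / b;
    }

    function multiplyThenDivide(value, multiplier, divisor) {
        return divide(multiply(value, multiplier), divisor);
    }

    module.exports = {
        multiply: multiply,
        divide: divide,
        multiplyThenDivide: multiplyThenDivide
    };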

Testing a Composition With Mocha and Chai

Coming from a background of testing first, I am used to writing tests quickly, but even I have to concede, there are times I just don’t want to write all of the noise that comes with multiple use cases. This is especially true when I am writing tests around a pure function and I just want to verify multiple different edge cases in a computation.

If we were going to test several cases for the composition multiplyThenDivide, the output would look a lot like the following:
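
Each case gets its own it block, and the call/assert pair is repeated every single time:

    const { assert } = require('chai');
    const { multiplyThenDivide } = require('./multiplyThenDivide');

    describe('multiplyThenDivide', function () {
        it('multiplies then divides positive integers', function () {
            assert.equal(multiplyThenDivide(6, 2, 3), 4);
        });

        it('handles a divisor of 1', function () {
            assert.equal(multiplyThenDivide(6, 2, 1), 12);
        });

        it('handles negative values', function () {
            assert.equal(multiplyThenDivide(-6, 2, 3), -4);
        });

        it('handles a zero value', function () {
            assert.equal(multiplyThenDivide(0, 2, 3), 0);
        });

        it('handles fractional results', function () {
            assert.equal(multiplyThenDivide(1, 1, 2), 0.5);
        });

        // ...and on, and on, for every remaining interesting case
    });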

Testing a Composition with Mocha and Quickspec

Although testing a simple composition like this is not particularly hard, we can see representing all interesting cases actually requires a whole bunch of individual tests. Arguably, there are enough cases that we’d get bored before the testing is actually done (and we did!).

What if we could retool this test to be self contained and eliminate the testing waste?

This question is precisely what Quickspec aims to answer. Let’s have a look at what a simple Quickspec test might look like:
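
The method names sketched below are from memory and may not match the library exactly; treat the Quickspec README as the source of truth. The important part is the single table of cases and the single execution path:

    // a sketch -- consult the Quickspec README for the exact API
    const { assert } = require('chai');
    const quickspec = require('quickspec');
    const { multiplyThenDivide } = require('./multiplyThenDivide');

    const specSet = [
        { name: 'multiplies then divides positive integers', value: 6, multiplier: 2, divisor: 3, expected: 4 },
        { name: 'handles a divisor of 1', value: 6, multiplier: 2, divisor: 1, expected: 12 },
        { name: 'handles negative values', value: -6, multiplier: 2, divisor: 3, expected: -4 },
        { name: 'handles a zero value', value: 0, multiplier: 2, divisor: 3, expected: 0 },
        { name: 'handles fractional results', value: 1, multiplier: 1, divisor: 2, expected: 0.5 }
    ];

    describe('multiplyThenDivide', function () {
        quickspec
            .over(specSet)
            .execute(spec => multiplyThenDivide(spec.value, spec.multiplier, spec.divisor))
            .verify((result, spec) => assert.equal(result, spec.expected));
    });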

Why Is This Better?

Separation of Setup and Execution

The first benefit we get from using Quickspec is we can see, up front, all of the cases we are going to test. This makes it much easier to see whether we have covered the cases we care about, and it means the specification is completely separated from the execution of the code under test.

Deduplication of Test Execution

The second benefit we get is, we eliminate duplicate code. In this example, the duplication is simply a call to multiplyThenDivide and a call to verify, but this could have represented a full setup of modules or classes, dependency injection and so on. It’s common to have this kind of setup in a beforeEach function, but this introduces the possibility of shared state, which can make tests flaky or unstable.

Instead, we actually perform our setup and tie it directly into our test. This means we have a clear path of test execution so we can see how our specifications link to our test code. Moreover, we only have to write any test boilerplate once, which means we reduce the amount of copy-paste code which gets inserted into our test file.

Declarative Test Writing

Finally, if we discover a test is missing, all it takes is adding a new case definition and we’re done. There is no extra test code which needs to be written and no extra boilerplate to introduce. Each test is self contained and all specifications are clearly defined, which means our tests are more declarative and their purpose is clear.

Other Testing Capabilities

Async Testing

Quickspec is written around the idea that code is pure, thus deterministic, but it is also built to be usable in asynchronous contexts, which is a reality in Javascript. This means things like native Javascript promises and other async libraries won’t make it impossible to test what would, otherwise, be deterministic code.

Testing with Theorems

Writing theorem tests with Quickspec could be its own blog post, so I won’t cover the entirety here, though this is an important point.

Instead of hand-computing each expected value, Quickspec allows you to write tests where the outcome is computed just in time. This applies especially well where outcomes are easily computable, but the entire process to handle special cases could lead to extra winding code, or code which might actually use an external process to collect values in the interim.

What this all means

In the end, traditional unit tests are great for behaviors which introduce side effects like object mutation, state modification or UI updates. When the code under test is deterministic, however, Quickspec streamlines the process of identifying the appropriate cases, testing them and verifying the outcomes in a single, well-defined test.

Install Quickspec from NPM and try it out!


Why Should I Write Tests?

There has been an ongoing discussion for quite some time about whether automated tests are or are not a good idea. This is especially common when talking about testing web applications, where the argument is often made that it is faster to simply hack in a solution and immediately refresh the browser.

Rather than arguing that tests are worthwhile and save time in the long run, I thought it would be useful to look at what it means to start with no tests and build up from there.

Without further ado, let’s take a look at a small, simple application which computes a couple of statistical values from a set of sample numbers. Following are the source files for our stats app.
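
The markup is just an input field, a button and a couple of output elements; the element ids below are stand-ins for whatever the real page used. The interesting part is the script:

    // stats.js -- mean and (population) standard deviation over comma-separated samples
    function parseSamples(text) {
        return text.split(',').map(parseFloat);
    }

    function mean(values) {
        return values.reduce(function (sum, value) { return sum + value; }, 0) / values.length;
    }

    function standardDeviation(values) {
        var sampleMean = mean(values);
        var squaredDifferences = values.map(function (value) {
            return Math.pow(value - sampleMean, 2);
        });

        return Math.sqrt(mean(squaredDifferences));
    }

    // wired to the button's click handler; element ids are hypothetical
    function showStats() {
        var values = parseSamples(document.getElementById('samples').value);

        document.getElementById('mean').innerText = mean(values);
        document.getElementById('standard-deviation').innerText = standardDeviation(values);
    }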



Now, this application is simple enough we could easily write it and refresh the browser to test it. We can just add a few numbers to the input field and then get results back. The hard part, from there, is to perform the computation by hand and then verify the values we get back.

Example of manual test output



Doing several of these tests should be sufficient to prove we are seeing the results we expect. This gives us a reasonable first pass at whether our application is working as it should or not. Beyond testing successful cases, we can try things like putting in numbers and letters, leaving the field blank or adding unicode or other strange input. In each of these cases we will get NaN as a result. Let’s simply accept the NaN result as an expected value and codify that as part of our test suite. My test script contains the following values:

    Input                Mean    Standard Deviation
    Success cases:
    1, 2, 1, 2           1.5     0.5
    1, 2, 3, 1, 2, 3     2       0.816496580927726
    Failure cases:
    (blank)              NaN     NaN
    a, b                 NaN     NaN

Obviously, this is not a complete test of the system, but I think it’s enough to show the kinds of tests we might throw at our application. Now that we have explored our application, we have a simple test script we can follow each time we modify our code.

This is, of course, a setup. Each time we modify our code, we will have to copy and paste each input and manually check the output to ensure all previous behavior is preserved, all while adding to our script. This is the way manual QA is typically done, and it takes a long time.

Because we are programmers, we can do better. Let’s, instead, automate some of this and see if we can speed up the testing process a little. Below is the source code for a simple test behavior for our single-screen application.
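
This builds directly on the hypothetical element ids from the stats source above:

    // a single-run test we can call from the browser console
    function testStats(input, expectedMean, expectedStdDev) {
        document.getElementById('samples').value = input;
        showStats();

        var meanOutput = document.getElementById('mean').innerText;
        var stdDevOutput = document.getElementById('standard-deviation').innerText;

        var passed = meanOutput === String(expectedMean)
            && stdDevOutput === String(expectedStdDev);

        console.log((passed ? 'PASSED' : 'FAILED') + ': ' + JSON.stringify(input));
    }

    // usage from the console: testStats('1, 2, 1, 2', 1.5, 0.5);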

With our new test runner, we can simply call the function and pass in our test values to verify the behavior of the application. This small step helps us to speed the process of checking the behaviors we already have tests for and we only need to explore the app for any new functionality we have added.

Once the app is loaded into our browser, we can open the console and start running test cases against the UI with a small amount of effort. The output would look something like the image below.

Single-run tests scripted against the UI



We can see this adds a certain amount of value to our development effort by automating away the “copy, paste, click, check” tests we would be doing again and again to ensure our app continued to work the way we wanted it. Of course, there is still a fair amount of manual work we are doing to type or paste our values into the console. Fortunately we have a testing API ready to use for more scripting. Let’s extend our API a little bit and add some test cases.
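
The test value table from earlier translates directly into data, and a small runner walks it:

    // the test value table, captured as data
    var testCases = [
        ['1, 2, 1, 2', 1.5, 0.5],
        ['1, 2, 3, 1, 2, 3', 2, 0.816496580927726],
        ['', NaN, NaN],
        ['a, b', NaN, NaN]
    ];

    // run every captured case in one shot
    function runAllTests() {
        testCases.forEach(function (testCase) {
            testStats.apply(null, testCase);
        });
    }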

Now, this is the kind of automated power we can get from being programmers. We have combined our test value table and our UI tests into a single script which will allow us to simply run the test suite over and over. We are still close enough to the original edit/refresh cycle that our development feels fast, but we also have test suites we can run without having to constantly refer back to our test value table.

As we write new code, we can guarantee in milliseconds whether our app is still working as designed or not. Moreover, we are able to perform exploratory tests on the app and add the new-found results to our test suite, ensuring our app is quick to test from one run to the next. Let’s have a look at running the test suite from the console.

Test suite run from the browser console



Being able to rerun our tests from the console helps to speed the manual write/refresh/check loop significantly. Of course, this is also no longer just a manual process. We have started relying on automated tests to speed our job.

This is exactly where I expected we would end up. Although this is a far cry from using a full framework to test our code, we can see how this walks us much closer to the act of writing automated tests to remove manual hurdles from our development process.

With this simple framework, it would even be possible to anticipate the results we would want and code them into the tests before writing our code so we can simply modify the code, refresh and run our tests to see if the new results appear as we expected. This kind of behavior would allow us to prove we are inserting required functionality into our programs as we work.

Though I don’t expect this single post to convince anyone they should completely change the way they develop, hopefully this gives anyone who reads it something to think about when they start working on their next project, or when they go back to revisit existing code. In the next post, we’ll take a look at separating the UI and business logic so we can test the logic in greater depth. Until then, go build software that makes the world awesome!


The Shodan Programmer (Reprise)

Quite some time ago, Michael O. Church wrote a blog post about the Shodan Programmer (beginning degree programmer, or journey beginning programmer). In this post he detailed a gradient system for identifying programmers with regard to skill, experience and knowledge. Two or three weeks ago, I discovered his post was either removed or his site had been deleted. I’m not entirely sure what led to his Shodan post going MIA, but it was an important idea in the greater discussion of progressing skill and ability.

The last remaining reference to his Dan idea is on a Quora post about the top 1% of programmers. In this post, the person posing the question was trying to find out how some of the greatest developers think and work in order to identify common habits of skilled and high-performing programmers.

With all of this in mind, I decided it is important to provide a new voice to the discussion about the Shodan Programmer. I believe that Church’s metaphor for describing programmer skill and ability holds up quite well against scrutiny. Ultimately, the idea is, programmers come in three broad-scope forms and the line is blurred when digging in and identifying when someone has moved from one part of the scale to another.

Where I currently work, we have three conceptual roles in which a developer falls: learner, contributor and mentor. Although this is, effectively, more closely related to the concept of apprentice, journeyman and master, it does line up, in part, with the greater idea Church presents in his Shodan Programmer post.

The Dan scale goes from 0.0 to 3.0 and, it could be said, this scale is logarithmic. The closer you get to the highest point, the harder it is to actually move the needle and the closer to 0 you are, the more likely you are to see large amounts of measurable improvement. This means it becomes increasingly more difficult to identify the gradation between two programmers with high proficiency.

Let’s have a glance at the three levels of the Dan.

Level 1 programmers, or Adders, are programmers who typically perform the bulk of the programming work in a software shop, solving common problems and interacting with systems. These programmers, given enough time, could eventually solve most of the problems the business puts in front of them.

Level 2 programmers, or Multipliers, are programmers who are capable not only of solving a problem, but are also versed in higher-level ideas like architecture or library development and maintenance. Multipliers provide means to solve more general problems in order to reduce the amount of time spent solving the same problem over and over.

Level 3 programmers, or Global Multipliers, are programmers who tend to think beyond the scope of the immediate problem and look to solve large-scale problems. Although level 3 programmers can perform the tasks of level 1 and 2 programmers, they tend to take research positions and work on conceptual or innovative new technologies which proliferate through the entire community over time.

Level 1 Programmers

Level 1 programmers are what people would typically think of as entry- and mid-level programmers. These people are more focused on expanding their knowledge either through a learning program or day to day development projects which push their boundaries. Even programmers who have barely written their first Hello World program fall into Level 1, though they are usually not ready for production development for some time yet.

Church writes that a level 1 programmer is typically found on the scale between 0.0 and 1.4. If we were to consider the least experienced developer being hired as a junior or associate developer, we would likely be looking at developers who are at ~0.8 on the scale. Any programmer falling above 1.4 is starting to move into the realm of a Level 2 programmer.

We can see, of course, that this scale leaves a rather wide division between Level 1 and Level 2, since Level 2 actually begins well before a programmer reaches the 2.0 mark. Someone working at the 1.2 level is probably starting to move into a mid-level programmer position and is able to start looking at larger, more complex problems, though they are unlikely to be prepared to do architectural analysis.

Church states that many programmers don’t progress beyond 1.2 because they stop actively investing in their own knowledge and education. It is common for someone to become comfortable with the technology stack and problem space they know and settle in.

Level 2

As you might imagine, a programmer who is at 1.5 is lumped in with Level 2 programmers. This does not mean that everyone who has reached 1.5 is prepared to take on every task a 2.0 programmer could accomplish, but they are starting to learn methods and techniques for tackling more challenging problems.

Level 2 programmers fall between 1.5 and 2.4 on the scale and perform a variety of tasks which might be considered too difficult, or too obscure, for a Level 1 programmer. This does not imply any sort of superiority, however. More often, reaching Level 2 is a product of independent learning and industry experience. Level 2 programmers typically tap into knowledge which comes exclusively through time.

Church states that Level 2+ programmers can be identified through a variety of outlets such as library development or maintenance, speaking at conferences and mentorship. Due to the nature of their output, Level 2 programmers tend to multiply the efforts of a team rather than simply adding to it. This is where we can really see the difference between the Level 1 and Level 2 programmers.

Simply stated, bringing Level 1 programmers onto a team benefits the team in an additive way. It is not uncommon to have a development team made up exclusively of Level 1 programmers who produce working software. Typically, by introducing Level 2 programmers, the code quality will increase and the overall team will learn new skills and become more effective. This kind of improvement is greater than an additive effect, giving Level 2 programmers the Multiplier name.

The important aspect of a Multiplier is they provide large scale benefit to a company and are often as influential to an organization as a manager or executive. High-performing Multipliers not only raise the skill and ability of the people around them, but they steer projects clear of known pitfalls or dangers and empower the programmers they work with to take the software they touch to greater vistas.

Level 3

Church calls Level 3 programmers Global Multipliers, but it might be easier to envision them as theorists and researchers. These programmers are at 2.5 and above in the Dan gradient and it is highly unlikely anyone will actually reach 3.0, given the logarithmic nature of the scale.

Typically, Level 3 programmers work on bleeding edge projects and provide the technical and theoretical vision for the result. Level 3 developers are often found working in the R&D departments of large companies like Google, Microsoft or Apple or as high-paid, uniquely skilled contractors and trainers.

Level 3 programmers often have a deep understanding of more than just software development. They typically understand the deep underpinnings of computer science and mathematics. An academic equivalent would be a post-doctorate computer scientist leading research on a funded long-term project.

As with the gap between Level 1 and Level 2 programmers, a Level 3 programmer grows to their position through constant research, development and experience. Both Level 2 and Level 3 programmers have learned through a long process of failure and success. There is no simple path to reaching Level 3 and many programmers may never reach Level 3 simply because they lack the desire to put forth the tremendous amount of effort needed to increase their place on the scale by one or two tenths of a point.

Level Analysis

The way Church originally designed this scale, the intent was to identify the appropriate skill level of people working in the industry and to understand how people might attack a particular problem. For anyone at a whole number increment, e.g. 1.0 or 2.0, it is likely they would be able to solve 95% of the problems which could be identified at that skill level.

In other words, someone who is identified at a level of 2.0 would be able to successfully work on and solve 95% of Level 2 type problems. In much the same way, they are likely only going to be able to solve about 5% of Level 3 problems. This works the same way for someone at a level of 1.0, who would be able to solve 95% of Level 1 problems, but only 5% of Level 2 problems.

This means that for each point closer to the next level, a programmer would be able to solve a greater number of problems which would be distinct for that level. This kind of ability growth leads directly to the logarithmic nature of the graph.

Where this level designation comes in handy in day to day work is understanding the level of difficulty for a specific problem. If something is a straightforward coding problem which can be solved through a brute force approach, the problem is likely a level ~1.0 problem. On the other hand, if the problem is largely architectural or generic in nature it is probably more of a level ~2.0 problem.

Understanding where a developer currently falls on the scale provides insight into the likelihood they will succeed at solving a problem on their own. In other words, if a developer is performing at level ~1.2, they will probably find a problem at level ~2.1 frustrating or inscrutable. On the other hand, they will also, likely, find a level ~0.3 problem trivial or boring.

This gives us a heuristic for maintaining developer engagement. Church claims that developers seeking a challenge perform best at about 65-75% above their skill level. This means we could expect a developer to exhibit moderate growth with a healthy mixture of problems: problems they consider “simple” for confidence, or 50% below their level; problems at +/-10% of their level; and problems which are up to 50% above their level.

What it Means

Ultimately, this scale is not an indictment of anyone’s skill or ability regardless of where they are in their career. Regardless of whether you are just starting out, or you are a multi-decade veteran of the software industry, your place in the Dan is uniquely your own. Some people enjoy the act of writing code and will happily continue forward on that path, while others have ambitions of experience beyond where they are today. The choice is personal and it’s not one I would be comfortable guiding in any way.

Ideally, Church’s Dan should provide insight into the road a programmer could walk to achieve their desires and set a course for their career. Level 1.0 or 1.1 could realistically be reached in a couple of years and, perhaps, a year or so for a particularly dedicated developer. Church states that moving up the scale at 0.1 points per year after the first level is likely an aggressive schedule for many programmers. This means reaching level 2.0 will likely take a dedicated programmer 10 or 11 years on average, but might be achieved in about 8 for someone who is especially active in their research and experience.

I would suggest that the closer a programmer gets to 2.0, the slower the progress is going to feel simply because it takes more effort to cover the ground necessary for each tenth of a point. This is especially true since computer science is a field which is continually expanding and knowledge of topics continues to deepen year over year.

After reaching level 2.0, it could, realistically, take a lifetime to reach 2.5 or above. This is completely dependent upon the programmer and what their desires are. This does not necessarily mean reaching for the pinnacle is not a worthy goal outright. Instead what this means is each person will have to decide on their own whether they want to continue to climb, and what it will mean for their career.

In the end, this scale is simply a means for every person to understand where they currently are in their career, and a way to assess problems which crop up while working on a particular project. Revisiting this idea and the scale that comes with it has helped me both to pick appropriate-level problems for the people I work with and to understand the problems I have yet to face myself.

I hope this has helped others to get a taste of the scale Church presented and, perhaps, to draw a clearer line from where they are now to where they want to be. Everyone walks the path; the question is where does the journey ultimately lead?


Unit Testing Express Routing

If you are unit testing your code, and you should be, then it is likely you have encountered certain patterns which make testing more difficult. One of the patterns which pops up often is centered around Express.js routes. Although the router has a nice, simple API to code against, the actual testing of the route action code can be a little challenging if you aren’t used to using tools such as mocks, fakes and stubs.

Today we are going to explore using the Mockery library and an express router fake module to simplify the process of reaching into our modules and getting ahold of the route actions we provide to express for the sake of testing and ensuring program correctness.

The Libraries

The post today will make use of Mocha, Mockery and Express Route Fake. The first two you may have heard of, the last you likely have not. Let’s walk the list and get a handle on what each of these tools does for us at a high level.

Mocha

If you have done any TDD or unit testing in Javascript and especially in Node, you have likely already encountered Mocha. Mocha is a popular BDD tool for testing Javascript. We will have examples of what test code in Mocha looks like later.

Mockery

Mockery is a tool for manipulating packages and modules which are required within script files for Node. Although Mockery is quite useful for breaking tight coupling, it is not as well known as Mocha. Unit testing scripts which are tightly coupled through require statements is challenging since there is no clean way to inject a dependency into a dependency chain without a third party tool. Mockery is key to getting good tests around Express route actions as we will see.

Express Route Fake

Express Route Fake is a module I wrote to emulate behavior we use at Hunter to gain access to our route actions as we get tests written around our code. The core idea of Express Route Fake is to substitute the fake module in for Express in order to capture references to the actions which are assigned to different routes. We will explore this shortly.

Registering Replacements with Mockery

I am assuming you are already familiar with a testing framework, so I am not going to cover the basics of using Mocha. Instead we are going to start off looking at how to register a faked module with Mockery so we can break a dependency in Node.

Mockery actually manipulates the Node module cache and updates it with code of our choosing. This gives us the ability, at test time, to dig in and create a fake chunk of code which we control so we can ensure our tests actually send and receive the right output without doing dangerous things like interacting with a network connection or a database. We want the scope of our unit tests to be as tight as is reasonable so we can ensure the code under test actually performs the correct action every time.

This kind of guarantee around tests is called determinism. Deterministic tests are tests which, when provided a single input, always return the same output. Ideally any interactions which would break the determinism of our functionality should be stripped away and replaced with code which always does the same thing. This gives us guarantees and makes it easier to identify and test a variety of different cases. Let’s have a look at an example of breaking a dependency with Mockery and replacing the code with something else.
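
The module under test here is hypothetical (some data service which requires mysql), but the Mockery calls are the library’s real API:

    var mockery = require('mockery');

    describe('userDataService', function () {
        var userDataService;

        beforeEach(function () {
            // start Mockery and silence its default warnings
            mockery.enable({
                warnOnReplace: false,
                warnOnUnregistered: false,
                useCleanCache: true
            });

            // a fake mysql module whose query method calls back immediately
            mockery.registerMock('mysql', {
                createConnection: function () {
                    return {
                        query: function (queryString, callback) {
                            callback(null, [{ id: 1, name: 'Test User' }]);
                        }
                    };
                }
            });

            // require the module under test only after the fake is registered
            userDataService = require('../app/userDataService');
        });

        afterEach(function () {
            // return the module cache to its original state
            mockery.deregisterAll();
            mockery.disable();
        });

        // tests exercising userDataService go here...
    });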

The beforeEach block sets up a fake MySQL module with a query method which immediately calls a passed callback. Mocking MySQL does two things for us. First it removes the asynchronous behavior which comes from interacting with a database, ensuring all of our tests run inside the event loop. The second thing this does for us is it allows us to provide a guarantee that the results passed back from our MySQL fake are always the same. This means we don’t have to set up and tear down an actual database instance. We can simply test the code we care about and assume the database library does what it is supposed to.

Important items to note about the code in the beforeEach and afterEach blocks: Mockery takes a certain amount of work to get working. The enable call in beforeEach starts Mockery working in memory and tells it not to post warning messages every time it does something. It is safe to assume, if you see the results you want, that Mockery is working. The code in afterEach returns the cache back to its original state, leaving everything clean for the following test.

Faking Router For Great Good

Now that we have looked a little bit at how Mockery works, we are ready to start digging into what we really care about. Let’s start testing our Express route actions. The first thing we should look at is a little bit of example Express routing code. Below is a simple route example which just receives a request and then responds with 200 and a little message. It’s not exciting code, but we can actually test it.
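
Here’s a representative sample; the path and message are stand-ins:

    // statusRoute.js -- a small, testable route module
    'use strict';

    var express = require('express');
    var router = express.Router();

    router.get('/status', function (req, res) {
        res.status(200).send('Everything is up and running!');
    });

    module.exports = router;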

We can see a few things here which will be really important to get the tests right. First, Router is a factory function. This means anything that is going to stand in for express will have to implement this correctly. The next thing we know is, the router object which is returned has methods on it like ‘get.’ We will want our fake object to replicate this as well. Now is probably a good time to take a look at the source code for Express Route Fake.
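
The snippet below is a condensed sketch of the module’s core rather than the verbatim source, but it captures each of the pieces discussed next:

    // a condensed sketch of Express Route Fake's core
    var routeCache = {};

    // every route registration funnels through here
    function captureRoute(method, path, action) {
        routeCache[method + ':' + path] = action;
    }

    // the fake Router factory, mirroring express.Router()
    function Router() {
        return {
            get: function (path, action) { captureRoute('get', path, action); },
            post: function (path, action) { captureRoute('post', path, action); },
            put: function (path, action) { captureRoute('put', path, action); },
            delete: function (path, action) { captureRoute('delete', path, action); }
        };
    }

    // verify the route exists, then hand the action back for testing
    function getRouteAction(method, path) {
        var routeAction = routeCache[method + ':' + path];

        if (typeof routeAction !== 'function') {
            throw new Error('No route action registered for ' + method + ' ' + path);
        }

        return routeAction;
    }

    // empty the cache so each test run starts clean
    function reset() {
        routeCache = {};
    }

    module.exports = {
        Router: Router,
        getRouteAction: getRouteAction,
        reset: reset
    };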

Express Route Fake is a small module which packs a pretty massive punch. Everything is set up for us to quickly capture and test our route actions. The very first thing we have is a cache object which will store key/value pairs so we can request whichever function we want to test easily.

After the cache object we have a simple function which captures the route method, the route path and the route action. This function is the real workhorse of the entire module. Every route call will go through this one function and be stored in our cache. It is critical to understand that all of the methods, paths and actions will be captured and related so we can test them later.

Finally we have the Router factory fake, getRouteAction and reset functions. Reset is exactly what one might expect: it empties the cache so the entire process can be repeated without starting from a dirty object. getRouteAction performs two critical activities. First, it verifies the route exists and throws an error if it doesn’t. Second, it passes back the route action so we can test the function outside of the Express framework.

A side note on the getRouteAction error behavior: it is important and useful to get clear errors from our fake in this case. Over time my team has run into confusing situations because our home-grown implementation did not throw useful errors. We would get an error stating “undefined is not a function,” which does not really tell us what is wrong. By getting an error which tells you the route doesn’t exist, you can immediately verify whether the route is being created correctly without needing to troubleshoot your tests.

Putting the Setup Together

Now that we have the tools and have taken a look at them, let’s start constructing what our tests would look like. We know Mockery is going to substitute in our fake module and we also know that Express Route Fake will capture the actions. All we really need to do is a little setup to get things rolling.
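
Here’s the setup, using the fakes sketched above; paths and names are illustrative:

    var mockery = require('mockery');
    var assert = require('assert');
    var expressRouteFake = require('express-route-fake');

    describe('statusRoute', function () {
        var req;
        var res;

        beforeEach(function () {
            mockery.enable({
                warnOnReplace: false,
                warnOnUnregistered: false,
                useCleanCache: true
            });

            // substitute the fake for express so route actions get captured
            mockery.registerMock('express', expressRouteFake);
            expressRouteFake.reset();

            // bare-minimum request and response fakes
            req = {};
            res = {
                status: function (code) { res.statusCode = code; return res; },
                send: function (body) { res.sentBody = body; return res; },
                end: function () { res.ended = true; }
            };

            // require the module under test AFTER Mockery is set up
            require('../routes/statusRoute');
        });

        afterEach(function () {
            mockery.deregisterAll();
            mockery.disable();
        });

        // the tests themselves appear below
    });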

Our test file needs a bit of extra scaffolding. Since Node and Express interact with the http module through request and response objects (typically called req and res respectively), we need to create objects we can pass through and use as well. Considering the calls we are making on res, I just included the bare minimum functionality to actually test our route action: status, send and end.

Please note we are actually requiring the module under test AFTER we perform our Mockery setup. It’s really important to do this, otherwise Node will provide the actual Express module and our fake code won’t be used.

Now that we have our code set up and ready to go, let’s take a look at what our tests look like.
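
Each of these it-blocks slots into the describe block from the setup above:

    it('responds with a 200', function () {
        var routeAction = expressRouteFake.getRouteAction('get', '/status');
        routeAction(req, res);
        assert.equal(res.statusCode, 200);
    });

    it('responds with a status message', function () {
        var routeAction = expressRouteFake.getRouteAction('get', '/status');
        routeAction(req, res);
        assert.equal(res.sentBody, 'Everything is up and running!');
    });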

We can see that actually testing the actions, now, has become three simple lines of Javascript. What is happening in each of these tests is, our module is registering actions against routes, which are stored by our Express Route Fake module. Once the test starts, we simply ask for the function and execute it as if it were called by Express because of an HTTP request. We pass in our fake objects and capture the result of our action behavior. This allows us to see directly inside of our method and test the interesting parts, throwing away the stuff that would be, otherwise, obscured by frameworks, libraries and Node itself.

A second, important item to note is that we get extra guarantees around our route paths. If someone were to come along and change the path in our module, but not update the tests, or vice versa, we would get immediate feedback since getRouteAction would throw an error telling us the path does not exist. That’s a whole lot of security for a little bit of up-front work!

Summing Up

Today, we looked at how to use just a couple of modules to insert a fake for Express and get better tests around our code. By using Mockery and Express Route Fake, you can capture route actions and get them under test. Moreover, you can do this while only writing code that is specific to the tests you are writing.

As you write more tests, it might become useful to create a factory for creating custom request and response objects which would simplify the test code even more. Of course, with all of this abstraction it does become more challenging to see what is happening under the covers. Ultimately, this kind of tradeoff can be useful in situations like this where repeated code is more of a liability than a help.

The next time you sit down to create new functionality and wire it into an Express route, consider starting off with Mockery and Express Route Fake. They will simplify the tests you need to write and limit the amount of code you have to change in order to get tests in place. Happy coding!


Pattern Matching in Javascript

For more than a year I have been considering the idea of pattern matching in Javascript. I know I am not the only one trying to solve this problem because there are a handful of resources where people have put together propositions for solutions, including a Sweet.js macro called Sparkler, and Bram Stein’s blog post and associated Github repo.

Although I really, really like the idea of a macro to handle pattern matching, I fear people will throw it out immediately since pattern matching by itself is already a sadly obscure topic. This means anything that requires a macro package will probably turn the general populace off, especially since I haven’t met anyone in my area who has heard of Sweet.js except me.

Although I like Bram’s approach of handling matching with a function library, it looks like he didn’t get a chance to make much headway. That’s really unfortunate, since I think he was headed in a good, if somewhat simple, direction.

Before we go any further, it is important to discuss the idea of pattern matching for the uninitiated. Pattern matching is a functional means to quickly look at the shape and signature of data and make a programmatic decision based on what is there. In other words, pattern matching is like conditionals which have spent the last 10 years at the gym.

Imagine, if you will, conditional statements which looked like this:
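
Something in this spirit, where the condition is the shape of the data rather than a boolean expression (matches is pure fantasy here, standing in for syntax Javascript doesn’t have):

    // imaginary, illustrative code -- matches() stands in for syntax Javascript lacks
    function describePoint(point) {
        if (matches(point, [0, 0])) {
            return 'origin';
        } else if (matches(point, ['<number>', 0])) {
            return 'on the x axis';
        } else if (matches(point, [0, '<number>'])) {
            return 'on the y axis';
        } else {
            return 'somewhere else in the plane';
        }
    }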

Even this isn’t really powerful enough to truly capture what pattern matching does, but I think it gives a little insight. Nonetheless, if we had a construct which would give us the ability to match on an entire set of conditions within our data structures, the face of Javascript programming would be a very different place.

Pattern matching is not just about reducing keystrokes, but it actually redefines what it means to actually describe and interact with your data altogether. It would do for data structures what regular expressions have done for string manipulation.

Pattern matching is the dream.

So, after doing a lot of thinking, I think I have settled on a means to give this dream the breath of life. Unfortunately, I believe it is unlikely that the path to data Nirvana will be easy. I have a suspicion this is the same issue Bram encountered. Looking at the ~1400 lines of code that make up the Sparkler macro, pattern matching could be a tricky problem.

Function and Contract

I looked at the Sweet.js macro, Bram Stein’s early exploration and the match behavior in both Scala and Racket. I decided I needed something which felt like fluent Javascript instead of succumbing to the Racket nut which lives inside me, so I opted to avoid the hardcore Lisp syntax. Scala had a closer feel to what I wanted so I kept that in mind. Bram’s example felt close, but a little too muddy and the Sweet.js macro just felt a little too much like operators instead of functions. What I landed on was this, () => () => any; that is to say a function ($match) which returns a function (pattern assembly) which returns the final result of the pattern matching. Here’s an example of a simple exploration, drawing against Bram’s factorial implementation.
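
Reconstructing that exploration, the factorial looked something like this; onCase and onElse are stand-ins for whatever the pattern-assembly arguments end up being named:

    // factorial via $match -- a sketch; onCase/onElse names are illustrative
    function factorial(n) {
        return $match(n)(function (onCase, onElse) {
            onCase(0, function () { return 1; });
            onElse(function () { return n * factorial(n - 1); });
        });
    }

    // factorial(5); // 120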

It’s easy to see the first call is just $match(n). This returns another function which takes a function as an argument; i.e. $match is a higher-order function which returns a higher-order function, which takes a function as an argument, which then does stuff and returns a result. Another way of looking at this is $match is a function which chains to a function designed to perform pattern assembly. Once the pattern is assembled, everything is executed and we get a result.

Clear as mud?

Expanding the Concept

This small example seems pretty simple: check for equality and, if nothing matches, use the else clause. It feels a little like a condition block, but I think people will understand the else clause better than anything else I might have put in there.

Anyway, digging in a little further, I wanted to also be able to quickly and easily match arrays, objects or other things as well. This simple equality checking was not enough, so I started expanding, moving into some sort of factory behavior to create matchers on the fly. This brought me to an example which was a little more interesting and a lot more complex.
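
Here’s the vector example, with matcher factories creating the individual value matchers; the factory names ($array, $number, $literal) are illustrative of the idea:

    function describeVector(vector) {
        return $match(vector)(function (onCase, onElse) {
            onCase($array([$number(), $literal(0)]), function () { return 'x axis'; });
            onCase($array([$literal(0), $number()]), function () { return 'y axis'; });
            onCase($array([$number(), $number()]), function () { return 'x-y plane'; });
            onElse(function () { return new TypeError('Invalid vector: ' + JSON.stringify(vector)); });
        });
    }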

Here I am returning a string based on the pattern of a pair (2-valued array) array being treated like a vector. If the first three patterns match, we get a string describing the axes the vector lives on. If the vector is not a pair, or it contains something other than numbers, I want to return a type error with a message explaining the provided vector was invalid.

This bit of logic is significantly more complex than our previous factorial example and leaves us in a place where the code is descriptive, but perhaps not as readable as we would like. Moreover, I thought about what I really want to be able to say, and it made more sense to, perhaps, create something of a pattern matching DSL (domain specific language).

Matching on a DSL

If I was going to invent any kind of simple language for expressing matching behaviors, I wanted it to be less cryptic than regular expressions, but I also wanted it to be terse enough that it didn’t become some giant, sprawling string someone would have to mentally parse to really understand what was happening. What I landed on was a simple near-Javascript expression of what the values should look like when they properly match our criteria. This turns our earlier expression into the following.
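
Here’s that same matcher rewritten against the DSL; the pattern strings are the point, and everything else stays the same:

    function describeVector(vector) {
        return $match(vector)(function (onCase, onElse) {
            onCase($pattern('[<n>, 0]'), function () { return 'x axis'; });
            onCase($pattern('[0, <n>]'), function () { return 'y axis'; });
            onCase($pattern('[<n>, <n>]'), function () { return 'x-y plane'; });
            onElse(function () { return new TypeError('Invalid vector: ' + JSON.stringify(vector)); });
        });
    }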

I reduced the type descriptor for brevity, opting for an angle-bracketed character. Now we can wrap up our expression in a single pattern call and get something the matcher can quickly and easily execute to verify whether the vector matches our requirements or not. This, however, also means I am responsible for generating an AST (abstract syntax tree) for these expressions. Of course, if I am going to do that, it would be best to create one by hand so I can see what I am actually contending with.

Matcher Abstract Syntax Tree

The AST for my matcher language would, ultimately, link in with an underlying state machine of some sort, but I won’t dig that deep right now. Nonetheless, what I ended up with, when constructing an AST is long, but relatively declarative. This means, I could, theoretically, start the entire process NOT at the language level, but instead at a place which people would more readily understand. Let’s have a look at the AST replacement for our matching behavior.
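
Below is a hand-built AST covering just the first two cases of the vector matcher; the node names are, of course, illustrative:

    var describeVectorAst = {
        node: 'match',
        cases: [
            {
                node: 'case',
                pattern: {
                    node: 'array',
                    length: 2,
                    values: [
                        { node: 'type', typeName: 'number' },
                        { node: 'literal', value: 0 }
                    ]
                }
            },
            {
                node: 'case',
                pattern: {
                    node: 'array',
                    length: 2,
                    values: [
                        { node: 'literal', value: 0 },
                        { node: 'type', typeName: 'number' }
                    ]
                }
            }
            // ...the x-y plane case and the else case continue in the same shape
        ]
    };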

It’s long, and probably overkill for the problem presented here, but it gives us a real view into the guts of the problem and a way out of the mud. This is also, unfortunately, all I could pull together in time for this post, but I feel like we covered a tremendous amount of ground. I will continue to experiment with pattern matching and, perhaps, by next time we could even have a working object model to build our tree with. Until the next post, keep coding!


Learning Javascript Differently

On Thursday and Friday I was at a convention called Agile Open San Diego. The core idea is people getting together and having two full days of water cooler discussions about agile development and leadership. It’s pretty cool and you should go to one near you. Anyway, something happened on Friday morning: I realized we need a new way of approaching language learning.

Currently there are a number of ways people can learn languages including classes, videos, code schools, code katas and more. The most important thing I have noticed is this: none of these really do a good job of building the language into your brain quickly or permanently.

I have watched people who are new at programming struggle through code katas after working through videos and online learning processes. I think it’s time for something different. Borrowing against the ideas of code cooking and the way martial arts are learned, I created the first form in a series of forms for Javascript learning.

What a Form Is

In certain martial arts, a form (much like a kata) is a scripted set of motions which help to define a dictionary of movements the practitioner will use when sparring or in a fight situation. The form is important for developing muscle memory and deepening the mental relationship with the art.

When developing a relationship with a new language, any experienced programmer picks a project which will force them to go through the motions of asking all the questions they need to ask to really understand how the new language ticks. This is the driving idea behind doing code katas. Katas force a programmer to look at the language they are working with and ask the same kinds of questions.

What if the new language is also the FIRST language?

A new programmer won’t necessarily even know the first questions to ask to get to the right questions. This is why katas sharpen experienced developers but don’t bring new developers up quickly, and it is exactly what a form helps to fix. Instead of placing a problem in front of a new developer, a form places the code in front of them and asks, “what does this do?”

Here’s a quick start video I made to give a better idea:

Creating the First Form

In the language forms design I envisioned a total of (probably) six forms. Of course, before there are six, there must be one. I wanted to emulate a system I already understood, which meant the first form should be the gateway to greater understanding. I wanted to introduce some of the most common aspects of the language and use a problem which had a lot of room to grow.

Ideally, if the language student were to work through only first form, they would have enough knowledge of the language to start solving problems. The solutions may not be elegant and they would likely not seem refined to the veteran programmer, but solutions and doing are the path to greater understanding.

With all of this in mind, I chose a problem which covered interaction with vectors. Though this sounds pretty “mathy,” anyone who has completed an intermediate course in algebra should be able to understand what is happening. Vectors offer just the right amount of leg room: the problem is understandable to the accomplished first form student while leaving plenty to grow on for the third form student.

After choosing a problem that met my needs, I started creating the code. I wanted first form to represent the way someone would actually work through the problem in an agile environment. This means that for every step the student takes, there is a test to validate their progress. It becomes far less important for them to sit and scrutinize their code to make sure they got every character right on the first pass, since the process is: read the test, write the code, run the code. If the code runs and looks like what the golden example presents, it must be right. This gives students early comfort with TDD and small-step development.

Leveling

Something I personally have a terrible time with when I am trying to teach is setting aside what I know. I want to give the student everything and have them see why I take such joy in what I do. It never works that way. Instead, I end up overloading them, and they have trouble absorbing what I am trying to share.

Lynn Langit encourages the idea of leveling in her Teaching Kids Programming process. Leveling means presenting a simple idea that does one thing; once the core idea is understood, it is enhanced with a greater concept while the original idea is reinforced. All programming builds on more foundational concepts, which means any topic can be taught through a leveling approach.

Language forms should work this way. The first form actually opens with a greeting, mirroring the kung fu tradition of performing a greeting or salute to Buddha. The greeting is, essentially, the foundational Hello World program, TDD style. We could call this a salute to the programming force, or something; I haven’t come up with a name yet, but it’s coming.

After the greeting, the very next thing first form does is present the concept of squaring a number. We don’t need to understand the entire process yet, and the full problem is largely academic, so the student can focus on growing toward solving the entire problem instead of trying to understand what it does. As we said before, katas are great for sharpening problem solving; this is all about developing a relationship with the language itself.
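To make that concrete, a minimal sketch of the squaring step, assuming a bare console.assert check rather than the form’s actual test harness, could be as small as this:

```javascript
// First the test expectation, then the code: the student reads the
// assertion, writes square, and runs it to validate their progress.
function square (x) {
    return x * x;
}

console.assert(square(3) === 9, 'square(3) should be 9');
```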

As we move through the process of solving the entire problem, the student will go through the process of small-step design, abstraction, TDD, conditionals, loops, variables, functions and so on. First form is a guided tour through the way a professional programmer works, without the fear and intimidation that comes with getting thrown in with the sharks.

Later Forms

After the first form student has successfully grown to proficiency, they will need to move to the next form. Currently, only the first form is complete, but the second and third forms will actually make subsequent passes over the same code that was covered before. This process of enhancing existing code provides direct insight into identifying patterns, refactoring and smoothing out the edges on legacy systems.

Later forms will pull in concepts like closures, object orientation, higher order functions and so on. The process of creating custom types will be brought to the fore and, by the third form, the student will start learning ways to escape common pitfalls and smells by using well known patterns. They will be able to use what they have learned to improvise on solutions and find their way out of sticky situations.

Forms beyond the third are still in the embryonic stage in my mind, so they will come as I discover how to develop them. So far, I know they will likely not cover the same ground, but will, instead, dig into deeper topics like Gang of Four patterns, algorithms and data structures and greater language mastery. Understandably, not all students will want to go this far with a single language, but ideally, this would provide a means for the dedicated language student to open up greater discovery channels they can follow on their own.

All of the forms together should create a comprehensive system for teaching a language through immersion, muscle memory, leveling and, in the end, understanding. The forms are made not to be done once, but to be repeated again and again, developing comfort with the language without forcing the student through puzzles and toy problems they may not have the answers for yet.

Check out my progress creating Javascript forms on GitHub.


Visual Studio Code Extensions: Editing the Document

I have been supporting an extension for Visual Studio Code for about a month now. In that time I have learned a lot about building extensions for an editor and about static analysis of Javascript. Today is more about the former and less about the latter. I have found that creating snippets, modifying the status bar and displaying messages is trivial, but modifying the current document is hard when you don’t know how to even get started.

The other important thing I have noted about writing extensions for VS Code is that, although the documentation exists and is a seemingly exhaustive catalog of the extension API, it is quite lacking in examples and instructions. By the end of this post I hope to demystify one of the most challenging parts of document management I have found so far.

The VSCode Module

The first thing to note is that anything you do in VS Code which interacts with the core API will require the vscode module. It might seem strange to bring this up, but it is rather less than obvious just how often you will have to interact with the vscode module.

Under the hood, the vscode module contains pretty much everything you need to know about the editor and its current state. The module also contains all of the functions, objects and prototypes you will need in order to make any headway in your code whatsoever. With that in mind, there are two ways you can get this module, or any of its data, into your current solution. The first option is to require it like any other node module:
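```javascript
// Pull the editor API into scope like any other node module.
var vscode = require('vscode');
```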

As of the latest release of VS Code, you now have to explicitly specify the vscode module in the devDependencies section of your package.json file. Below is roughly what my devDependencies object looks like (the test library entries and version numbers here are representative, not exact):
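```json
"devDependencies": {
    "chai": "^3.5.0",
    "mocha": "^2.4.5",
    "sinon": "^1.17.3",
    "vscode": "^0.11.0"
}
```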

For now, don’t sweat the other stuff listed there; it is all test library stuff, which we will look at in a future post. Anyway, that last line is my vscode requirement. By adding this dependency, vscode will be available in your development environment, which makes actually getting work done possible. To install and include vscode, copy and paste the following at the command line inside your extension project:
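```
npm install vscode --save-dev
```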

Making a Document Edit

The reason it was important to briefly cover the actual vscode module is that we are going to live on it for the rest of the post. It will be in just about every code sample from here to the end of the post.

So…

Reading the VS Code extension API, it is really, really easy to get lost when trying to push an edit into the view. There are, in fact, five different object types which must be instantiated in order and injected into one another to create a rather large, deeply nested edit object hierarchy. As I was trying to figure it out, I had to take notes and keep bookmarks so I could cross-compare objects and sort out which goes where.

I will start off by looking at the last object in the sequence, then jump to the very first objects which need to be instantiated, and work toward our final implementation. When you want to make an edit to the code in the current document, you need to create a new WorkspaceEdit which will handle all of the internal workings for actually propagating the edit. This new WorkspaceEdit object will be passed, when ready, into an applyEdit function. Here’s what the final code will look like, so it is clear what we are working toward in the end (treat the exact shape as a sketch reconstructed from the description that follows):
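```javascript
// The finished call chain: setEditFactory builds the workspace edit,
// and vsEditor.workspace.applyEdit pushes it into the editor.
function applyEdit (vsEditor, _uri, coords, content) {
    var workspaceEdit = setEditFactory(_uri, coords, content);

    return vsEditor.workspace.applyEdit(workspaceEdit);
}
```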

In this sample code, the _uri refers to the document we are interacting with, coords contains the start and end positions for our edit, and content contains the actual text content we want to put in our editor document. I feel we could almost spend an entire blog post just discussing what each of these pieces entails and how to construct each one. For now, however, let’s just assume the vsEditor is coming from outside our script and provided by the initial editor call, the coords are an object which we will dig into a little more soon, and content is just a block of text containing anything.

The Position Object

In our previous code sample, there is a function called setEditFactory. In VS Code there are two types of document edits: set and replace. So far I have only used a set edit, and it seems to work quite nicely. With that in mind, there is a reason we are using a factory function to construct our edit: a document edit contains so many moving parts that it is essential to limit exposure of the reusable pieces to the rest of the world, since they largely illuminate nothing when you are in the middle of trying to simply add some text to the document.

Let’s actually dig into the other end of our edit manufacturing process and look at the very first object which needs to be constructed in order to actually produce a change in our document: the position. Every edit must have a position specified. Without a position, the editor won’t know where to place the changes you are about to make.

In order to create a position object, you need two number values, specifically integers. I’m not going to tell you where, exactly, to get these numbers, because that is really up to the logic of your specific extension, but I will say that a position requires a line number and a character number. If you are tinkering with code in your own extension, you can actually make up any two numbers you want as long as they exist as coordinates in your document. Line 5, char 3 is a great one if it exists, so feel free to key the values in by hand to get an idea of how this works.

Anyway, once we have a line and a character number, we are ready to construct a position.
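A position factory, then, is a one-liner; this sketch assumes vsEditor is the editor instance passed in from outside, as discussed above:

```javascript
// Wrap position construction so callers never touch the API directly.
function positionFactory (line, char) {
    return new vsEditor.Position(line, char);
}
```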

That’s actually all there is to it. If you new up a Position object with a line and character number, you will have a new position to work with in your extension.

The Range Object

The next object you will need in order to make your document change is a range object. The range object, one would think, would simply take bare coordinates. Sadly, this is not the case. What a range actually takes is a start and an end position object. The range tells VS Code what lines and characters to overwrite with your new content, so it must go from an existing line and character to an existing line and character, otherwise known as a start position and an end position. Here’s how we create a range object (another sketch in the same factory style):
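```javascript
// A range is nothing more than a start position and an end position.
function rangeFactory (start, end) {
    return new vsEditor.Range(start, end);
}
```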

So far our factories are nice and clean, which makes their intent pretty obvious. This is handy, because things get strange and hard to follow quickly without some care and feeding of clean code. Anyway, our range takes two positions, so we provide them as named above: start and end.

The TextEdit Object

The TextEdit object is where things start to really come together. Now that we have a range made of two positions, we can pass our range and our text content through to create a new edit object. The edit object is one of the key pieces we need to actually perform our document change; it contains almost all of the necessary information. Let’s look at how to construct a TextEdit object (again, sketched in the same style):
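```javascript
// Bundle the range and the replacement content into a TextEdit.
function textEditFactory (range, content) {
    return new vsEditor.TextEdit(range, content);
}
```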

Keep in mind, though we have only written a few short lines of code, we have now constructed an object tree containing four nested objects. Are you still keeping up?

Building an Edit

Now that we have gotten through the nitty gritty of constructing individual objects for our tree, we are ready to actually build a full edit and pass it back to the caller. This next function will make use of our factories in order to construct an object containing all dependencies in the right nesting order.

Does anyone else feel like we are putting together a matryoshka doll?

Anyway, our next function also follows the factory pattern we have been using, so we get a clean string of function calls all the way up and down our stack which will, hopefully, keep things easy to follow. This sketch assumes coords carries start and end objects, each with line and char values:
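```javascript
// Build positions from raw coordinates, wrap them in a range, and
// produce the final TextEdit.
function editFactory (coords, content) {
    var start = positionFactory(coords.start.line, coords.start.char);
    var end = positionFactory(coords.end.line, coords.end.char);
    var range = rangeFactory(start, end);

    return textEditFactory(range, content);
}
```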

As you can see, we are assembling all of the pieces and then stacking them together to build our document edit. The fully constructed edit will contain all of the instructions VS Code needs to modify the selected document. This will be useful as we construct our next object to interact with.

Yes, there’s more.

The WorkspaceEdit Object

In order to introduce our edit into a document, we need to build a workspace edit to interact with. This workspace edit is, you guessed it, another object. A workspace edit has no required dependencies up front, so we are safe to construct it bare and interact with it later. Here’s our factory:
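```javascript
// No dependencies needed up front; construct the workspace edit bare.
function workspaceEditFactory () {
    return new vsEditor.WorkspaceEdit();
}
```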

This new workspace edit is where we do our final setup before applying our edits to the document we started with. Once we have a workspace edit, we can perform behaviors like set and replace. Here’s our last factory of the day, where we actually kick off the process. This brings us full circle, back to the edit application we looked at in the very first example. Let’s look at the code (one more sketch, pulling all of the factories together):
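```javascript
// Tie everything together: build a bare workspace edit, construct the
// TextEdit, and set it against the document uri.
function setEditFactory (_uri, coords, content) {
    var workspaceEdit = workspaceEditFactory();
    var edit = editFactory(coords, content);

    workspaceEdit.set(_uri, [edit]);

    return workspaceEdit;
}
```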

Now we can see where all of our coordinates, content and that mysterious uri went. Our setEditFactory takes all of our bits and pieces, puts them together into a single edit, applies it to our workspace edit object, and passes the whole thing back for the program to operate on.

Summary

Even after having figured this out from the VS Code documentation and implemented it in an extension, this is a lot to keep in your head. The bright side of all this work is that, done correctly, it can be wrapped up in a module and squirreled away to be used outright. The surface area of this module is really only a single function: setEditFactory. This means that once you have it running correctly, it should be simple to call a single function with the right parts and get back a fully instantiated, usable object which can be applied to the editor.

Hopefully this is useful for someone. Identifying all of this and putting it together with clear intent was a challenge with no examples to follow. If there were ever one place I would complain about VS Code, it is the documentation. I hope my post helps clear up the obscurities and makes it easier for people to dig into making their own extensions or contributing to an extension someone else has built.

Getting Started Writing Visual Studio Code Extensions: Action Handlers

I started writing a Visual Studio Code extension about two and a half weeks ago and, as is little surprise, writing an extension with new, uncharted functionality takes more knowledge than you can find in the basic tutorial for creating a hello world extension. As I learned, I thought I should write down what I was learning so other people would have a little more illumination than I had.

If you haven’t read the Hello World tutorial, you should. It has a handy step-by-step recipe for creating a baseline extension. No, really, go read it. I’ll wait.

Great, now that you have Hello World set up, we’re ready to start really building something. The first thing I ran into was that I wanted to change things a little and see if everything still worked. Text changes were obvious, so I decided to scrape something out of the editor and show that instead.

I made small changes… and broke the extension.

The first challenge I ran into is that these extensions give you almost no visibility into what you are doing. You can use the debugger, but, if you are like me, you probably have one screen to work on, so you will do a lot of flipping back and forth to spot the broken stuff. The best friend you have during this entire process is your debugger output. Log to it. A lot.

This is kind of like old school web development. If you want to see what you are doing, use console.log. If you want to see what the data you are tinkering with looks like, use console.log. When you really need to dig deep, hey, you’re already in an editor!

Anyway, like I said, I broke my extension right away. The first thing I did wrong was messing up the action name. Don’t worry about what a disposable function is for now; we can cover that next time. The important issue is that I mismatched my action name. Let’s take a look at a bit of the code in my main extension file (reconstructed here along the lines of the standard activation boilerplate):
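```javascript
var vscode = require('vscode');

function activate (context) {
    // This command string must match the declarations in package.json exactly.
    var disposable = vscode.commands.registerCommand(
        'cmstead.jsRefactor.wrapInCondition',
        function () {
            // Action behavior goes here.
        });

    context.subscriptions.push(disposable);
}

exports.activate = activate;
```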

Action handler names

Our action name tells VS Code what our action is called. The name is shared between the extension file, where we set up our action behaviors, and two locations in our package file. I may have missed it, but I didn’t see anything in the documentation about lining the name up across all of these locations. Here’s roughly what it looks like in the package file (the title value is illustrative):
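```json
{
    "activationEvents": [
        "onCommand:cmstead.jsRefactor.wrapInCondition"
    ],
    "contributes": {
        "commands": [
            {
                "command": "cmstead.jsRefactor.wrapInCondition",
                "title": "Wrap In Condition"
            }
        ]
    }
}
```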

All of these separate lines need to match, or your new action won’t work when you try to test run your extension. This is surprisingly hard to debug. There is no unit test, scenario or any other process to properly check that all of the command name strings match. This is probably the first problem you are going to run into.

If you try to run a named action that isn’t fully declared, you will get an error: “No handler found for the command: ‘cmstead.jsRefactor.wrapInCondition’. Ensure there is an activation event defined, if you are an extension.”

The takeaway of today’s post is this: if you see this handler error, check that your action is declared in your extension Javascript, in activationEvents and in commands. If any of these identifiers are missing, your action will fail. By double-checking your action declarations, you will have a smoother experience getting your extension up and running. Happy coding!
