Math for Programmers: Union and Intersection

Last week we talked about sets and how they relate to arrays. This week we will take a look at how to interact with arrays and apply two common mathematical operations on them to produce new, refined sets of data with which we can interact.

Two of the most common and well known actions we can take on sets are union and intersection. The union operation combines two sets and creates a new single set containing the elements of each of the original sets. Intersection is also a combinatorial operation, but instead of combining all elements, it simply returns a set containing the shared elements of each set.


Before we can address union and intersection, we have to deal with the state of uniqueness. Last week we looked at how an array can be converted into a set by viewing each index and value as a vector. Though this is useful for seeing the relation between mathematical sets and arrays in programming, it is not quite so useful when trying to actually accomplish set operations.

The biggest issue we encounter when looking at an array is that it is more closely related to a vector in nature and behavior. If we discard the importance of array ordering, it becomes a little more set like. Let’s take a look at what this means.
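The original listing did not survive here; a pair of arrays like the following (the first borrowed from the previous post in this series) illustrates the difference:

```javascript
// Ordered, with duplicate values -- this behaves like a vector
var numberVector = [1, 2, 3, 4, 2, 5, 1, 1, 7];

// The same values with duplicates removed -- much more set-like
var numberSet = [1, 2, 3, 4, 5, 7];
```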

Our second array closely matches our needs for a set, so it would be ideal to have a function which takes an array of values and returns a list with all duplicated values removed. We can annotate this function like this: (array) -> array

Although this function has been implemented in several libraries, it is easy enough to create that we’ll just build it here. This not only gives us insight into how a “unique” function could be built, but it also gives us a vanilla implementation of our functionality, so as we build on top of our behaviors, we know where we started and how we arrived at our current place.
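The original listing is gone, but a linear-time unique built on a helper called buildSetMap (the name referenced later in the complexity analysis) might look like this:

```javascript
// Reduce a list of values into a map keyed on those values;
// object keys are unique by definition, which does the deduplication work
function buildSetMap(values) {
    return values.reduce(function (setMap, value) {
        setMap[value] = true;
        return setMap;
    }, {});
}

// Keep only the first sighting of each value; one pass to build the map
// and one pass to filter keeps unique at O(n)
function unique(values) {
    var setMap = buildSetMap(values);

    return values.filter(function (value) {
        var isFirstSighting = setMap[value];
        setMap[value] = false;
        return isFirstSighting;
    });
}

unique([1, 2, 3, 4, 2, 5, 1, 1, 7]); // [1, 2, 3, 4, 5, 7]
```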

Now we have a clear way to take any array and create an array of unique values in linear, or O(n), time. This will become important as we move forward since we want to ensure we don’t introduce too much overhead. As we introduce new functions on top of unique, it would be easy to nest loops and create slow functions, which can be disastrous when we rely on these functions later for abstracted behavior.


To really talk about the union operation it can be quite helpful to take a look at what a union of sets might look like. In words, union is an operation which takes two sets and creates a new set which contains all members, uniquely. This means, the union of {1, 2, 3} and {2, 3, 4} would be {1, 2, 3, 4}. Let’s look at a Venn diagram to see what this means graphically.

Venn diagram of a union of sets

For small sets of values, it is pretty easy to perform a union of all values, but as the sets grow, it becomes much more difficult. Beyond this, since Javascript contains neither a unique function, i.e. the function we built above, nor a union function, we would have to build this behavior ourselves. This means we have to think like a mathematical operator to create our function. What we really need is a function which accepts two sets and maps them to a new set containing the union of all elements. Using a little bit of visual mathematics, our operation looks like the following:

Black box diagram of a union function

This diagram actually demonstrates one of the core ideas behind functional programming as well as giving us a goal to work toward. Ultimately, if we had a function called union which we could use to combine our sets in a predictable way, we, as application developers, would not need to concern ourselves with the inner workings. More importantly, if we understand, at a higher abstraction level, what union should be doing we will be able to digest, fairly immediately, what our function should take as arguments and what it will produce. Our union function can be annotated as (array, array) -> array. Let’s look at the implementation.
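The original implementation is not preserved; a sketch consistent with the description, with unique repeated so the snippet stands alone, might look like this:

```javascript
// unique as built earlier in the post, repeated here so this snippet runs standalone
function unique(values) {
    var seenMap = {};
    return values.filter(function (value) {
        var isFirstSighting = !seenMap.hasOwnProperty(value);
        seenMap[value] = true;
        return isFirstSighting;
    });
}

// (array, array) -> array
function union(listA, listB) {
    return unique(listA.concat(listB));
}

union([1, 2, 3], [2, 3, 4]); // [1, 2, 3, 4]
```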

With our unique function already constructed, this is a pretty trivial function to implement. There is, of course, an item of interest here. Union is almost done for us by the concat function. Concat makes the same assumption as our original exploration of converting an array to a set: arrays are sets of vectors, so a concatenation is an introduction of two sets of vectors into a new set, reassigning the indices in each vector to map to a new unique set.

This concatenation behavior can be quite useful, but it is not a union operation. In order to perform a proper union of the values in each array we will need to ensure all values of the returned array are actually unique. This means we need to execute a uniqueness operation on the resulting set to get our array which is similar to a set. I.e. if we have an array representing set A, [A], and an array representing set B, [B], then union([A], [B]) ~ A ⋃ B.


Much like the union operation, before we try to talk too deeply about the intersection operation, it would be helpful to get a high-level understanding of what intersection means. Intersection is an operation which takes two sets and creates a new set which contains only the shared elements of the original sets. This means the intersection of {1, 2, 3} and {3, 4, 5} is {3}. Visually, intersection looks like the following diagram.

Venn diagram of an intersection of sets

The darker region represents the intersection of sets A and B, which, from our first example, is a set containing only the value 3, or {3}. Much like the union operation, we can create a function intersect which takes two sets and returns a new set containing the intersection. We can diagram this in the same way we did with union.

A black box diagram of an intersection function

This diagram shows us the close relation between intersect and union functions. The annotation for our intersection function is, actually, identical to our union function: (array, array) -> array. This means they share the same contract and could be used on the same sets to produce a result which, incidentally, will match the contract for any function which takes a set of values as a list. Let’s have a look at what the implementation of intersect looks like in Javascript.
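The original listing is missing; a sketch which matches the complexity analysis below, using unique, reduce and buildSetMap, might look like this:

```javascript
// Helpers from earlier in the post, repeated so this snippet runs standalone
function buildSetMap(values) {
    return values.reduce(function (setMap, value) {
        setMap[value] = true;
        return setMap;
    }, {});
}

function unique(values) {
    var seenMap = {};
    return values.filter(function (value) {
        var isFirstSighting = !seenMap.hasOwnProperty(value);
        seenMap[value] = true;
        return isFirstSighting;
    });
}

// (array, array) -> array
function intersect(listA, listB) {
    var setMap = buildSetMap(listA);

    // Reduce the unique values of listB down to only those shared with listA
    return unique(listB).reduce(function (shared, value) {
        if (setMap.hasOwnProperty(value)) {
            shared.push(value);
        }
        return shared;
    }, []);
}

intersect([1, 2, 3], [3, 4, 5]); // [3]
```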

As expected, the difference between union and intersection is in the details. Where union performed the combination before we performed a unique operation, an intersection can only be taken if all of the values are already unique. This means intersections are slightly more computationally complex than a union; however, in the large, intersection is still a linear operation. We know this by performing the following analysis:

  1. unique is O(n) as defined before
  2. reduce is O(n) by the nature of reduction of a list
  3. buildSetMap is also O(n) as it was the defining characteristic of unique.
  4. intersect is the sum of three O(n) operations, or intersection performs 3n operations, making it, also, O(n)

This algorithmic analysis is helpful in understanding the general characteristic of a function and how it will impact execution time in a larger system. Since union and intersect are both O(n) functions, we can easily use them in a chained way, resulting in a new O(n) function. What this also tells us is, union and intersection are sufficiently performant for small sets of data and acceptable for medium sets. If our sets get large enough we might have to start looking at ways to reduce the number of computations needed to complete the process, but that’s another blog post.


We can actually use our union and intersect functions together to quickly perform complex mathematical behavior on even non-optimal arrays. Since these functions perform normalization on our sets of data, we can use rather poorly defined arrays and still get meaningful results. Let’s take a quick look at a small example where we set A, B and C as poorly defined arrays and then perform A ⋃ B ⋂ C.
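The original example code is not preserved; treating the expression left to right as a chain of calls, and with the functions from earlier repeated so the snippet stands alone, a sketch might look like this:

```javascript
function unique(values) {
    var seenMap = {};
    return values.filter(function (value) {
        var isFirstSighting = !seenMap.hasOwnProperty(value);
        seenMap[value] = true;
        return isFirstSighting;
    });
}

function union(listA, listB) {
    return unique(listA.concat(listB));
}

function intersect(listA, listB) {
    var setMap = {};
    unique(listA).forEach(function (value) {
        setMap[value] = true;
    });

    return unique(listB).filter(function (value) {
        return setMap.hasOwnProperty(value);
    });
}

// Poorly defined arrays: unordered, with duplicate values
var arrayA = [1, 1, 2, 3, 5];
var arrayB = [2, 4, 4, 6];
var arrayC = [7, 1, 2, 2, 6];

// Reading left to right: (A ∪ B) ∩ C
intersect(union(arrayA, arrayB), arrayC); // [1, 2, 6]
```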

In this post we discussed performing union and intersection operations on arrays of data, as well as implementations for each and their performance characteristics. By understanding these core ideas, it becomes easier to understand how data can be quickly and descriptively modified programmatically. This core understanding is useful both for working with arrays inside of your application as well as better understanding the way data is interrelated in database considerations. Now, go munge data and make it work better for you!

Math for Programmers: Arrays, Objects and Sets

I’ve had conversations with programmers of varied backgrounds and experience levels. One thing which has come up in several conversations is math, and how much math a programmer needs to be effective. I have a formal background in math, so I tend to lean heavily on the side of more math is better. Other programmers argue that only select topics in math are important to make a professional programmer effective.

Arguably, for day to day programming needs, it is unlikely you will need to demonstrate a strong understanding of the proof of indefinite integrals over n-space; however, there are topics which seem to come up often and would be useful for a programmer to understand. I decided that the first in the series of math for programmers should cover something that every programmer has to think about at one time or another: sets.

If you are working with data coming from persistent storage like a database, sets should be your bread and butter. Most of your time will be spent thinking about how sets work together and how to combine them to capture a snapshot of the data you need. On the other hand, if you are working with data at another layer somewhere above data access, your interactions with sets are going to be a little more subtle. Arrays and maps are sets of data with added restrictions or rules, but they are, at their core, still sets in a very real way.

If we look at an array of integers, it’s not immediately obvious you are working with a set. You could, in theory, have a duplicate number in an array. Arrays also have the characteristic of being ordered. This means that each element will come out of an array in the same order each time. Let’s take a look at an array of integers.
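The original listing is missing, but the set notation later in the post lets us reconstruct it:

```javascript
// Nine integers, in order, with several duplicate values
var integers = [1, 2, 3, 4, 2, 5, 1, 1, 7];
```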

Honestly, this array is most reminiscent of a vector in mathematics. This could easily describe a point in a nine-dimensional space, which is kind of hard to get a visual on. Nevertheless, we can do just a little bit of reorganization and turn this into a set which adheres to all the normal rules a given mathematical set has.

Rules of a Set

All sets are unordered. This means two sets are equal if both sets contain the same members, regardless of the order. For instance, {1, 2, 3} and {3, 2, 1} are the same set since they each contain the members 1, 2 and 3 and ONLY those members. Regardless of the order chosen to represent the elements, the set is guaranteed to be unique given the elements it contains alone.

Sets may not contain duplicate values. Each value in a set is uniquely represented, so {1, 1, 2, 3} would be correctly represented as {1, 2, 3}. This uniqueness makes sets well defined. Well defined simply means our sets are unambiguous, or any set can be constructed to be clearly defined and distinctly represented.

Sets may be constructed with individual values, like our set of integers, or with more complex structures like a set of sets or a set of vectors. This means {{1}, {2}, {3}} is not the same as {1, 2, 3} since the first set is a set of sets containing 1, 2 and 3 respectively, while the second set is a set containing the numbers 1, 2 and 3. Understandably, this is kind of abstract, so think about an array of arrays, versus an array containing integers. The relation is quite close.

Thinking of Arrays as Sets

Let’s take another look at our original array. If we were to take our array and interpolate it into an object literal instead we can start to see the relation between a set and an array. Let’s see what our object literal would look like.
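The original listing is missing; interpolating the array above into an object literal, keyed on index, gives us:

```javascript
// Each array index becomes a key; each array value becomes that key's value
var integersObject = {
    0: 1,
    1: 2,
    2: 3,
    3: 4,
    4: 2,
    5: 5,
    6: 1,
    7: 1,
    8: 7
};
```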

Now we can see how each element in our array maintains uniqueness. Our object has a key, which represents the array index, and a value, which is the original value in the array. This data formation makes creating a set trivial. We can describe each value in our array as a vector containing two values, <index, value>. By using vector notation, we can make our array conform exactly to the definition of a set. Our array can be rewritten as a set this way, {<0, 1>, <1, 2>, <2, 3>, <3, 4>, <4, 2>, <5, 5>, <6, 1>, <7, 1>, <8, 7>}.

This vector notation helps us to tie together two separate data structures into a single, unified mathematical form. By understanding this form, we can see how values could be easily stored and organized in memory to either be sequential, or fragmented throughout memory with pointers stored in a sequential data structure for faster access.

Being able to move between the array or object formations we use day to day and set notation gives us a lot of power. We can abstract away the physical computer and think less about language constructs, solving data set problems in a language-agnostic way first and then weaving the solution back into our code. Ideally, this simplifies the logic we use to organize our thoughts in a program.

This is a first slice cutting across programming and its relation to math. There are techniques which can be used to dissect problems and solve them directly without code as well as different programming paradigms which abstract away the nuts and bolts of loops, conditions and other detail-related behaviors in favor of a more general approach, simply declaring the operation to be done and then working with the result. We will explore these techniques and the math related to them in upcoming posts.

Getting Started Writing Visual Studio Code Extensions: Action Handlers

I started writing a Visual Studio Code extension about two and a half weeks ago and, as is little surprise, writing an extension with new, uncharted functionality takes more knowledge than you can find in the basic tutorial for creating a hello world extension. As I learned, I thought I should write down what I learned so other people would have a little more illumination than I had.

If you haven’t read the Hello World tutorial, you should. It has a handy step-by-step recipe for creating a baseline extension. No, really, go read it. I’ll wait.

Great, now that you have Hello World set up, we’re ready to start really building something. The first thing I ran into was that I wanted to change things a little and see if everything still worked. Text changes were obvious, so I decided I wanted to scrape something out of the editor and show that instead.

I made small changes… and broke the extension.

The first challenge I ran into is that these extensions give you almost no visibility into what you are doing. You can use the debugger, but, if you are like me, you probably have one screen to work on, so you will do a lot of flipping back and forth to spot the broken stuff. The best friend you have during this entire process is your debugger output. Log to it. A lot.

This is kind of like old school web development. If you want to see what you are doing, use console log. If you want to see what the data looks like you are tinkering with, use console log. When you really need to dig deep, hey, you’re already in an editor!

Anyway, like I said, I broke my extension right away. The first thing I did wrong was I messed up the action name. Don’t worry about what a disposable function is for now. We can cover that next time. The important issue is I mismatched my action name. Let’s take a look at a bit of the code in my main extension file.
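The original screenshot of the extension file is gone; a hedged sketch of what command registration looks like follows. The real `vscode` module only exists inside the editor host, so this sketch stubs just enough of the API to show the shape; in an actual extension you would simply `var vscode = require('vscode');`.

```javascript
// Tiny stand-in for the vscode API so this sketch runs anywhere
var registeredCommands = {};
var vscode = {
    commands: {
        registerCommand: function (commandName, handler) {
            registeredCommands[commandName] = handler;
            return { dispose: function () { delete registeredCommands[commandName]; } };
        }
    }
};

function activate(context) {
    // This string must match the command name declared in package.json exactly
    var disposable = vscode.commands.registerCommand(
        'cmstead.jsRefactor.wrapInCondition',
        function () {
            // gather the selected editor text and wrap it in a condition
        });

    context.subscriptions.push(disposable);
}

activate({ subscriptions: [] });
```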

Action handler names

Our action name tells VS Code what our action is named. Our action name is shared between the extension file where we set up our action behaviors and two locations in our package file. I may have missed it, but I didn’t see anything in the documentation regarding lining up the name in several locations. Here’s what it looks like in the package file:
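The original package file screenshot is gone; a hedged fragment showing the two locations where the same command name appears (the name is taken from the error message quoted below) might look like this:

```json
{
    "activationEvents": [
        "onCommand:cmstead.jsRefactor.wrapInCondition"
    ],
    "contributes": {
        "commands": [
            {
                "command": "cmstead.jsRefactor.wrapInCondition",
                "title": "Wrap In Condition"
            }
        ]
    }
}
```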

All of these separate lines need to match or when you try to test run your extension, your new action won’t work. This is surprisingly hard to debug. There is no unit test, scenario or any other process to properly check that all of the command name strings properly match. This is probably the first problem you are going to run into.

If you try to run a named action that isn’t fully declared, you will get an error. “No handler found for the command: ‘cmstead.jsRefactor.wrapInCondition’. Ensure there is an activation event defined, if you are an extension.”

The takeaway of today’s post is this: if you see this handler error, check that your action is declared in your extension Javascript, in activationEvents and in commands. If any of these identifiers are missing, your action will fail. By double checking your action declarations, you will have a smoother experience getting your extension up and running. Happy coding!

Javascript Refactoring and Visual Studio Code

About a month ago, I started working at Hunter. I have been pretty aware of refactoring, design patterns, good practices and common practices for a while. I don’t always agree with what everyone else says or does, but I typically have a good reason for doing things the way I do. Whatever I do, though, Hunter does more so. I am a notorious function extractor and deduplicator, but never more so than in this last month.

C# has a bunch of really cool tools and toys, the likes of which Javascript developers have never known, until now. Over the last couple of weeks, I have been working on an extension for Visual Studio Code to help even the odds. I’m no full-time tool builder, so I won’t be matching the quality of JetBrains or the like, but I’m giving it my best go.

In future posts I will start covering some of the discoveries I have made while building this plugin, but this week is all showboat. I haven’t gotten my extension released on the Visual Studio Marketplace… yet. While that gets finished up, I do have everything together in a GitHub repository.

Currently, I think the issues list is actually longer than the list of things JS Refactor (my extension) does, but what it does is kind of nifty. How many times do you discover you actually want to pull some code up and out of a function, into its own function? Probably a lot. The worst part of it all is all the goofing around you have to do with going to the top of the code, adding a function declaration, going to the bottom of the code and closing the function definition, making sure you matched all your braces and finally, when everything looks just right, you finally indent everything to the right place…


JS Refactor has an action called “wrap in function.” Wrap in function will ask for a function name, wrap up your code, and indent everything using your preferred editor settings.

I KNOW, RIGHT? It’s awesome!

Seriously, though, wrap in function is nice except when it gets the job wrong. Sorry, it’s not perfect, yet, but I am working on it. Along with that, there are also a wrap in anonymous function and extract to function actions. These are a first go, so they still need some love, but they make things faster all the same.

Another part of this plugin, which generally works more reliably than the actions, is the snippets. Fortunately, the snippets rely on code written by the good folks on the Visual Studio Code team. The snippet functionality really shines through when you start writing lots of code. It’s like having a miniature code generator living in your editor.

Currently I have a handful of snippets available for use (strict declarations, functions, and a couple of other things) and they work pretty darn well. I have actually noticed a significant increase in the speed of code generation, which gives me more time to spend just thinking about the problem I am most interested in solving.

I am not going to give a rundown of all the features of JS Refactor, instead I would encourage you to go play with it. Take a look at the code that drives the whole thing. Give me feedback. It’s part solving a problem and part learning how to even do the code analysis to make things work. I won’t promise to solve all of the issues quickly, but I will sure try to solve them eventually.

So, until next week, please take a look at JS Refactor and use it on your code. I think you’ll like it and I hope it will make your life easier. Next week we will start taking a look at building VS Code extensions and some of the stuff that I wish someone had put in a blog to make the discovery process easier.


Anonymous Functions: Extract and Name

It’s really great to see functional patterns become more accepted since they add a lot of really powerful tools to any programmer’s toolbox. Unfortunately, because functional programming was relegated primarily to the academic world for many years, there aren’t as many professional programmers who have developed a strong feel for good patterns and share them with more junior programmers. This is not to say there are none, but it is important to note that most programmers think of functional programming and say “it has map, filter and reduce; it’s functional.”

Though having those three higher-order functions does provide a functional flavor, it is more important that there are higher-order functions at all. With higher-order functions comes the use of anonymous functions. Anonymous functions (also known as lambda functions) provide a great facility for expressing singleton behavior inline. This kind of expressiveness is great when the function is small and does something unexciting, like basic arithmetic or testing with a predicate expression. The problem is anonymous functions introduce cognitive load very quickly, which makes them a liability when code gets long or complex.

Today I’d like to take a look at a common use of anonymous functions and how they can cause harm when used incorrectly. There are many times that anonymous functions are assigned directly to variables, which actually introduces one of the same issues we are going to deal with today, but I am not going to linger on that topic. Please consider this a more robust example of why even assigning anonymous functions to variables is dangerous.

Jumbled Anonymous Functions – Our First Contestant

In Javascript, people use promises; it’s a fact of life. Kris Kowal’s Q library is a common library to see used in a variety of codebases and it works pretty well. Now, when someone writes an async function, it’s common to return the promise so it can be “then’ed” against with appropriate behavior. The then function takes two arguments, a resolve state function and a reject state function. These basically translate into a success and an error state. I’ve created a common promise scenario so we have something to refer to.
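The original scenario listing is not preserved; a hedged reconstruction follows. The names getUserRecord and displayUser are assumptions, and native promises stand in for Q so the snippet runs anywhere (Q's then behaves the same way).

```javascript
function getUserRecord(userId) {
    // pretend this performs an async lookup
    return Promise.resolve({ profile: { name: 'Pat', email: 'pat@example.com' } });
}

function displayUser(viewModel) {
    console.log(viewModel.name + ' <' + viewModel.email + '>');
}

getUserRecord(42)
    .then(function (data) {
        return data.profile;
    }, function (error) {
        console.log('An error occurred: ' + error.message);
    })
    .then(function (profile) {
        displayUser({
            name: profile.name,
            email: profile.email
        });
    }, function (error) {
        console.log('An error occurred: ' + error.message);
    });
```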

Extract Method

The very first thing I see here that is a problem is, we have two functions logging an error. This behavior is not DRY, which is a code smell and violates a commonly held best practice. There is a known refactoring for this kind of redundancy called “extract method,” or “extract function.” Technically we already have a function in place, so we can simply lift it and name it. This will reduce our footprint and make this code cleaner already. Let’s see what this would look like with our logging behavior extracted.
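Applying the extraction to the reconstructed scenario (again, getUserRecord and displayUser are assumed names), we get something like this:

```javascript
function getUserRecord(userId) {
    return Promise.resolve({ profile: { name: 'Pat', email: 'pat@example.com' } });
}

function displayUser(viewModel) {
    console.log(viewModel.name + ' <' + viewModel.email + '>');
}

// The duplicated anonymous error handlers lifted out and named
function logError(error) {
    console.log('An error occurred: ' + error.message);
}

getUserRecord(42)
    .then(function (data) {
        return data.profile;
    }, logError)
    .then(function (profile) {
        displayUser({
            name: profile.name,
            email: profile.email
        });
    }, logError);
```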

With this simple extraction, we now know more about what our function does and our code has become more declarative. Although logError is a one-line function, the fact that it does exactly one thing makes it both easy to reason about and easy to test. We can inject a fake logger and capture the logging side effect, which gives us direct insight into what it does. Another benefit we get is that we can hoist this function further if need be, so we can reuse it across different modules or files.

Debugging Problems

Now we get to the real nitty gritty. We have two anonymous functions which do not explicitly tell us what they do. Instead, they just contain a bunch of code which performs references into an object. We run up against two different issues because of this. First, the lack of declarative code means the next person who looks at this, which might be you, will have to sit and stare at it to understand what is happening.

Another, bigger issue than immediate comprehension is debugging. Suppose we take this file and concatenate it with all of the other files in our project and then uglify the whole thing and deploy it out for use in someone’s browser. All of our code now lives on a single line and may not even have meaningful variable names anymore. Now, suppose one of the data objects comes back null. Our debugging error will contain something like “error at line 1:89726348976 cannot treat null as an object.”

This is bad, bad news. Now we have an error which we can’t easily identify or triage. One of the calls we are making no longer does what we think it does and it’s causing our code to break… somewhere. Whoops! We can actually use the same pattern we used for our error logging to extract our methods and make sense of the madness. Let’s take a look at what our refactoring would look like.
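The refactored listing is missing; carried through on the reconstructed scenario, with pickProfile and displayProfile as assumed names for the lifted functions, it might look like this:

```javascript
function getUserRecord(userId) {
    return Promise.resolve({ profile: { name: 'Pat', email: 'pat@example.com' } });
}

function displayUser(viewModel) {
    console.log(viewModel.name + ' <' + viewModel.email + '>');
}

function logError(error) {
    console.log('An error occurred: ' + error.message);
}

// Each step in the promise chain is now a named, testable function
function pickProfile(data) {
    return data.profile;
}

function displayProfile(profile) {
    displayUser({
        name: profile.name,
        email: profile.email
    });
}

getUserRecord(42)
    .then(pickProfile, logError)
    .then(displayProfile, logError);
```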

Now that we have lifted our last two functions out of our promise chain, everything makes a little more sense. Each of our behaviors is easy to reason about, we can test each function independently and all of our functions have a unique identifier in memory which saves us from the insidious debugger issue which can cost time and money.

There are other places we could go from here with our code to make it more fault tolerant, but that’s outside of the scope of this article. Instead, when you look at your code, see if you can easily understand what is going on. Look at it like you’ve never seen it before. How many anonymous functions are you using? How many different steps are crammed into a single function?

When you see this kind of muddy programming, think back on our reduction to simpler functions, avoid complex anonymous functions and think “extract and name.”


Stupid Javascript Object Tricks

I usually write something meaty for my posts, but sometimes it’s worthwhile to just see some of the strange stuff you can do with a language because it’s informative. I remember one of the most enlightening things I experienced while I was working on my degree was taking a class on Assembly. I don’t use Assembly at all in my day to day, but there was a lot of good knowledge I gained just from taking the class.

Probably the most profound item I experienced in that class was creating a struct in C and watching it take up real memory in the debugger. It’s one thing to say “such and such takes a double word” but to actually see it happen in real time is something completely different.

Today we are going to play with a feature of Javascript that is earmarked for deprecation in ES-next; however, it is so valuable for gaining greater insight into how the language works that I hope it’s never actually removed. The feature I am talking about is hand-instantiation of base objects.

Before we dive in, let’s back up a little and take a look at some code which should be familiar for people who have written any OO Javascript. This is just a basic object setup, so don’t get too excited.
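The original listing is not preserved; a garden-variety OO setup like the following (Greeter is an assumed example name) is the kind of thing described:

```javascript
// A typical constructor-plus-prototype object setup
function Greeter(name) {
    this.name = name;
}

Greeter.prototype.greet = function () {
    return 'Hello, ' + this.name + '!';
};

var greeter = new Greeter('Chris');
greeter.greet(); // 'Hello, Chris!'
```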

Tell Jim, I’m a programmer, not a magician.

Anyway, this is essentially intro to Javascript object creation for OO programming. The interesting thing is, we can actually look up the chain and see the same kind of code repeated at the language definition level. Let’s dig in and have a look at the Object object. (Yes, I just said that.)

The reason this is interesting is this, we can actually do some strange stuff with our base object. Normally if we wanted an object, we could just create an object literal. There are actually two other ways we can create an object as well. Behold!
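The missing listing would have shown something along these lines:

```javascript
var literalObject = {};               // the everyday object literal
var constructedObject = new Object(); // instantiating the base object directly
var calledObject = Object();          // calling Object as a plain function

// All three produce an empty object
```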

Okay, here’s where it starts to get weird. You can do this, not only, with Object, but also with Function. Let’s create a Function without using the function keyword. Like Samuel L. Jackson said in Jurassic Park, “hold on to your butts.”
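The missing listing would have shown functions built with the Function constructor, something like:

```javascript
// Functions created without the function keyword; the bodies are
// passed in as strings and evaluated in the global scope
var add = new Function('a', 'b', 'return a + b;');
var double = new Function('x', 'return x * 2;');

add(5, 6);  // 11
double(21); // 42
```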

These are actually just creating anonymous functions with a global scope, but the fact that you can do this at all is kind of amazing. It gets even weirder, though. Let’s actually dig into the inheritance hierarchy of the Javascript language a little bit and see what lives underneath it all.
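The missing listing introduced a simple identity function and checked its lineage, something like:

```javascript
function identity(value) {
    return value;
}

identity instanceof Function; // true
```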

So far, so good. We know that identity is a simple function and that any function should be an instance of the parent object Function. This all strings together nicely. It also means we get all of the standard function-related behaviors like call and apply as we would expect. This makes things reasonably predictable and sane. If we dig a little deeper, though, we discover something surprising.
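The surprising discovery the missing listing showed is this:

```javascript
Function instanceof Object; // true -- Function itself inherits from Object
Object.getPrototypeOf(Function.prototype) === Object.prototype; // true
```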

Translating from abstract weirdness to English, what this means is Function inherits from Object. More to the point, functions are not only data, they are actually objects!

Let that soak for a moment.

This helps the idea that we can attach properties to functions make a lot more sense. In fact, if numbers and strings weren’t automatically evaluated and converted to their rough primitive equivalents, they would actually be Object instances too. A good way to see this is to open a REPL and try the following:
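The missing REPL session would have looked something like this:

```javascript
var wrappedNumber = new Number(5);
var wrappedString = new String('hello');

wrappedNumber instanceof Object; // true
wrappedString instanceof Object; // true
wrappedNumber + 3;               // 8 -- coerces right back to a primitive
```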

All of a sudden everything in Javascript wraps up a lot more nicely. Literally any value we deal with in Javascript originates from an object which inherits from Object. This means we deal in objects everywhere. This, in turn, means creating new types is as straightforward as generating new objects, which is really what objects are largely about in Javascript anyway.

Let’s play one last game. Let’s create a function which returns a base value and attaches an error state if something goes wrong, otherwise the error is null. This kind of behavior is something that can be performed in Go. Given the power we get with the Javascript Object hierarchy, we can really do some amazing things.
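The original listing is missing; a sketch of the Go-style value-plus-error idea, with divideSafely as an assumed example name, might look like this:

```javascript
// Return a usable value with an error property attached; this only works
// because a new Number(...) is a real object that can carry properties
function divideSafely(numerator, denominator) {
    var isValid = denominator !== 0;
    var result = new Number(isValid ? numerator / denominator : 0);

    result.error = isValid ? null : 'Cannot divide by zero';

    return result;
}

var goodResult = divideSafely(10, 2);
var badResult = divideSafely(10, 0);

goodResult + 0;   // 5 -- behaves like a plain number in arithmetic
goodResult.error; // null
badResult.error;  // 'Cannot divide by zero'
```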

This kind of programming wanders dangerously close to the meta-programming world, so we should probably stop before we spin off into madness. At the end of the day, Javascript has some really amazing, powerful features which are not generally publicized, but can provide for as rich and powerful a programming experience as you might ever want. The next time you wonder if you can do something, open up a REPL and try it out. The worst that will happen is your idea doesn’t work. The best is you will discover a new, relatively unexplored corner of the language. What’s to lose?


Composition and Compose

A while back we discussed composing functions together to blend behaviors and extend functions to solve more complex problems. That discussion was all about composing two specific functions together. In functional programming, composing multiple functions together to produce an entirely new function is part and parcel of everyday work.

The Lodash Way

If we take a moment and look at libraries like Lodash and Underscore, they handle function composition as function chaining like the following.
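The original listing is not preserved, and Lodash itself is not available here, so this sketch includes a tiny stand-in wrapper to show the chaining shape; the real libraries manage the wrapped state under the covers in much the same way:

```javascript
// Minimal stand-in for Lodash/Underscore-style chaining
function chain(values) {
    return {
        map: function (fn) { return chain(values.map(fn)); },
        filter: function (fn) { return chain(values.filter(fn)); },
        value: function () { return values; }
    };
}

chain([1, 2, 3, 4, 5])
    .map(function (value) { return value * 2; })
    .filter(function (value) { return value > 4; })
    .value(); // [6, 8, 10]
```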

Naive Generic Composition

This is great except that, if we want to drop in our own functions, we have to either use tap, which is counterintuitive, or devise our own strategy for composing multiple functions together. First, an aspect we would do well to avoid is the state management which happens under the covers. Second, an aspect we would like to favor is the ability to create our own transformation behaviors and chain them together in a clean, sane way. Let’s have a look at a simple, rather naive implementation of a generic composition function.
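The original listing is gone; a naive two-function composer consistent with the surrounding description might look like this:

```javascript
// simpleCompose(fnA, fnB) produces a function computing fnA(fnB(...args))
function simpleCompose(fnA, fnB) {
    return function () {
        var args = Array.prototype.slice.call(arguments);
        return fnA(fnB.apply(null, args));
    };
}

function increment(x) { return x + 1; }
function double(x) { return x * 2; }

var incrementThenDouble = simpleCompose(double, increment);
incrementThenDouble(5); // 12
```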

So long as two functions are provided to simpleCompose, we will get a new, composite function. We could actually create a new function which would perform the equivalent of x * (y + z) with an add and a multiply function. This may seem like the long way around, but with more complex logic, composition actually simplifies the flow. Let’s take a look at this simple example in code.
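In code, assuming a left-to-right simpleCompose as described, the arithmetic example might be sketched like this:

```javascript
// simpleCompose(f, g)(x) === g(f(x))
function simpleCompose(f, g) {
    return function () {
        return g(f.apply(null, arguments));
    };
}

function add(a, b) { return a + b; }

// x * (y + z): add the inner values first, then multiply the sum by x
function multiplySum(x, y, z) {
    var multiplyByX = function (value) { return value * x; };
    return simpleCompose(add, multiplyByX)(y, z);
}

multiplySum(2, 3, 4); // 2 * (3 + 4) === 14
```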

Although this kind of composition is powerful and allows for a surprising amount of flexibility, it’s rather limiting to only be able to compose two functions at a time. It would be much better if we could compose multiple functions together to create a new function on the fly. Let’s take a look at an iterative approach to composing functions together.

Iterative Composition

Let’s take our original compose function and turn it up a notch. Instead of letting it be the final implementation we can use it as the foundation for a more powerful approach to composition. We still want our compose function to accept two arguments and compose them, but we want to take two or more and compose them all. We will need to make use of the arguments object available in a function, as well as the slice function from Array.prototype. Anyway, less talk, more code.
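One way to sketch this, applying the functions left to right:

```javascript
// Composes two or more functions; the arguments object captures every
// function passed in, beyond the two declared parameters.
function iteratingCompose(f, g) {
    var fns = Array.prototype.slice.call(arguments, 0);

    return function () {
        var result = fns[0].apply(null, Array.prototype.slice.call(arguments, 0));

        // Feed each intermediate result into the next function
        fns.slice(1).forEach(function (fn) {
            result = fn(result);
        });

        return result;
    };
}
```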

This is a little more code, but it’s a lot more power. If we take iteratingCompose.length we get 2, but optionally, we can pass in as many functions as we want and it will compose them all! To get a perspective on the kind of power we are working with, let’s make some new functions and compose them all.
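For instance, a sketch along these lines (with iteratingCompose restated so the example stands on its own):

```javascript
function iteratingCompose(f, g) {
    var fns = Array.prototype.slice.call(arguments, 0);

    return function () {
        var result = fns[0].apply(null, Array.prototype.slice.call(arguments, 0));
        fns.slice(1).forEach(function (fn) {
            result = fn(result);
        });
        return result;
    };
}

function increment(x) { return x + 1; }
function double(x) { return x * 2; }
function square(x) { return x * x; }

var doTheMath = iteratingCompose(increment, double, square);

doTheMath(2); // square(double(increment(2))) === 36
doTheMath(3); // 64
```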

This would probably be more impressive if we did something more than just a little arithmetic. Let’s do something that takes a little more heavy lifting and see what we can really get out of our compose function.
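As a sketch, consider a small text-processing pipeline (the step names here are illustrative, not from the original post):

```javascript
function iteratingCompose(f, g) {
    var fns = Array.prototype.slice.call(arguments, 0);

    return function () {
        var result = fns[0].apply(null, Array.prototype.slice.call(arguments, 0));
        fns.slice(1).forEach(function (fn) {
            result = fn(result);
        });
        return result;
    };
}

function toWords(sentence) { return sentence.split(/\s+/); }
function dropShortWords(words) {
    return words.filter(function (word) { return word.length > 3; });
}
function normalizeCase(words) {
    return words.map(function (word) { return word.toLowerCase(); });
}

// Small, understandable steps combine into one meaningful operation
var importantWords = iteratingCompose(toWords, dropShortWords, normalizeCase);

importantWords('The Quick Brown Fox Jumps'); // ['quick', 'brown', 'jumps']
```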

Clearly this problem is more meaningful and substantial than simply performing a sequence of arithmetic operations. By using composition for function chaining, we can define a simple set of functions to perform small steps toward our goal. Each of the novel functions are simple and easy to understand, yet we can combine them to perform a complex operation.

Reducing Composition

Now that we’ve seen the power that compose brings to our programming life, let’s look at ways we can make the current implementation better. Instead of using a looping structure like forEach and maintaining state outside of the composing operation, let’s use reduce instead. Since our function arguments are being converted into an array, this is a simple refactoring.
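A sketch of that refactoring, with the arguments-to-array conversion pulled out as a shared behavior:

```javascript
// Shared behavior, abstracted: convert an arguments object to an array
function argumentsToArray(args) {
    return Array.prototype.slice.call(args, 0);
}

// reduce carries the intermediate result, so no state lives outside the loop
function reducingCompose(f, g) {
    var fns = argumentsToArray(arguments);

    return function () {
        return fns.slice(1).reduce(function (result, fn) {
            return fn(result);
        }, fns[0].apply(null, argumentsToArray(arguments)));
    };
}
```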

This refactoring tightened up the whole definition pretty nicely. We identified a shared behavior, converting arguments to an array, and abstracted it. Then we replaced our forEach loop with a reduction, which helps us remove the burden of tracking state at each iteration. The only problem we could run into now is if one of the arguments provided is not a function.

The Final Iteration

There are two different approaches which could be taken here. First, we can stop here and say our reducingCompose function is all we want for our final compose function. This means, if someone mistakenly passes an argument which isn’t a function, they will get an error when they try to execute their newly created function.

Although getting an error is good, getting it early is better. What if we were to throw an error while compose is constructing the new function and alert our user, immediately, that they did something wrong? Arguably this is more programmatically interesting and I think it’s the right thing to do. Let’s hop to it.
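One way to sketch this final version:

```javascript
function argumentsToArray(args) {
    return Array.prototype.slice.call(args, 0);
}

function compose(f, g) {
    var fns = argumentsToArray(arguments);

    // Fail fast: complain while the composite function is being
    // constructed, not later when the broken function is finally called
    fns.forEach(function (fn) {
        if (typeof fn !== 'function') {
            throw new TypeError('compose requires every argument to be a function');
        }
    });

    return function () {
        return fns.slice(1).reduce(function (result, fn) {
            return fn(result);
        }, fns[0].apply(null, argumentsToArray(arguments)));
    };
}
```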

Now we have constructed a compose function which gives us the kind of power we would want from function composition, but with the safety of a well-checked library. Through writing this generic compose function, we now have a new tool which can be pulled from our toolbox when we want all the power with none of the bloat. I compose my functions, do you?

Bind All The Things

In the time I have written and mentored with Javascript, there is a single core concept which seems to give people a significant amount of trouble: bind. To the uninitiated, bind is a puzzle, wrapped in a mystery wrapped in an enigma. If you are comfortable with functional programming, there are parts of bind which make no sense and, at the same time, if your experience is rooted in a strict OO language, other aspects will be less than sensible. I’d like to look at bind holistically in the hope that all will become clear.

Bind for Object Orientation

If we look at the object oriented nature of Javascript, we come quickly to prototypes. Let’s create a simple object we can experiment with using a prototypal definition and behavior composition to create a rich greeting API. (Yes, this is hello world, no I’m not kidding.)
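A sketch of such a greeter; the method names greet and sayHello are the ones the discussion below relies on, and the exact bodies are assumptions:

```javascript
function Greeter() {}

Greeter.prototype = {
    // Generic greeting: behavior shared by the more specific methods
    greet: function (greeting, name) {
        return greeting + ', ' + name + '!';
    },
    // Composed behavior: sayHello leans on greet via its context
    sayHello: function (name) {
        return this.greet('Hello', name);
    }
};
```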

Our greeter object gives us the facility to say hello and to generate a generic greeting. Let’s have a look at how we do this:
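Something like this, with the greeter definition restated so the example runs on its own (method names are assumptions drawn from the surrounding discussion):

```javascript
function Greeter() {}

Greeter.prototype = {
    greet: function (greeting, name) { return greeting + ', ' + name + '!'; },
    sayHello: function (name) { return this.greet('Hello', name); }
};

var myGreeter = new Greeter();

myGreeter.sayHello('Chris');       // 'Hello, Chris!'
myGreeter.greet('Howdy', 'Chris'); // 'Howdy, Chris!'
```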

Something interesting happens in Javascript when we start capturing pointers to functions. In a purely functional programming language, there is no object or class system to provide function execution context. In a purely object oriented language, functions are second class, so they cannot be separated from their context. Javascript is the only language I am aware of where functions and their execution context can be separated. This context separation is precisely the kind of behavior which introduces the “this” issues people are so keen to bring up. Let’s have a look at what happens when we separate a function from its object.
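Using a minimal greeter sketch, the separation plays out like this:

```javascript
function Greeter() {}

Greeter.prototype = {
    greet: function (greeting, name) { return greeting + ', ' + name + '!'; },
    sayHello: function (name) { return this.greet('Hello', name); }
};

var myGreeter = new Greeter();
var hi = myGreeter.sayHello; // capture a pointer to the bare function

try {
    hi('Chris');
} catch (error) {
    // TypeError: this.greet is not a function (exact wording varies by engine)
    console.log(error instanceof TypeError); // true
}
```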

Bind for Partial Application

The error we see is because sayHello refers, internally, to this.greet. Since hi captured a pointer only to the sayHello function, all sense of object context is lost. This means “this” is referring to whatever bizarre context happens to be surrounding our variable. Best case scenario is this refers to window, but who really knows. Let’s fix this issue.
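A sketch of the fix, with the greeter restated so the example stands alone:

```javascript
function Greeter() {}

Greeter.prototype = {
    greet: function (greeting, name) { return greeting + ', ' + name + '!'; },
    sayHello: function (name) { return this.greet('Hello', name); }
};

var myGreeter = new Greeter();

// bind pins the execution context to myGreeter, so this.greet resolves
var boundHi = myGreeter.sayHello.bind(myGreeter);

boundHi('Chris'); // 'Hello, Chris!'
```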

By using bind, we define the function execution context explicitly. This means boundHi actually behaves the way the original method on myGreeter did. Now we get the kind of consistent behavior we wanted. This is not the only thing we can use bind for, though. Let’s take a look at a function which doesn’t depend on an execution context.
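The classic candidate is add, which touches nothing but its own arguments:

```javascript
// add needs no execution context; it only uses its own arguments
function add(a, b) {
    return a + b;
}
```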

Add is a pretty simple function, but we can use it to perform some more interesting actions like doing an inline increment of a value. Let’s use bind to define a new, refined function to perform our increment action.
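A sketch of both partial applications:

```javascript
function add(a, b) { return a + b; }

// bind's first argument is the context (null here, since add ignores it);
// every following argument is partially applied from the left
var increment = add.bind(null, 1);
increment(5); // 6

// Partially applying both arguments: trivial and somewhat useless,
// but it shows more than one value can be pre-filled
var alwaysThree = add.bind(null, 1, 2);
alwaysThree(); // 3
```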

Here, bind provides the ability to partially apply an argument, or several arguments if you provide more than one. Obviously our second function is both trivial and somewhat useless, but it demonstrates partial application of more than a single argument. Meanwhile, there is a genuine reason to partially apply a single value to a function: for instance, applying the same operation to an array of values.
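For example, one partially applied value reused across an array:

```javascript
function add(a, b) { return a + b; }

// The bound 10 becomes the first argument; each element becomes the second
[1, 2, 3].map(add.bind(null, 10)); // [11, 12, 13]
```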

Binding Context and Partially Applying Values Together

We’re in a place where we can tie this all together now. We have seen execution binding and partial application. Let’s have a look at what using all of these behaviors together for a single, focused purpose looks like. Let’s create a new function from what we already have which does something a little different.
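A sketch of sayHola, with the greeter restated so the example stands alone:

```javascript
function Greeter() {}

Greeter.prototype = {
    greet: function (greeting, name) { return greeting + ', ' + name + '!'; },
    sayHello: function (name) { return this.greet('Hello', name); }
};

var myGreeter = new Greeter();

// One bind call fixes the execution context AND partially applies
// the greeting, producing a brand new behavior from existing parts
var sayHola = myGreeter.greet.bind(myGreeter, 'Hola');

sayHola('Chris'); // 'Hola, Chris!'
```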

Tying it All Up

Here we have created a new function, sayHola, with nothing more than the bind function and existing functionality in our Greeter instance. Bind is a very powerful function which is part of the core Javascript language. With bind alone, a lot of work can be done. The next time you are working and discover a context issue, or you simply wish you could use that one function if it only had one of the arguments pre-filled, look to bind. You might find exactly what you need.


Commenting Code: Why, Not How

If you have written any amount of code in any language, you are likely aware of code comments, or remarks, and how they work. That isn’t really what I’m interested in here. I came across a discussion about an open source project which required that all code be left completely uncommented. People were surprised and alarmed that a project would forbid comments altogether, and I share their concern. This post is about comment content. Hopefully, by the end, you will share my opinion that comments are important.

New programmers are often told to comment their code by instructors, but they aren’t told what the comments should contain. I remember my C language instructor chastising me for commenting all of my functions without regard to the importance of the function or the value of the comment. He said, “there are too many comments, are you trying to make your code unreadable?” Others received feedback that they didn’t comment enough.

While we are on the topic of novice programmers, let’s take a look at a comment style I have seen in code written by inexperienced developers. It usually contains information about what the function does and how it does it. Although it is lovely to see the program explained in clear English, it is not particularly helpful since good code should be not only functional but illuminating.
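A hypothetical example of the pattern (the function and its names are mine, not from the original); the comment merely restates what the code already says:

```javascript
// Loop over the records, compare each record's quantity against the
// max quantity, and return false if any record is over the limit,
// otherwise return true.
function validateRecords(records, maxQuantity) {
    return records.every(function (record) {
        return record.quantity <= maxQuantity;
    });
}
```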

From this description anyone with experience in the language could probably devise a body of code which would perform the actions listed in the comment. The problem is, I have no idea why this code exists. Code which exists for no other purpose than just to be written is not useful code. This could be dead code, or it could be a problem which could be solved another way. If this code does something which the surrounding context gives us no clue to, we would never understand the value, just the means.

Instead I would write a comment like the following:
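For the same hypothetical function, a comment that explains the why rather than the how:

```javascript
// Orders exceeding available stock were being accepted and then
// failing at fulfillment; validate quantities up front so bad orders
// can be rejected before submission.
function validateRecords(records, maxQuantity) {
    return records.every(function (record) {
        return record.quantity <= maxQuantity;
    });
}
```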

Now we understand the context for the function. We can see what the function does from the name alone, and the comment provides immediate context for its use. Even better, this comment is far less likely to be out of date the next time someone looks at this bit of the code. Comments which detail the inner workings of a function tend to fall out of date as the code ages. People may modify our function to behave differently; however, the context of the function’s creation is unlikely to become obsolete, even with (anti-SOLID) modifications.

This brief discussion can be summed up by the phrase “comments should tell the programmer why something was done, not how.” I like to call this my “why, not how” principle. This helps to guide the comment writer’s hand when they feel the need to add a comment to the code. If your code needs explanation as to how something was accomplished, the code is likely too clever. On the other hand, sometimes obscure solutions are unavoidable, which means the function context may not exist within the codebase. This is precisely the why that should be added as a comment. For example, this:
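A hypothetical illustration of an obscure-but-necessary solution whose why lives in the comment:

```javascript
// Why: Math.floor rounds toward negative infinity, which put negative
// offsets into the wrong bucket; the bitwise OR truncates toward zero.
function toBucketIndex(offset) {
    return offset | 0;
}
```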

In Javascript there is another situation where comments are desirable. Although I believe JSDoc is a tool which was ported almost blindly from its source (JavaDoc), it is, hands down, the best tool for the job available in Javascript. While I don’t believe that every function, method, class and module in your code should contain JSDoc comments, it is useful to annotate functions which might be used in a variety of situations. Essentially, JSDoc is a good way to document slow-changing program APIs which others might want to use. Following is an example of JSDoc use for the same function.
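A sketch of how the hypothetical validator might read with JSDoc annotations:

```javascript
/**
 * Orders exceeding available stock were being accepted and then
 * failing at fulfillment; validate quantities up front so bad orders
 * can be rejected before submission.
 * @param {Object[]} records - records to validate, each with a quantity
 * @param {number} maxQuantity - largest allowed quantity per record
 * @returns {boolean} true if every record has an acceptable quantity
 */
function validateRecords(records, maxQuantity) {
    return records.every(function (record) {
        return record.quantity <= maxQuantity;
    });
}
```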

As you can see, the context comment is still part of the description associated with our function. This contextual clue is still the best part of our comment. Hopefully this function will be slow to change, so the arguments and return values will remain consistent. Even if they don’t, however, our context clue will still provide important insight into why this function was created and what purpose it serves.

In the end, comments are not always necessary. They can be extra noise, or they can be misleading. As programs grow large and the context becomes unclear, comments become a life raft for programmers who are digging deep into old code to identify functionality which is important or no longer in use. If you choose to write comments, and I hope you do, think about WHY you wrote your code and let the code tell the next programmer HOW.

Node and Blended Tech Stacks

The longer people write single page applications, the more of them there are. The notion of a SPA harkens back to the concept of client server architecture, invented some time in the 1960’s or 1970’s. Now we build SPAs with services constructed with Java, C#, Scala and Node. Some of the stacks we use have names like MEAN, and others are a hand-selected composite of disparate technologies with no name.

I have quickly come to believe that any tech stack with a cute name is likely a solution looking for a problem. Sometimes you want Mongo, Express, Angular and Node, but sometimes you want Couch, Ember, Spring and Java, do we call it JESC? Gross.

The important thing here is that we took Node out altogether for the Java stack. Though we gain the benefit of Java’s history and broad set of packages, we lose the ability to share code between our services and our client-side code.

We are in a unique situation in the current stage of technology where we can write a client application and provide it as an interpreted file which runs on a remote computer at request time. The code we push to the client will also run on a server. By ignoring this fact, we lose a great opportunity to write code once and share it between client and server.

People call this isomorphic or universal Javascript. If the services are all written in another language, there is no way to leverage the client code in the compiled server code. There is, however, still a way to share business logic from the server layer with the client with a little fancy fingerwork.

First, let’s write a little logic to verify a record doesn’t have a quantity which is greater than the maxQuantity property. Once we have that simple function, we can then apply it across a list of records.
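A sketch of that logic (quantityOkay and validateRecords are assumed names for the behavior described):

```javascript
// A record is okay when its quantity does not exceed its maxQuantity
function quantityOkay(record) {
    return record.quantity <= record.maxQuantity;
}

// Apply the single-record check across a whole list
function validateRecords(records) {
    return records.every(quantityOkay);
}

validateRecords([
    { quantity: 2, maxQuantity: 5 },
    { quantity: 4, maxQuantity: 5 }
]); // true
```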

If this were in the client side code, and we wanted to do the same kind of check in Java, the code would look something like the following:
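A hypothetical Java rendering of that same validation (class and method names are mine, chosen to mirror the Javascript sketch):

```java
import java.util.List;

// Hypothetical record type carrying a quantity and its allowed maximum
class OrderRecord {
    private final int quantity;
    private final int maxQuantity;

    OrderRecord(int quantity, int maxQuantity) {
        this.quantity = quantity;
        this.maxQuantity = maxQuantity;
    }

    int getQuantity() { return quantity; }
    int getMaxQuantity() { return maxQuantity; }
}

class RecordValidator {
    // Same single-record check as the Javascript version
    static boolean quantityOkay(OrderRecord record) {
        return record.getQuantity() <= record.getMaxQuantity();
    }

    // Same list-wide application of the check
    static boolean validateRecords(List<OrderRecord> records) {
        return records.stream().allMatch(RecordValidator::quantityOkay);
    }
}
```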

Obviously this logic is almost the same, but two different developers would probably end up writing each of these. This is problematic since we now have the same logic in two different files in two different languages. This is definitely a violation of DRY principles. For a single validation like this, we can probably accept the duplicated code, but in large code bases this kind of thing will go from a one-off occurrence to a pervasive problem. Let’s look at a way to consolidate this.
