One to One and Off by One
A few years back, I was working with a company that made scientific measuring devices. These devices were widely deployed across large geographic areas, often spanning time zones and political jurisdictions.
It was standard in their industry and culture to always measure and calculate using local time, not UTC. Because of this affliction, and the vagaries of daylight saving time rules, all of the time calculations in the code base would often be off by one, depending on the time of year. Developers got into the habit of just adding one or subtracting one to get the “right” answer, figuring that it was the necessary DST offset that hadn’t been applied elsewhere.
There was also disagreement on the definition of a day: their industry standard specified that the first minute of the new day occurs at 12:01 AM. 12:00 midnight was considered the last minute of the day. Now that’s contrary to our usual thinking, where midnight is the first moment of the new day, and 11:59 is the last minute of the previous day. So naturally what programmers intuitively believed about time and what the requirements specified were at odds with each other.
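To make the mismatch concrete, here is a hypothetical helper (mine, not theirs) that encodes the industry rule: 12:00 midnight belongs to the day that is ending, and 12:01 AM starts the new one:

```python
from datetime import datetime, date, timedelta

def industry_date(ts: datetime) -> date:
    """Map a timestamp to its day under the industry rule described
    above: the day starts at 12:01 AM, and 12:00 midnight is the
    *last* minute of the preceding day."""
    if ts.hour == 0 and ts.minute == 0:
        # The midnight minute still belongs to yesterday.
        return (ts - timedelta(days=1)).date()
    return ts.date()

print(industry_date(datetime(2024, 5, 1, 0, 0)))  # 2024-04-30
print(industry_date(datetime(2024, 5, 1, 0, 1)))  # 2024-05-01
```

A programmer’s intuition (`ts.date()`) and the requirement disagree for exactly one minute per day, which is precisely the kind of edge people “fix” with a stray +1.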
Off by One, Everywhere
The result? Their code base was completely littered with random +1/-1 adjustments to calculations, all at different points and put in at different times. The term “nightmare” doesn’t begin to do it justice.
How did this disaster come about? As with most such things, it happened one step at a time, one day at a time. The earliest folks on the project didn’t recognize the need for an abstraction to handle time, and by the time they did, it was too late—the damage was pervasive. Even though the team knew there was a problem, they thought it would be too expensive to go through the whole code base and fix it.
So instead they kept adding to the code base, to the point where it really was too expensive to fix.
Now this was an extreme and obvious case, but the same thing happens on a smaller and more subtle scale all the time: missing abstractions, leaky abstractions (ones that reveal too many internal details), and inappropriate abstractions are commonplace.
How can you prevent this from happening to you?
Well, you could brick yourself up behind a wall of abstractions (in an Edgar Allan Poe-like fashion), where every value and every method is an abstraction unto itself. I remember some C++ libraries that did exactly that. Excruciating. Wasteful of time and energy, and not appropriate: there are many occasions when an int is just an int. No need to be dogmatic about it. And even if you did create a huge number of abstractions as insurance, there’s no guarantee that they’re the right ones.
The Ideal Abstraction Law
What we need is a practical way to tell whether or not our level of abstraction is appropriate. Ideally, the code should reflect the real world that it models: one change in the real world should result in exactly one change to the code.
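For the time problem above, honoring that rule would mean routing every local-to-UTC conversion through a single module, so a change to the zone or the DST rules touches one line instead of hundreds. A minimal sketch, with a hypothetical deployment zone:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One module owns the conversion; callers never touch offsets directly.
SITE_TZ = ZoneInfo("America/Denver")  # hypothetical deployment zone

def to_utc(local_naive: datetime) -> datetime:
    """The single place where local time meets UTC. If the site's
    zone or its DST rules change, this is the one spot that moves."""
    return local_naive.replace(tzinfo=SITE_TZ).astimezone(timezone.utc)

def hours_between(a: datetime, b: datetime) -> float:
    # Calculations happen in UTC, so no scattered +1/-1 fudges.
    return (to_utc(b) - to_utc(a)).total_seconds() / 3600
```

Noon to noon across a spring-forward transition now comes out to 23 hours everywhere in the code base, with no per-call-site corrections.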
Since I love making up names, let’s call this the Ideal Abstraction Law.
Now ask yourself how often that really happens in practice. When there’s a change in the real world, how much code do you have to chase after to change? One spot? A couple? A lot of little changes all over the code base in unrelated modules?
That’s a reasonable metric for judging the health of your code. Frankly, it’s probably rare that one real-world change will result in exactly one change in the code, but if you’re making lots of changes in lots of places, you’ve got a serious problem. And the time to fix it is now.
Something to think about.
What’s a guru meditation? It’s an in-house joke from the early days of the Amiga computer and refers to an error message from the Amiga OS, baffling to ordinary users and meaningful only to the technically adept.