Notes on "Philosophy of Software Design"
I don't read too many programming books, and I've not taken notes as I read a book in a long time, but as part of scheduling some downtime after my 2nd jab, I decided to spend a nice Saturday on the couch with John Ousterhout's Philosophy of Software Design.
These are some fleshed-out versions of the notes I took to myself while reading. It is not a review: the book is good, and if you're interested in software engineering, I recommend it.
2.1 Complexity Defined
Your job as a developer is not just to create code that you can work with easily, but to create code that others can work with easily.
It is notoriously difficult to edit your own writing. When you write, you are trying to animate an idea in your reader's mind. Your tacit knowledge of that idea makes you a poor judge of how well the text achieves that goal. Ousterhout continuously stresses that it's your peers who decide if your code is readable precisely for this reason.
Writing readable code is something you can get better at, but even though this is programming we're talking about, that process is more like "getting better at writing" than it is like "getting better at mathematics", and teaching the former is difficult. Worse, there's no vast library of classics to train from in software.
3 Strategic vs. Tactical Programming
I found myself agreeing with this chapter, but feeling as though something was missing. It wasn't until I started trying to expand on my notes that I started to understand why.
The quotes here are from 3.2 and 3.5, which I feel adequately describe strategic programming:
If you program strategically, you will continually make small improvements to the system design. [..] The most effective approach is one where every engineer makes continuous small investments in good design.
There is a neat through line here from the complexity chapter. Software development isn't limited to getting a computer to produce correct output. Software has to be maintainable, adaptable to changing requirements, and simple enough for new developers to become productive.
This book is about writing better programs, not industrial software practices, and these are the measurements of quality it is interested in improving. In this framing, programming too tactically is a way to build up unnecessary design complexity.
One of the book's strengths is that it repeatedly encourages its readers to identify exceptions to its advice. However, the discussion in this chapter was centered so thoroughly around the act of programming that I had trouble putting myself in its framing.
Every software system is built within a context, and it's typically the context that dominates when defining success or failure. That doesn't mean we shouldn't try to build good systems, but it does mean that there are situations where investment in system design is unwise.
I've found that it pays to be tactical when there are a lot of unknowns and strategic when there are few. These could be technical unknowns, but just as often they are usability and product unknowns.
Tactical programming can fill in these gaps in understanding at lower cost, leading you to make better decisions earlier. An extreme example of this is incident response. Not only is there time pressure, but the source of the problem is almost by definition unknown.
Conversely, if you invest a lot in design before you understand the problem space, you're not only paying more to learn the same lessons, but if those lessons cause you to rethink your solution you've also got a higher sunk cost. This can be such a strong deterrent to rethinking that you continue with your poorly fit premature abstraction, accruing a huge amount of design debt.
In my experience, programmers are insufficiently strategic when they are first learning how to program because they do not know the strategies available. After a short period of time where they improve a lot, they become overly strategic, over-abstracting everything and adding unnecessary complexity for the sake of "cleanliness". This curve eventually bends back down approaching some unattainable ideal equilibrium.
Because less experienced programmers will benefit the most from this kind of book, I'm hesitant to enthusiastically sign on to advice that will play into what eventually will be their worst instincts. KISS and YAGNI have become common in industrial software circles because the damage done by eagerly over-architecting everything has become more widespread and more evident than the damage done by people who make insufficient investment.
The advice in this chapter is fantastic, but it lacks the context that could keep it from being applied incorrectly. With few unknowns, it's undeniably better to iterate on system design as you add features.
The primary advantage experienced engineers have is that they have fewer unknowns around system design, so their process of discovery is shorter, and they can often employ a strategic approach much earlier. Ousterhout is an exceptionally accomplished and experienced engineer, much more so than I am, but this makes him susceptible to the same blindness as our self-editing writer when it comes to determining the role experience plays in balancing these approaches.
4 Interfaces
Another chapter where I took several notes. A few that I found interesting:
The best modules are those whose interfaces are much simpler than their implementations.
This is a really great, concise way to describe what you should strive for. Abstractions are almost all leaky in some way, and they carry a cost in cognitive overhead, so it's best both to reduce that overhead by making their interfaces simple and to get more value out of them by putting more implementation underneath that interface.
The informal aspects of an interface can only be described using comments, and the programming language cannot ensure that the description is complete or accurate.
Overall, I found Ousterhout's approach to comments to mirror my own biases, which is that they are necessary and most programmers do not write enough of them. Chapter 15 goes into this in more detail.
If you search Google for "comments are bad" you get a whole bunch of articles explaining that comments are mostly bad, or that "good code should explain itself." I've always found this preposterous, and while I don't think people found my attempted proof by contradiction very persuasive, I love this approach of differentiating between the formal and informal aspects of an interface.
It's also a convenient axis upon which to understand the goals of certain programming languages, where you have languages like Idris and Ada which attempt to formalize (and therefore check) as much of an interface description as possible.
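To make the distinction concrete, here's a small hypothetical Go sketch (the `Pool` and `Reserve` names are mine, not from the book). The compiler checks the formal aspects of the interface; the informal contract lives entirely in the comment, exactly as the quote describes.

```go
package main

import "fmt"

// Pool is a hypothetical fixed-capacity resource pool.
type Pool struct {
	capacity int
	reserved int
}

// Reserve sets aside n units of the pool's capacity.
//
// The formal aspects of this interface -- the method name, the
// parameter, and the return type -- are checked by the compiler. The
// informal aspects exist only in this comment: n must be non-negative,
// a failed Reserve leaves the pool unchanged, and reservations are not
// goroutine-safe. No Go tooling can verify that this description is
// complete or accurate.
func (p *Pool) Reserve(n int) error {
	if n < 0 || p.reserved+n > p.capacity {
		return fmt.Errorf("cannot reserve %d with %d remaining", n, p.capacity-p.reserved)
	}
	p.reserved += n
	return nil
}

func main() {
	p := &Pool{capacity: 100}
	fmt.Println(p.Reserve(64), p.Reserve(64))
}
```

Languages like Idris and Ada try to drag more of that comment into the formal, checked column; in Go it stays prose.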
If an interface has many features, but most developers only need to be aware of a few of them, the effective complexity of that interface is just the complexity of the commonly used features.
It's time-consuming to sift through a large interface to find the few functions you need to achieve your goals, so I think I disagree with this.
5.7 Overexposure
If the API for a commonly used feature forces users to learn about other features that are rarely used, this increases the cognitive load on users who don't need the rarely used features.
This is a slight refutation of the last quote I included from chapter 4, though I will say that I think just having extra exported functionality that is usually unnecessary does add to cognitive overhead.
This is a big problem I have with the way that Dependency Injection is often employed to facilitate testing, where it can be so intrusive that your exposed API asks for a bunch of injected components that are irrelevant except for use in testing.
6.1 Make classes somewhat general-purpose
The phrase "somewhat general-purpose" means that the module's functionality should reflect your current needs, but its interface should not.
I love this advice. Even though it sounds related to the strategic programming advice I was a bit uncertain of, this is clearly scoped finely around problems where unknowns are much less prevalent. I think I started to really internalize this approach when I read a lot of early Go software, which was designed in this way, and whose interface keyword forces you to contend with what an "interface" is conceptually in a very explicit way.
As a related example, when we wanted to try using memory mapped IO for a system, I wrote a version of os.File that used mmap behind the scenes. This implementation included functionality that we were not going to use on that system, because the abstraction that package was providing demanded it.
Doing it this way made the module much easier to understand in isolation, which meant people could ignore its implementation and just use it. Additionally, when it came time to question this decision, it was clearer that we were interested in mmap in general rather than in any specific module we'd created.
8 Pull Complexity Downwards
It is more important for a module to have a simple interface than a simple implementation.
This is straight out of Richard P. Gabriel's Worse is Better, which I've written about at length before, and a stronger version of what was described in chapter 4.
This is presented with a lot of nuance in the book, with the conclusion mentioning:
Use discretion when pulling downward; [..] Remember that the goal is to minimize overall system complexity.
Even though I often think of myself as being sympathetic to the "New Jersey Method", I think Ousterhout's approach is much more like how I behave in reality.
10 Define Errors Out of Existence
A great chapter.
One of my favorite sayings: "Code that hasn't been executed doesn't work."
Thinking about all systems in this way is very valuable. At Datadog, we have applied this test thoroughly when developing our infrastructure. If you are going to rely on some process happening in exceptional circumstances, e.g. spooling messages to some backup system to be replayed later, it's better to put it in the critical path somehow so that it's always being executed.
Creating a recovery mechanism for crashed servers was unavoidable, so RAMCloud uses the same mechanism for other kinds of recovery as well.
We did exactly the same thing for parts of our time series storage. Focusing energy on startup is a good tip in general; you need it to work well when you deploy anyway.
15 Use Comments as Part of the Design Process
I've seen this described as "Documentation-Driven Development." A related benefit to writing early in the design process is described in one of the all time great quotes from Leslie Lamport:
To think, you have to write. If you think without writing, you only think you're thinking.
Having only truly understood my own thoughts on "Philosophy of Software Design" after writing these notes, I think this applies very widely.
16.3 Comments belong in the code, not the commit log
I've been guilty of this a lot in the past, though typically the place I am putting the comments is into code review notes. I've started to both catch myself more and to cite this in code review.
17 Consistency
Consistency creates cognitive leverage
I love this phrase and I hope it enters the popular lexicon.
I am a big fan of employing mechanical leverage as a metaphor in software in general. I think I might have picked this up directly or indirectly from Martin Thompson. It's often a good check on whether or not adopting some complex dependency will be worth it: how much leverage will you gain from applying it?
Another great one from the chapter that I do not have any further comment on:
Having a "better idea" is not a sufficient excuse to introduce inconsistencies.
19.4 Test-Driven Development
This seems to be the most contentious part of the book, judging by the other reactions and reviews I've read online. The TDD community produces a lot of literature, and they tend to have a high degree of certainty that they've figured out software quality.
I've always been skeptical. If you apply my long discussion on strategic vs. tactical programming above to TDD, you'll probably guess that I think TDD prematurely frontloads a lot of work and ossifies design before you've learned enough to make those decisions.
Ousterhout's dim appraisal of TDD is that it is the opposite of strategic programming:
[TDD is too focused] on getting specific features working, rather than finding the best design. This is tactical programming pure and simple, with all of its disadvantages.
This gave me a new perspective on chapter 3. I suspect Ousterhout would probably understand my concerns but classify them as problems with tactical programming.
20.1 How to think about Performance
In general, simpler code tends to run faster than complex code.
This is great advice.
Simpler code is more resilient to changes in optimizers, hardware, etc., and more likely to be made faster by outside changes.
If you find yourself replacing some simple code with more complex code, try to keep a benchmark that compares them. I've encountered several situations where changes in the compiler (e.g. in escape analysis or autovectorization) have led to clever fast code becoming slower than the simple, obvious version.
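As a sketch of the kind of pairing I mean (the hand-unrolled loop is a stand-in for whatever clever code you wrote): keep both implementations around, assert they agree, and benchmark them against each other so a compiler upgrade can tell you when the cleverness stops paying.

```go
package main

import "fmt"

// sumSimple is the obvious version: a range loop the compiler can see
// through, autovectorize, and keep fast as hardware changes.
func sumSimple(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

// sumUnrolled is the "clever" hand-unrolled version. It may have been
// faster on some compiler/CPU pair once; only a benchmark can tell you
// whether it still is.
func sumUnrolled(xs []int) int {
	total := 0
	i := 0
	for ; i+4 <= len(xs); i += 4 {
		total += xs[i] + xs[i+1] + xs[i+2] + xs[i+3]
	}
	for ; i < len(xs); i++ { // remainder that didn't fill a group of 4
		total += xs[i]
	}
	return total
}

func main() {
	xs := []int{1, 2, 3, 4, 5, 6, 7}
	fmt.Println(sumSimple(xs), sumUnrolled(xs))
}
```

In a `_test.go` file, a pair of `Benchmark` functions over the same input makes the comparison a one-command check (`go test -bench .`).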