Integrated Tests Are A Scam

Pulling things apart generally means opening code up to the possibility of being used in ways that its current client (probably the entry point of the cluster) happens not to use. This is a (generally) unavoidable consequence of removing code from its context. When we leave a block of code in its context and its client only ever sends it a limited set of inputs, we can safely avoid some of the tests we would otherwise think to write, and in the interest of time, we usually don't write them. When we separate that block of code from its context, it becomes liable to receive those previously-unseen inputs, and we have to decide whether to care about that. It seems generally (though not always) irresponsible never to add tests for those previously-unconsidered inputs.
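To make this concrete, here's a hypothetical sketch (all names invented, not from the original): a discount calculation that lives inline in its only client, which happens never to pass it an empty cart, then the same logic extracted into a standalone function that any new client could call with inputs the original client never sent.

```python
# Hypothetical example: before extraction, the discount logic lives
# inside its only client. The client guards against empty carts, so
# the inline block never sees an empty list.
def checkout(cart_prices):
    if not cart_prices:
        return 0.0
    # Inline block: only ever executed with a non-empty list here,
    # so we never bothered testing the empty case.
    subtotal = sum(cart_prices)
    discount = 0.1 if subtotal > 100 else 0.0
    return subtotal * (1 - discount)

# After extraction, the same logic is reachable from any context,
# including with inputs its original client happened never to send.
def apply_discount(cart_prices):
    subtotal = sum(cart_prices)
    discount = 0.1 if subtotal > 100 else 0.0
    return subtotal * (1 - discount)

# The refactoring changed no behavior, but a new client can now call
# apply_discount([]): a previously-unconsidered input worth a test.
assert apply_discount([]) == 0.0
assert apply_discount([50.0, 60.0]) == 99.0  # 110 * 0.9
```

The extraction itself is behavior-preserving; what changed is the set of inputs the code is now exposed to, and therefore the set of tests it arguably deserves.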

In the process, we have refactored--we haven't changed the behavior of the system--but we now need more tests in order to support reusing the newly-available code in other contexts. We don't need those tests for the current system yet, but it's only a matter of time before we regret not adding them.

So it is that refactoring can make tests that were once safe to avoid no longer safe to avoid. Of course, in exchange for this risk, our refactoring opens up code for reuse that was not previously available, so if we don't intend to reuse that code, then we probably shouldn't separate it just yet.

This highlights an interesting tension between the set of tests the current system needs as a whole and the set of tests the parts of the system need individually. On the one hand, we don't want to waste energy testing paths that the system as a whole does not execute; but on the other hand, if we don't write those tests, then we might run into latent mistakes (bugs) while adding features that use never-before-executed paths of existing code. I'd never thought of that in particular before. Another of the many tradeoffs that make writing software complicated.
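As a hypothetical illustration of that tension (names invented, not from the original): a helper that supports a parameter the current system never varies. Tests for the system as a whole only need the default path; a test for the part individually reaches the unexecuted path and surfaces a latent mistake before some future feature does.

```python
# Hypothetical example: truncate() accepts a max_len parameter,
# but the current system only ever calls it with the default.
def truncate(text, max_len=20):
    if len(text) <= max_len:
        return text
    # Latent bug on a never-executed path: for max_len < 3 the
    # slice keeps too much text and the result exceeds max_len.
    return text[: max_len - 3] + "..."

# Tests for the system as a whole only exercise the default path:
assert truncate("short") == "short"
assert len(truncate("a" * 30)) == 20

# A test for the part individually reaches the unexecuted path and
# surfaces the latent mistake before a new feature trips over it:
assert len(truncate("hello!", max_len=2)) > 2  # 8 chars, not <= 2
```

Whether to write that last test now, or only once a feature actually needs small widths, is exactly the tradeoff the paragraph above describes.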