How TDD Affects My Designs - The Code Whisperer

I have written elsewhere that people, not rules, do things. I have written this in exasperation over some people claiming that TDD has ruined their lives in all manner of ways. Enough!


This is a companion discussion topic for the original entry at http://blog.thecodewhisperer.com/permalink/how-tdd-affects-my-designs

I find that practicing TDD leads me to write code that has (some of) the (widely advertised as) desirable characteristics of functional programming, even in languages where mutation of state is the normal thing. For example, going through complicated setting-up exercises to get an object into the right state to exhibit the behaviour I want to test for is a pain, so I tend to pass in state explicitly. And if I see that several methods take the same bundle of state, I'll create a value object to hold it, and then I might end up giving that value object some behaviour, thus discovering a useful new abstraction.
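A minimal sketch of what that last step might look like, with invented names (LineItem is purely illustrative, not from the post): the bundle of parameters becomes a value object, and the calculations migrate onto it.

```java
import java.math.BigDecimal;

// Before: every calculation takes the same bundle of state as parameters:
//   subtotal(unitPrice, quantity), taxOn(unitPrice, quantity, rate), ...
// After: the bundle becomes a value object, and behaviour migrates onto it.
final class LineItem {
    private final BigDecimal unitPrice;
    private final int quantity;

    LineItem(BigDecimal unitPrice, int quantity) {
        this.unitPrice = unitPrice;
        this.quantity = quantity;
    }

    // Pure functions of the values passed in at construction:
    // no elaborate setup before a test can assert on the result.
    BigDecimal subtotal() {
        return unitPrice.multiply(BigDecimal.valueOf(quantity));
    }

    BigDecimal taxAt(BigDecimal rate) {
        return subtotal().multiply(rate);
    }
}
```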

Asserting on the state of an object is more work than asserting on the return value of a method, so I tend to write more functions and fewer commands.
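One way to picture the difference, as a JUnit-style sketch with invented names (Totals is purely illustrative): the query returns its answer, so the test needs no digging into hidden state and no extra getters.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TotalsTest {
    // Query style: the behaviour returns its answer, so the assertion is one
    // line, with no inspection of hidden state and no getters added for tests.
    @Test
    public void sumsThePrices() {
        assertEquals(60, Totals.sum(new int[] { 10, 20, 30 }));
    }
}

class Totals {
    static int sum(int[] prices) {
        int total = 0;
        for (int price : prices) total += price;
        return total;
    }
    // A command-style design would instead mutate, say, a Cart object and force
    // the test to reach back into it afterwards just to find out what happened.
}
```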

Code that talks to the outside world is harder to test than code that doesn't, so I tend both to minimize it and to move it all to the outer edge of my code, with a largely pure engine in the middle, as per “hexagonal architecture”.
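A minimal sketch of that shape, with hypothetical names (CustomerDirectory, DiscountPolicy): a port interface, a pure engine that depends only on the port, and an adapter at the edge that does the real I/O.

```java
// Port: the pure core knows only this interface, nothing about the outside world.
interface CustomerDirectory {
    boolean isKnown(String customerId);
}

// The largely pure engine in the middle: trivial to test with a fake directory.
final class DiscountPolicy {
    private final CustomerDirectory directory;

    DiscountPolicy(CustomerDirectory directory) {
        this.directory = directory;
    }

    int discountPercentFor(String customerId) {
        return directory.isKnown(customerId) ? 10 : 0;
    }
}

// Adapter at the edge: the only code that talks to the outside world, so the
// only code that needs anything heavier than a plain unit test.
final class DatabaseCustomerDirectory implements CustomerDirectory {
    @Override
    public boolean isKnown(String customerId) {
        // the JDBC/HTTP call would live here, and only here
        return false;
    }
}
```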

I find that practicing TDD makes it more natural to write good code because good code is easier to (write) test(s for). What I think is missing from a lot of discussions of whether or not TDD produces good design is an assessment of whether or not the programmer doing TDD knows what their “good” options are. If they don't, it might not help very much.

My TDD got more expressive after reading about Responsibility-Driven Design by Rebecca Wirfs-Brock, which brings object role stereotypes to the table.

There is one thing I do to facilitate TDD that might be considered "harming good design": I sometimes make private methods public even though they are only used by tests. Precondition: Those methods don't break any invariants and they _could_ be part of the class's public interface.

What is your stance on unit selection and mocking? Do we need a hard rule that says classes must always be units, and all dependencies (not just the ones that go out to the real world) must be mocked so they can be tested in isolation? I'm asking because I tend to disagree with that stance and I'm curious about others' ideas.

With the comment, "and even the eradication of state altogether", what language are you referring to? I know Haskell forces you to control and specify when and where state can occur, but it doesn't eradicate it.

True. I didn't mean "the eradication of all state", but rather "the eradication of many otherwise common uses of state". It certainly *feels* like eradicating all state altogether, compared to working in object- or record-based languages.

You should search this site for the phrase "mock objects", as I write extensively about it.

I don't tend (any more) towards the kind of hard-and-fast rule that you've asked about. That said, I do offer some rule-based guidance in http://link.jbrains.ca/WpR9aS and, of course, my *big* description of how/when to use mock objects effectively comes from "Integrated Tests are a Scam", which for now is best consumed as a video. http://bit.ly/QWK7do (Sorry about that.)

On the contrary, I do this all the time, and I believe that it improves design by pointing out emerging cohesion problems. The JUnit FAQ states it very simply: "Testing private methods may be an indication that those methods should be moved into another class to promote reusability." I encourage making them public, trying to test them in isolation, then letting future events guide moving them to another module to promote reusability.

When I want to hide a method, I don't use private/public any more, but rather implementation/interface. I find that I get much better results this way. I wrote about this a decade ago in my book, JUnit Recipes.
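One possible reading of "implementation/interface instead of private/public", with invented names: clients depend only on the interface, so the concrete class can leave its helper methods public for focused tests without widening the contract clients actually see.

```java
// Clients and production code see only the interface...
interface ReceiptPrinter {
    String print(Purchase purchase);
}

// ...so the implementation can leave its helpers public for focused tests
// without widening the contract that clients depend on.
final class PlainTextReceiptPrinter implements ReceiptPrinter {
    @Override
    public String print(Purchase purchase) {
        return header(purchase) + "\n" + footer();
    }

    // "Hidden" by the interface, not by the private keyword.
    public String header(Purchase purchase) {
        return "Receipt #" + purchase.number();
    }

    public String footer() {
        return "Thank you for shopping with us";
    }
}

final class Purchase {
    private final int number;

    Purchase(int number) {
        this.number = number;
    }

    int number() { return number; }
}
```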

"If you want to write a test for that private method, the design may be telling you that the method does something more interesting than merely helping out the rest of that class’s public interface. Whatever that helper method does, it is complex enough to warrant its own test, so perhaps what you really have is a method that belongs on another class—a collaborator of the first. If you extract the smaller class from the larger one, the helper method you want to test becomes part of the newly extracted class’s public interface, so it is now “out in the open” and visible to the test you are trying to write. Moreover, by applying this refactoring, you have taken a class that had (at least) two independent responsibilities and split it into two classes, each with its own responsibility. This supports the Single Responsibility Principle of object-oriented programming, as Bob Martin describes it. You can conclude that having tests in a separate package helps separate responsibilities effectively, improving the production system’s design." - JUnit Recipes, 2004

"Many programmers complain about having only one implementation per interface, although I still haven’t understood what makes that a problem."

Agreed. "Here's how the client wants to use this service (aka interface); write however many implementations you need to make it work." I could almost say that's an implementation detail :)

(Incidentally, there's a typo shortly after: "Moreover, the test doubles themselves act as additional implementations of the interfaces, a *fact* which most detractions fail to notice." -- the word "fact" is missing.)

Thank you for pointing out the typo. There were two. I believe I have fixed them.

I would like a way to include some semantics in the interface definition that go beyond method signatures. This would likely work better in languages where we can define types more precisely, like Haskell or (somewhat surprisingly) Pascal. :) I think that if interfaces carried with them a description of how they're meant to behave, then fewer people would object to "extra" interfaces, because they could carry much more of their own weight. The line between client expectations and freely-changeable implementation details would be much, much clearer.

I write contract tests as a way to add these semantic/behavioural details to an interface. I think of these tests and the interface as a single, logical thing. I find that thing beneficial, even critical, no matter how many production implementations I need.

I understand your point about using an interface with only one implementation to clarify the contract. I can tell you one disadvantage I see: it complicates source-code navigation from the client class to the implementation. On the other hand, I don't see any problem with using public and private to define the contract.

If you want to promote a private method to make it "more important", in order to test it and point out a potential new class waiting to emerge, one option I've sometimes taken is to give the method package access, although that means working the opposite way from what you suggest: keeping the tests and the tested code in the same package.

I must say I don't really see big advantages and disadvantages between the two ways of working, though.

You make two very good points, both in "interfaces should be richer" and in "contract + tests = a single thing".

I vaguely remember an article, from many years ago, showing that you need more than just syntax to completely specify a contract. (The article was about C++, but it showed that, for example, you need a way to indicate things like "PUSHing an element onto a stack should make that element be returned by a POP" or "POPping the last element from a stack should leave it empty", and so on.) Tests are a good way of accomplishing that - a richer contract than just plain syntax.

Yes. Moreover, one weakness of design by contract: some contracts are as complicated to express as a single logical/mathematical expression as the implementations themselves. Some contracts can only be expressed compactly by examples, which explains why I like to use contract tests and consider those part of the interface. I think we do best with some combination of contract expressions and contract tests.
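A minimal sketch of a contract test in this spirit, borrowing the stack example above and using java.util.Deque to stand in for the interface (the names are illustrative): the abstract test class expresses the PUSH/POP semantics once, and each implementation inherits the whole contract by providing a factory method.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

import java.util.ArrayDeque;
import java.util.Deque;

// The contract as tests: behaviour every implementation must honour.
public abstract class StackContract {
    protected abstract Deque<String> newStack();

    @Test
    public void pushThenPopReturnsThePushedElement() {
        Deque<String> stack = newStack();
        stack.push("x");
        assertEquals("x", stack.pop());
    }

    @Test
    public void poppingTheLastElementLeavesTheStackEmpty() {
        Deque<String> stack = newStack();
        stack.push("x");
        stack.pop();
        assertTrue(stack.isEmpty());
    }
}

// Each implementation earns the whole contract by providing a factory method.
class ArrayDequeStackContractTest extends StackContract {
    @Override
    protected Deque<String> newStack() {
        return new ArrayDeque<>();
    }
}
```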

The clearer the contracts of the pieces are, the better we understand where behavior resides and how it works and how it fits in to the rest of the system, and the less we need to navigate through multiple layers of the call stack at once.

I interpret this difficulty of navigation as helpful: it indicates where we should find a clean boundary of knowledge, and whenever we feel tempted to cross that boundary, I take that as a sign of where we need to express the corresponding contract more clearly and more precisely.

I have also said in the past that wanting to know so much detail at once reflects a disease of non-modular thinking. I admit that it's an extreme way to view things, and I express it extremely for effect, but I believe it nonetheless. :) (Just don't take it too seriously; it's meant to provoke, not offend.)