Simulating Failure in Collaboration Tests

December 17, 2020 Integrated Tests Are a Scam, Test Doubles, Not Just Coding


This is a companion discussion topic for the original entry at https://blog.thecodewhisperer.com/permalink/limiting-beliefs-and-unstated-assumptions

Hi, nice perspective. I know that it's not always possible to use a real database for integration tests, but I usually prefer the real instance in the first place. IMO no database double can replicate the exact behaviour, so is all the effort and diligence needed to accurately mimic the real instance valuable? To me it seems too error-prone and costly, and it requires a deep understanding of database internals that the developer doesn't always have. Also, simulating real-world crashes requires far less effort. IMHO the test double should be used only when there is no other option available. Happy New Year :)

Indeed. This article was not about that.

When integrating with an RDBMS, I tend to write integrated tests, but I write them a bit differently than most. Over time, as I remove duplication, I'm left with a thin layer of integrated tests that document the behavior of the database driver. These tests cease even to be part of the application's test suite and instead become their own test project for v6.7.12 (or whatever version) of the database driver.

That integration, however, happens when implementing the Repository interface to talk to (for example) a PostgreSQL database. My controllers will never know that they're talking to PostgreSQL. My controllers typically don't care about the details of SQL.

Have you ever had the experience of feeling utterly surprised by a question? I mean that someone asks you a question that causes you to scream an “obvious” answer inside your head. The answer seems so obvious in fact that you wonder why someone would ever ask the question in the first place.

I’ll try not to read too much into that :slight_smile:

Anyways, thanks for choosing to elaborate on the topic in a full post. I think it will benefit a lot of readers (me included), and I’m glad I stumbled across it.

I am confused about your perspective on something after reading this post though, and wanted to ask for some clarification. Maybe there is another hidden assumption you can uncover :).

The context of my question is this. Your post begins by discussing two implementations of a Repository: the production MySQL implementation and a lightweight in-memory implementation. The in-memory implementation is beneficial for testing, and because it is a hand-written, static implementation, it can participate in Contract tests (perhaps this is a wrong assumption, and dynamic mocks/test doubles can too?). Being able to Contract test your test implementation increases your confidence that it acts like the real thing. I like that.
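For concreteness, the arrangement I'm describing looks roughly like this sketch (all names hypothetical; I use a plain `AssertionError` in place of a test framework):

```java
import java.util.ArrayList;
import java.util.List;

interface UserRepository {
    void save(String user);
    List<String> findAllUsers();
}

// The contract test knows only the interface; each implementation plugs
// itself in through the factory method and must pass the same checks.
abstract class UserRepositoryContract {
    abstract UserRepository createRepository();

    void savedUsersAreFound() {
        UserRepository repository = createRepository();
        repository.save("alice");
        if (!repository.findAllUsers().contains("alice"))
            throw new AssertionError("a saved user should be found");
    }
}

// A hand-written, static in-memory implementation...
class InMemoryUserRepository implements UserRepository {
    private final List<String> users = new ArrayList<>();
    public void save(String user) { users.add(user); }
    public List<String> findAllUsers() { return new ArrayList<>(users); }
}

// ...that runs the very same contract the MySQL implementation would.
class InMemoryUserRepositoryContractTest extends UserRepositoryContract {
    UserRepository createRepository() { return new InMemoryUserRepository(); }
}
```

A `MySQLUserRepositoryContractTest` would extend the same abstract class, so both implementations answer to one contract.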

But then you say:

…when I write tests and need test doubles, I tend to write them using a test double library (JMock, NSubstitute, rspec-mock…), even when writing them “by hand” is objectively easier.

And further on:

So it goes with me and test doubles in Collaboration Tests. I like consistency and I feel comfortable with dynamic test doubles, so I use them even in situations in which a simpler alternative would work equally well.

Firstly, when you say dynamic test doubles, do you mean syntax like Mockito’s when(myRepo.findAllUsers()).thenReturn(foo)? If so, don’t these little mocks you set up preclude you from the benefits of Contract tests? For example, how can you be sure these little mock implementations behave like the real thing?

And secondly, I am also wondering what you mean by a simpler alternative. Maybe an example would help clarify here.

I hope my questions were clear. I know it takes a lot of energy sometimes to try to understand where someone is coming from. Thanks in advance.

Indeed, I tried very hard to word that very carefully. :slight_smile: I’m often reminded of https://xkcd.com/1053/. I tend to benefit from the occasional reminder that not everyone shares the same context. :slight_smile:

Indeed, Mockito (and JMock and NSubstitute) provide a thing that I label as a “dynamic test double”. I mean an object that implements an interface without being a literal compile-time class that implements that interface. In Java, we build them with Dynamic Method Invocation Handlers. In Ruby we can use send(). These test double libraries allow us to implement an interface without writing a class to do that. This is unlike in C++ where we typically use templates to generate a static test double.

Setting the technical details aside: by “dynamic test double” here I mean intercepting individual method calls in order to simulate them (a dynamic test double), as opposed to writing an alternative (usually lightweight) implementation of the entire interface (a static test double).

The simplest example is an anonymous implementation, such as we can do in Java:

new Catalog() {
    public Option<Integer> findPrice(String barcode) {
        return Option.of(750);
    }
}

That’s a static test double which, most would argue, is simpler than creating a dynamic test double with Mockito and saying “when findPrice(with(anything())), return Option.of(750)”. My point was this: even if some dynamic test doubles are more complicated (to use) than some simpler alternatives, I find value in having a uniform syntax for my test doubles that compensates for the extra complication. Moreover, I can read those dynamic test doubles quite easily, so the extra complication doesn’t much hurt me anyway. Not every programmer will feel the same way about that.
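For the curious, the mechanism these libraries rely on can be sketched with plain java.lang.reflect.Proxy. This is only an illustration of the idea, not any library’s actual implementation, and I substitute java.util.Optional for Option to keep it self-contained:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Optional;

interface Catalog {
    Optional<Integer> findPrice(String barcode);
}

class DynamicDoubleSketch {
    // Intercept every call to the interface and decide the answer at run
    // time, the way a mocking library's generated proxy does. No class
    // literally implements Catalog here.
    static Catalog catalogReturning(Optional<Integer> price) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("findPrice")) {
                return price; // hardcoded answer for any barcode
            }
            throw new UnsupportedOperationException(method.getName());
        };
        return (Catalog) Proxy.newProxyInstance(
                Catalog.class.getClassLoader(),
                new Class<?>[] { Catalog.class },
                handler);
    }
}
```

Calling `DynamicDoubleSketch.catalogReturning(Optional.of(750)).findPrice("any barcode")` answers 750, no matter the argument: that’s the “when anything, then return” behavior expressed directly.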

Back to Test Doubles and Contract Tests…

I use test doubles for Collaboration Tests. Those test doubles specify parts of the contract of the next layer. They collectively “add up to” the contract of the next layer.

Test doubles also “implement” the contract purely abstractly: they only detect events and return hardcoded values, so there is no behavior inside them to check. Mockito works. NSubstitute works. Dynamic method interception works.

If we wrote Contract Tests for the Test Doubles, then we’d be doing double-entry bookkeeping. We’d just write the same thing twice in order to increase the chances of noticing a mistake. This might have value, but I typically judge it too little to be worth the effort.

If I need a Lightweight Implementation (for various reasons), then I might want to check it with the Contract Tests for its interface. This would happen if the implementation needed to be complicated because it implements a complicated external interface, such as HTTP, SMTP, or java.sql.ResultSet. I would try to make this lightweight implementation so lightweight that it was obviously correct. I would usually prefer to use 6 different lightweight implementations of different parts of a big interface over combining them into one big implementation that tries to be all things for all people. The smaller implementations will probably be obviously correct in a way that the big one wouldn’t be.
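A rough sketch of what I mean, with a hypothetical KeyValueStore standing in for the big interface: each small implementation supports only its slice of the behavior and loudly refuses the rest, which keeps each one obviously correct.

```java
import java.util.HashMap;
import java.util.Map;

// A hypothetical "big" interface, standing in for something like
// java.sql.ResultSet with its dozens of methods.
interface KeyValueStore {
    String read(String key);
    void write(String key, String value);
    void delete(String key);
}

// One lightweight implementation per slice: reads only.
class ReadOnlyStore implements KeyValueStore {
    private final Map<String, String> data;
    ReadOnlyStore(Map<String, String> data) { this.data = new HashMap<>(data); }
    public String read(String key) { return data.get(key); }
    public void write(String key, String value) {
        throw new UnsupportedOperationException("write");
    }
    public void delete(String key) {
        throw new UnsupportedOperationException("delete");
    }
}

// Another slice: writes only, recording what was written.
class WriteOnlyStore implements KeyValueStore {
    final Map<String, String> written = new HashMap<>();
    public String read(String key) {
        throw new UnsupportedOperationException("read");
    }
    public void write(String key, String value) { written.put(key, value); }
    public void delete(String key) {
        throw new UnsupportedOperationException("delete");
    }
}
```

A test that only reads uses ReadOnlyStore; a test that only writes uses WriteOnlyStore; nobody maintains one monolithic double that tries to be all things for all people.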

And, of course, this is just a preference and not a rule.

How much does that help? Did I get close?

You’re welcome. Indeed, I believe I understood you easily. (You tell me!) As for the energy it takes to understand where someone is coming from, I find that work interesting and enjoyable. And it pays well from time to time. :wink:

Yes, bang on, thanks.

You know, one of the things that’s noticeable from perusing your blog is that you put effort and time into crafting quality responses to people who comment on your posts, mine here being no exception. Thank you for engaging like you do.


First, I’m glad that I helped. Second, thank you for the kind words. Indeed, I do try.