Clearing Up the Integrated Tests Scam

No matter what one writes, no matter how carefully one tries to articulate it, a non-trivial segment of the readership will interpret it differently from what one intended. So it has gone with “Integrated Tests are a Scam”. I’d like to address some common differences of interpretation.


This is a companion discussion topic for the original entry at http://blog.thecodewhisperer.com/permalink/clearing-up-the-integrated-tests-scam

Hi,
For me the term *integration tests* doesn't mean *collaboration tests* + *contract tests*, but exactly what you described as *integrated tests*. According to Wikipedia:
Integration testing is the phase in software testing in which individual software modules are combined and tested as a group.

So for me this is a problem of not having one naming convention, not of you using the wrong word :)

I hope that your article will clarify to others what you really meant!

This is exactly the problem! :) That's why I don't say "integration tests" any more. Even though I might not be causing the problem, I am suffering from it. (All right... only suffering a little.)

I hope that this clarifies it to at least some people!

Hi, the 8-point scam you presented... Well, I haven't seen it. What I witnessed instead is very positive:
1 & 2 exactly as you said
3) we write integration tests / e2e tests
4) we keep on writing unit tests because we know how much value they give us
5) our design is cool, our app works, our tests are green
I really don't know why in the scenario you present the design gets worse. I just don't get it.

Cheers!

And one more thing. You write:

I never use “unit tests”, because that term causes its own confusion, but that’s another article

I hope you write this article, because I'm living my life in the (false) belief that unit tests are well defined. Save me from this fallacy please! ;)

We recently watched your video with my team. For what it's worth, we understood what you meant by integration tests. Anyway, I really do like your clarification in this sentence: "the integrated tests scam happens when we use integrated tests for feedback about the basic correctness of our system, and write them in place of microtests that would give us better feedback about our design"

I'm not sure what there is to write about it, really. Different people use "unit" to mean different things, but when I used to say "unit test", I usually meant "small test". Some units are bigger, and so their "unit tests" would be bigger, so if I mean "small test", then I should say "small test". Several years ago, Michael Hill (@GeePawHill) started saying "microtest", and that literally means "small test", so I stopped saying "unit test" and started saying "microtest".

I don't care what kind of "unit" you want to talk about; I find microtests are the most helpful for criticizing my design, and I rely on that to design better, so I write a lot of microtests.

"We keep on writing unit tests because we know how much value they give us". As long as you do this, then you won't have only integrated tests for some parts of the system, and so those parts will not become more poorly designed as quickly as they would if you only had integrated tests.

When we have only integrated tests for a part of the system, then those tests don't criticize the design as well as microtests do. If we don't have constraints forcing us to improve our design, then the probability is higher that we will not improve it as much. This means more tangled code, which becomes even harder to test with microtests, which encourages us to write only integrated tests for that part of the system. Repeat.

I like your improvements on the test lexicon. As I said on my blog (thegreenbar.wordpress.com):

"I like this terminology [collaboration and contract tests] better than “unit” and “integration”
tests, because the names reflect their function in verifying the implementation rather than just a notion without nuance of their scope."

I think a lot of people get confused and hung up about what different test types mean, and it distracts from the work of doing TDD well.

I'm working on a new blog article on test category definitions and the confusion about them. I was wondering what you think of the following sentence:

"I believe that some contract tests can (and often should) be integrated tests – for example focused tests
that touch a database to verify low-level data model logic (which goes beyond just checking that the database is integrated)."

My thinking is that if in collaboration tests at the service level you mock out the call to the interface that does CRUD for your domain entities, then for the corresponding contract tests it's good to test the concrete implementation of that interface in a way that verifies that the CRUD really works as expected for all the entities concerned, with a real database (so each test is "integrated" because it tests 2 interesting parts of the system at the same time: 1 method under test and the database). Have I interpreted your terminology correctly?

An interesting case about which I haven't thought much before. To me, the contract tests are the test functions that don't refer to a specific implementation of the interface/contract we're checking. It becomes an integrated test (or not) only depending on which instance of that interface you create when you subclass the contract test. If that instance directly integrates with service implementations in the layer below, then it turns the contract test into an integrated test.

In that case, the contract test function is neither an isolated nor an integrated test; however, the contract test function + the subject/instance factory method might be either an isolated or integrated test, depending on exactly what the factory method creates.
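
To make that concrete, here is a minimal sketch in TypeScript with a Jest-style runner. All the names here (PriceCatalog and friends) are invented purely for illustration:

```typescript
// The contract: what any implementation of PriceCatalog must promise.
interface PriceCatalog {
  put(sku: string, price: number): Promise<void>;
  priceOf(sku: string): Promise<number | undefined>;
}

// The contract test function. Note that it never names a concrete
// implementation; it only knows the factory you hand it.
function priceCatalogContract(makeSubject: () => Promise<PriceCatalog>) {
  test("remembers a price it was given", async () => {
    const catalog = await makeSubject();
    await catalog.put("A1", 795);
    expect(await catalog.priceOf("A1")).toBe(795);
  });

  test("reports an unknown SKU as undefined", async () => {
    const catalog = await makeSubject();
    expect(await catalog.priceOf("NOPE")).toBeUndefined();
  });
}

// An in-memory implementation, so the factory below yields isolated tests.
class InMemoryPriceCatalog implements PriceCatalog {
  private prices = new Map<string, number>();
  async put(sku: string, price: number) { this.prices.set(sku, price); }
  async priceOf(sku: string) { return this.prices.get(sku); }
}

describe("InMemoryPriceCatalog", () => {
  priceCatalogContract(async () => new InMemoryPriceCatalog());
});

// Assumed to exist elsewhere: an implementation that talks to a real
// database. The same contract, run through this factory, becomes a
// suite of integrated tests.
declare const SqlPriceCatalog: { connect(url: string): Promise<PriceCatalog> };
declare const TEST_DB_URL: string;

describe("SqlPriceCatalog", () => {
  priceCatalogContract(() => SqlPriceCatalog.connect(TEST_DB_URL));
});
```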

To the original point, yes: when you're writing code in the last layer before an expensive third-party library, then I would expect to implement those contract tests quite often as integrated tests, following the "don't mock types that you don't own" principle.

Thanks for your answer. My article about test category confusion is now up:
https://thegreenbar.wordpre...

I hadn't fully grasped your approach to contract tests, but it's fairly clear to me now. It seems like in a contract test you can't mock out anything that is specific to the concrete implementation of the abstraction under test. Whatever is in the method signatures of the abstraction (including setters and constructors of an abstract superclass under test) is fair game, but if you're testing an interface whose methods accept only data objects or primitive values as arguments, it's a black box.

Much better now, yes ;-) Thank you.

I still bunch collaboration tests into the term "integration tests". For me it still makes sense that those are integration tests, isolated or not. :P So you were fine! I don't think the follow-up post is necessary, but if it helps people, great! And so for me I still say "integration tests are a scam!" :P If people want to know what an integration test is, I boil it down to two types. For example, since I do a lot of React and React is so testable, I can say this:

- testing through a browser or using real things (network, etc.), or, in the case of React, using mount() but hitting a real API over the wire (network calls), a real database, etc.
- testing many parts together, or many components. This might be an isolated test, or it might hit real things, depending on your style. For me, the very few of these I create (in React, using enzyme's mount()) don't hit real things, and they're what I coin "isolated integration tests", though that might confuse people, because "isolated" could mean many things. To me, in this case "isolated" just means an integration test that tests the entire component tree at that seam together via mount(): it's isolated in that it doesn't hit real things, but it's still an "integration test" because it tests the integration, or "collaboration", of many components in the React tree at once (see the sketch below).
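
Here's roughly what one of those looks like for me, sketched in TypeScript with Jest and enzyme (TodoList and its ./api module are made-up names):

```tsx
import React from "react";
import { mount } from "enzyme";
import { TodoList } from "./TodoList"; // hypothetical component under test

// Stub the data-access module so nothing hits the network: the test stays
// "isolated", even though mount() renders the whole component tree.
jest.mock("./api", () => ({
  fetchTodos: jest.fn().mockResolvedValue([{ id: 1, title: "Buy milk" }]),
}));

test("renders the todos it fetches (from the stubbed API)", async () => {
  const wrapper = mount(<TodoList />);

  // Let the stubbed promise resolve, then re-render.
  await Promise.resolve();
  wrapper.update();

  expect(wrapper.text()).toContain("Buy milk");
});
```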

I do love these, though; they're great summaries:

- So, let me clarify again: the integrated tests scam happens when we use integrated tests for feedback about the basic correctness of our system, and write them in place of microtests that would give us better feedback about our design. Don’t do this

- So yes, I write some integrated tests, but many, many fewer than you, for a specific purpose, and I design in a way to intentionally make them obsolete over time. I write them in a way that avoids the scam. And I certainly don’t use them to check that “the system hangs together”. That way lies the scam.

I think it'd be helpful, though, to explain what you mean by "I write them in a way that avoids the scam".

I think a couple of real code examples could help, but I realize you might have some of those in your course?

I like your idea of calling the isolated unit tests 'micro tests', or maybe 'micro isolated tests', which is even better, at least to me. They fit into test newbies' brains more easily at first. Then, after they get their feet wet, you can introduce the term 'seams'.

Yes I know this reply is about a 2 year old post...lol. I'm a busy guy.

It's supposed to be evergreen content. I love seeing new comments on 10-year-old articles, let alone 2.

Regarding code samples, indeed, I illustrate this in all my training, but broadly speaking, I don't try to write _exhaustive_ integrated tests. I treat them more like smoke tests (if I'm writing them for programmers) or acceptance tests (if I'm writing them for customers). I merely mean that I don't rely on them to check basic correctness of behavior that can reasonably run without expensive external resources. (That's why we have interfaces/protocols.)
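
For a feel of the shape, here's one such smoke-style integrated test, sketched in TypeScript with node:test (the endpoint URL is made up):

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// A single happy-path smoke test, written for programmers. It is not
// exhaustive, and it is not how I check basic correctness; the
// microtests do that without expensive external resources.
test("smoke: the orders endpoint responds with a list", async () => {
  // Hypothetical endpoint; substitute whatever your system exposes.
  const response = await fetch("http://localhost:3000/api/orders");
  assert.equal(response.status, 200);
  assert.ok(Array.isArray(await response.json()));
});
```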

Yea, agree. I have found that sprinkling in _a very few *simple*_ integration tests that target only the main output is pretty useful when you have very messy legacy code and have been asked, "Hey TDD guy, you're so awesome at tests. We didn't have time to add them, and now we have a 1500-line React component that is breaking all the time. So... can you please add tests to our nice codebase, you miracle worker?" They come in handy because, in some cases, that's really the only kind of test you can write when you have a messy codebase that is coupled like hell, and you need to start breaking it apart with a little confidence.

I really liked the article, thanks for writing.

I’m currently in a situation where I’m testing how a .NET API interacts with a database.

Because our team is working with a nearly 40-year-old database, we have an analyst who gives us SQL queries that we then parameterize and execute from our API.

Also, no one on the team has any form of write/change access to this database.

Our team is practicing TDD, which I’m proud of, and as a long-time practitioner of TDD, I feel most of our tests are pretty good.

But there is still a decent amount of programming that happens in the sql queries, and in database triggers that were installed years and years ago (before our project even existed).

So, we’re integration testing the endpoints and services that interact with our problematic db.

I found your article insightful, but am curious how you’d test these software elements. We already use contracts, but I don’t know if that helps in this scenario… and what is collaboration testing?

Is that what we’re already doing?

Thanks again, really been enjoying your articles and videos


Thank you for your kind words.

If you treat the database interactions as a black box, then you’re likely already doing collaboration and contract testing the way I’ve described it. If I were in your situation, I would be trying to narrow the scope of the integrated tests, which means moving the boundary as close to the database as I could—but gradually over time. There is no real rush. With a system that old (and, I infer, delicate), safety matters more than speed.
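
Concretely, a collaboration test checks that your service asks its collaborator the right questions and handles the answers correctly, with the collaborator replaced by a test double. A minimal sketch, in TypeScript with Jest rather than your .NET stack, and with every name invented:

```typescript
// The boundary the analyst's SQL lives behind: the service only sees
// this interface, never the 40-year-old database itself.
interface CustomerQueries {
  findCustomer(id: string): Promise<{ id: string; name: string } | null>;
}

// Hypothetical service under test.
class CustomerService {
  constructor(private readonly queries: CustomerQueries) {}

  async greetingFor(id: string): Promise<string> {
    const customer = await this.queries.findCustomer(id);
    if (customer === null) throw new Error(`no customer ${id}`);
    return `Hello, ${customer.name}!`;
  }
}

// Collaboration test: do we ask the collaborator the right question,
// and do we handle its answer correctly? No database involved.
test("greets a customer found through the queries interface", async () => {
  const queries: CustomerQueries = {
    findCustomer: jest.fn().mockResolvedValue({ id: "42", name: "Ada" }),
  };

  const greeting = await new CustomerService(queries).greetingFor("42");

  expect(queries.findCustomer).toHaveBeenCalledWith("42");
  expect(greeting).toBe("Hello, Ada!");
});
```

The matching contract test then runs the real, SQL-backed implementation of CustomerQueries against a test database, which is exactly where it becomes an integrated test.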

As for the logic in your SQL queries, you likely have little choice in the matter: you’ll run integrated tests there, and maybe even manual ones, depending on the programming tools available to you. Maybe you’ll do some kind of shell-level scripting to run a bunch of queries on a test database to be able to do some form of rudimentary checking in the style of Golden Master. Either that or you use your non-SQL programming language purely for its SQL client library, so that you can run queries while being able to use xUnit to write assertions. Whatever happens, there’s probably no opportunity to do collaboration testing within the SQL queries part of the system. I would treat that as a boundary and consider that the integration point.
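
To sketch the Golden Master shape in TypeScript (using the `pg` client purely as an example; the environment variable, file layout, and query names are all placeholders):

```typescript
import { existsSync, readFileSync, writeFileSync } from "node:fs";
import assert from "node:assert/strict";
import { Client } from "pg"; // any SQL client library would do here

// Golden Master: run one of the analyst's queries against a test
// database and compare its output to a previously approved snapshot.
async function checkAgainstGoldenMaster(name: string, sql: string) {
  const client = new Client({ connectionString: process.env.TEST_DB_URL });
  await client.connect();
  try {
    const result = await client.query(sql);
    const actual = JSON.stringify(result.rows, null, 2);
    const masterFile = `golden-masters/${name}.json`;

    if (!existsSync(masterFile)) {
      // First run: record the output for a human to approve.
      writeFileSync(masterFile, actual);
      return;
    }
    assert.equal(actual, readFileSync(masterFile, "utf8"));
  } finally {
    await client.end();
  }
}
```

On the first run you approve the recorded output by eye; after that, any change in the query's observable behavior fails the comparison.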

My guess is that you’re already doing enough here. If you can push the boundary closer to the database, try it, but don’t feel obliged to push very hard. When a good idea comes to you, try it.

Good luck.
