Integrated Tests Are A Scam

April 5, 2009 (updated December 12, 2015)


This is a companion discussion topic for the original entry at http://blog.thecodewhisperer.com/permalink/integrated-tests-are-a-scam

J.B.,

1. Having read most of your blog posts on integrated tests, I have a far less allergic reaction to your "Integrated tests are a scam" position.

2. I didn't understand from my earlier reading that you hold Acceptance Tests in higher esteem than Integrated Tests. Now that you have defined Integrated Tests in detail, I understand how and why you're drawing the distinction between Integrated Tests and Acceptance Tests. That defuses most of my objections to the statements I thought you were making (through my own misinterpretation / over-simplification of your position).

3. I think the 6 seconds vs. 60 seconds problem is a real issue, and you illustrated it well. To state the obvious, Acceptance Tests trigger the 60-second problem far less often than frequently run Integrated Tests do.

4. Aslak Hellesoy is absolutely right when he points out that testers can effectively choose high-value tests through combinatorial and pairwise test case design. That strategy can often detect the significant majority of defects in a System Under Test while executing fewer than 1% of the total possible number of tests. Trying to write truly comprehensive Integrated Tests would be lunacy. (A small sketch of the pairwise idea appears after this comment.) See, e.g.:
http://www.combinatorialtes... and
http://hexawise.com/case-st...

5. If you had your way, everyone worldwide involved in developing and testing software would understand the significant costs and serious limitations of Integrated Testing. If I had my way, those same people would all understand the significant efficiency and effectiveness gains they could achieve through pairwise and combinatorial testing methods. It amazes me that such a large percentage of the software development and software testing community remains unaware of such a powerful approach.

6. I used to think our views were mostly opposed. (Again, mea culpa: it was because I didn't take the time to fully understand what you were and were /not/ saying). I now think our views are mostly consistent.

- Justin Hunter
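A minimal sketch of the pairwise idea mentioned in point 4, with every name invented for illustration: three configuration parameters with two values each would need 2 × 2 × 2 = 8 exhaustive combinations, yet the four hand-picked rows below already cover every pair of values. A real selection would come from a pairwise tool such as the ones linked above.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.stream.Stream;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

class CheckoutPairwiseTest {

    // Exhaustive testing of 3 two-valued parameters needs 8 cases; these
    // 4 rows still cover every pair of values (browser/locale,
    // browser/role, locale/role).
    static Stream<Arguments> pairwiseCases() {
        return Stream.of(
            Arguments.of("firefox", "en", "guest"),
            Arguments.of("firefox", "de", "admin"),
            Arguments.of("chrome",  "en", "admin"),
            Arguments.of("chrome",  "de", "guest"));
    }

    @ParameterizedTest
    @MethodSource("pairwiseCases")
    void checkoutSucceedsForSelectedConfigurations(String browser, String locale, String role) {
        assertTrue(new Checkout(browser, locale, role).succeeds());
    }

    // Hypothetical system under test, stubbed only so the sketch compiles.
    record Checkout(String browser, String locale, String role) {
        boolean succeeds() { return true; }
    }
}
```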

And instead just do functional tests?

I tried that. I noticed that those tests don't exert enough pressure on the design; they don't provide enough warning about tangled dependencies.

Besides, how often are functional tests not integrated? As far as I can tell, the concept of customer unit tests never really took off.

Hello, many thanks for this blog post and for the Agile 2009 presentation, which is one of the best I've ever seen.

A short comment from my side. One thing I try to fight against is the idea of creating a test fixture by bringing the whole Spring context to life instead of simply creating a few objects in a setup() method. This makes tests run much longer and requires the team to maintain numerous Spring XML config files. I know this is not exactly what you are talking about, but for me it is just another example of blurring the border between unit (focused) tests and integration tests, mainly because someone is too lazy to create objects in test code.

Cheers!

Dziękuję, Tomek! I have had the same experience with Spring, and so when someone asks me about that, I give them a simple Novice Rule: never load a Spring configuration in an automated programmer test. Eventually a person will reach the point where he wants to test that he has configured Spring correctly, but understands not to use Spring to test the classes he instantiates with Spring.
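A minimal sketch of that split, with hypothetical class names throughout: the behavior gets a plain unit test that simply instantiates the object with a hand-rolled stub, while a single, separate test loads the Spring configuration only to prove the wiring exists.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.springframework.context.support.ClassPathXmlApplicationContext;

// Hypothetical production types, sketched just enough to compile.
interface CustomerRepository { Customer findById(String id); }
record Customer(String id, boolean preferred) {}

class InvoiceService {
    private final CustomerRepository customers;
    InvoiceService(CustomerRepository customers) { this.customers = customers; }

    double totalFor(String customerId, double amount) {
        return customers.findById(customerId).preferred() ? amount * 0.9 : amount;
    }
}

// Focused test: create the objects directly; no Spring context involved.
class InvoiceServiceTest {
    @Test
    void appliesDiscountForPreferredCustomers() {
        CustomerRepository preferredOnly = id -> new Customer(id, true);
        assertEquals(90.0, new InvoiceService(preferredOnly).totalFor("customer-42", 100.0), 0.001);
    }
}

// Wiring test: load the Spring configuration once, only to prove the bean
// is declared and wired, not to re-test the behavior covered above.
class SpringWiringTest {
    @Test
    void invoiceServiceIsWiredUp() {
        try (var context = new ClassPathXmlApplicationContext("application-context.xml")) {
            assertNotNull(context.getBean(InvoiceService.class));
        }
    }
}
```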

Hi,
That's what makes the whole idea fail. Whenever setup is an essential part of the application (and in most cases it is), you can't avoid integration tests. Logging, security, transactions, or anything else you put into an aspect or interceptor must be tested somehow, and only an integration test can prove it works. And as soon as you have one integration test, you add more and more. Then add automated UI/Selenium testing, and your team will never write pure, focused tests. Doh.

By your definition, aren't acceptance tests a form of integrated tests? Do you keep those, or do you tend to throw them out? Or is the point simply to do the testing you have to do (i.e., acceptance tests for feature specification) but not go down the rabbit hole of testing merely because it gives you some kind of invalid personal assurance that your app will be bug-free?

Hello, Ryan. Thank you for your comment.

Yes, most people implement most acceptance tests as integrated tests. I have experimented with "customer unit tests" as James Shore has called them, in which I take examples from a customer and run them directly against the smallest cluster of objects (sometimes a single one) that implements them, but those tend to make up perhaps 1% of my test suites.
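A rough sketch of what such a customer unit test can look like, with a domain invented for illustration: the customer's example runs directly against the single domain object that implements it, with no UI, database, or service layer in between.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class FareCalculationCustomerExampleTest {

    // The customer's example: "A senior travelling off-peak pays half the
    // peak adult fare of $4.00." Domain and numbers are hypothetical.
    @Test
    void offPeakSeniorFareIsHalfThePeakAdultFare() {
        FareSchedule schedule = new FareSchedule(4.00);
        assertEquals(2.00, schedule.fareFor(PassengerType.SENIOR, TravelPeriod.OFF_PEAK), 0.001);
    }

    // Hypothetical domain types, sketched only so the example compiles.
    enum PassengerType { ADULT, SENIOR }
    enum TravelPeriod { PEAK, OFF_PEAK }

    record FareSchedule(double peakAdultFare) {
        double fareFor(PassengerType type, TravelPeriod period) {
            boolean discounted = type == PassengerType.SENIOR && period == TravelPeriod.OFF_PEAK;
            return discounted ? peakAdultFare / 2 : peakAdultFare;
        }
    }
}
```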

I don't use integrated tests to show basic correctness (see http://link.jbrains.ca/OcKoSm for more), and anyway this entire discussion relates to programmer tests, not customer tests. I use integrated tests only where I want to check integration: system-level tests, smoke tests (very few in number), performance and reliability tests... but not to show that an object would compute the right answer running on a Turing machine.
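By way of contrast, one of those "very few" integrated smoke tests might be as small as this sketch, where the URL and endpoint are placeholders: it checks only that the deployed, wired-together system answers at all.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class SmokeTest {
    @Test
    void deployedSystemAnswersItsHealthEndpoint() throws Exception {
        // Placeholder address; a real suite would read the environment's
        // base URL from configuration.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/health")).GET().build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Checks integration only: the system is up and wired together.
        assertEquals(200, response.statusCode());
    }
}
```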

I hope this helps clarify things for you. I have said this for years in my training courses, but whenever I try to write about it, it grows to thousands of words, and I don't want to inflict too many such articles on the world.

Thanks so much for your response. A colleague and I were discussing this post, and while I agree with it, I found the concept of throwing out customer acceptance tests challenging.

It appears I need to write this in bigger, brighter letters. :)

So, what do you recommend instead of integrated tests? You spent the whole article saying why they're bad (and made valid points), but didn't propose any kind of alternative.

This is only the first article in a series, but sadly, the link to the rest of the articles is broken. Until I fix it, try these Google keywords: "site:blog.thecodewhisperer.com integrated tests"

Link fixed. Thanks.

Awesome, I'll check those out :)

Do you treat the ORM / data framework as trivial behavior? In other words: do you always put your data access strategy behind a layer?

I assume that I should hide data access behind a layer, then wait for evidence that I should change that. I make this decision based on past experience. I learned this one very early (http://dddcommunity.org/cas...), so I sometimes forget to revisit it when things have changed, and I need other people to remind me to reconsider it. :)

This decision is driven by the utopia of the "persistence independent" model, isn't it? Do you think it's realistic? Would you be able to build your domain and then say, "OK, I'll store it with XML serialization, or NHibernate, or RavenDB, or JSON files, or Entity Framework..."? Persistence is the main bottleneck in most applications; hiding it behind a layer means you'll have less control once you want to improve your response time.

http://ayende.com/blog/4567...

I don't want to be able to switch ORMs, but I also don't want my persistence services to pervade the rest of my system, because then it's *really* hard to change how some part of it behaves.

"...hiding it behind a layer means you'll have less control once you want to improve your response time." I think the opposite: hiding it behind a layer means I have total control over improving response time, because there's almost no chance that clients could depend on the implementation details of the persistence services, do they can change almost any way they want, as long as that doesn't break the basic behavior of find and save.

It seems REA Australia has a Ruby gem to enforce client/server contract tests - worth a look.

https://github.com/realesta...
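For readers unfamiliar with the idea, here is a sketch of the contract-test pattern in Java rather than the gem itself, with all names invented: the same assertions run against both the test double that client tests rely on and (via a second subclass) the real implementation, so the two cannot quietly drift apart.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical interface shared by the real service client and its fake.
interface RateProvider {
    double rateFor(String currencyPair);
}

// The contract, stated once: every implementation must pass these tests.
abstract class RateProviderContract {
    protected abstract RateProvider createProvider();

    @Test
    void returnsAPositiveRateForAKnownCurrencyPair() {
        assertTrue(createProvider().rateFor("EURUSD") > 0);
    }
}

// Runs the contract against the in-process fake that client tests use.
class FakeRateProviderContractTest extends RateProviderContract {
    @Override
    protected RateProvider createProvider() {
        return currencyPair -> 1.08; // canned answer, but it must still honour the contract
    }
}

// A second subclass (omitted here) would run the same contract against the
// real, network-backed implementation, keeping client and server in agreement.
```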