10 replies
August 2012

flografle

Thanks, really good piece. The link to "when to fake and when to mock" is dead, do you have a live one?

2 replies
August 2012 ▶ flografle

jbrains

Yes. It's here: http://legacy.thecodewhispe...

Thank you for letting me know; I'll update the link.

1 reply
August 2012 ▶ flografle

jbrains

I did one better: I migrated the post over to the new blog.

August 2012 ▶ jbrains

flografle

Thanks, I appreciate it.

July 2014

ittaizeidman

Thanks for the post.
I found it useful, but it somewhat conflicts with a notion that has been building for me (following conversations with colleagues): even though we can mock Services, maybe we shouldn't (all the time).
Maybe we should opt for more component/cluster tests inside the "hexagon",
because mocking means I want to pin down an interaction, and maybe we should try to limit those mainly to the ports.
The added value I see, of course, is a lot more flexibility with respect to bigger changes inside the hexagon that have no functional bearing: my service still gets the same request and outputs the same response, but internally the services are different.
Would really like to hear your thoughts on this.

1 reply
September 2014 ▶ ittaizeidman

jbrains

Yes, Ittai, I usually counsel against doing anything "always". For that reason, I (usually) advise people to "mock Services freely", meaning "don't feel any guilt when mocking a Service".

On the other hand, I often advise people "mock all Services for the next four weeks" as a form of deliberate practice. Sometimes the only way to find the edge of the cliff is to fall over it a few times. Once you've mocked a lot of Services you'll feel more able to judge when to do it and when not to. This relates to the "peers v. internals" notion in Growing Object-Oriented Software Guided by Tests.

Often we end up with small clusters of objects that work together to expose a single component API. I might or might not find myself separating those objects from each other by interfaces. As always, it depends. Often if I do separate those objects by interfaces, then I discover that part of the component stabilises and becomes more widely reusable, and the rest becomes a plugin/policy/strategy.

As you point out, I mock Services in order to find the essential interactions between modules. We could do that other ways, but I find mocking Services especially effective at doing that.
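A minimal sketch of what "mocking a Service to find the essential interactions" can look like; the names (`OrderPlacer`, `order_placed`) are hypothetical, not from the post:

```python
from unittest.mock import Mock

# Hypothetical outbound port: a notification Service the domain depends on.
class OrderPlacer:
    def __init__(self, notifier):
        self.notifier = notifier  # the Service, injected

    def place(self, order_id):
        # ...domain work elided...
        self.notifier.order_placed(order_id)

def test_placing_an_order_notifies_the_service():
    notifier = Mock()
    OrderPlacer(notifier).place(42)
    # The test pins down the essential interaction, not the Service's internals.
    notifier.order_placed.assert_called_once_with(42)

test_placing_an_order_notifies_the_service()
```

The test says nothing about how the notifier works, only that the module sends it the right message, which is the point of mocking a Service here.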

September 2014

fiunchinho

Nice article.
About using test doubles for entities, I think it also depends on how rich your model is. If your entities have almost no business rules, then there is no point in using test doubles. It's cheap to use the actual entity in your test, and you don't have to assert method calls anyway.

But if your Entity is taking care of business rules, you will most likely call entity methods from your service. In this case, I find it useful to mock entities so I know that those method calls (I mean commands, not queries) are being sent correctly. I think it's easier to use interaction-based testing, since you don't always have the possibility to assert the entity state using state-based testing: you don't always have a public interface for that.
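The interaction-based check described above might be sketched like this; `SuspendAccountService` and `suspend` are hypothetical names:

```python
from unittest.mock import Mock

# Hypothetical application Service driving a rich Entity.
class SuspendAccountService:
    def execute(self, account, reason):
        # The Entity owns the business rule; the Service just sends the command.
        account.suspend(reason)

def test_service_sends_the_suspend_command():
    account = Mock()  # test double standing in for the Entity
    SuspendAccountService().execute(account, "chargeback")
    # Verify the command was sent, without asserting on entity state.
    account.suspend.assert_called_once_with("chargeback")

test_service_sends_the_suspend_command()
```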

I'd love to see your thoughts on this.

1 reply
September 2014 ▶ fiunchinho

jbrains

Business rules, yes; infrastructure, no.

In short, business rules should run entirely in memory. If they can't, then they depend on the infrastructure choices of a specific application, and this violates the Dependency Inversion Principle in its most fundamental way.

If we design our Entities as snapshots of a persistent "thing", then we can treat them almost as Values, and in that case, we wouldn't need to extract interfaces for/mock them. An Entity equates to a Value + Behavior. If that behavior becomes sufficiently complicated or we want to make it Pluggable, then we reach for an interface/mock.
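A sketch of the "Entity as snapshot, almost a Value + Behavior" idea, with a hypothetical `Membership` entity whose business rule runs entirely in memory, so a state-based test needs no interfaces or mocks:

```python
from dataclasses import dataclass

# Hypothetical Entity-as-snapshot: a Value plus behaviour, entirely in memory.
@dataclass(frozen=True)
class Membership:
    level: str
    points: int

    def earn(self, amount):
        # Illustrative business rule: promote to Gold at 1000 points.
        new_points = self.points + amount
        new_level = "Gold" if new_points >= 1000 else self.level
        return Membership(new_level, new_points)

# State-based test: compare snapshots by value, no mocks involved.
assert Membership("Basic", 900).earn(200) == Membership("Gold", 1100)
```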

The problem comes when we let an Entity depend (even through an interface!) on an application-level Service. We have to approach that with caution, even with interfaces/mocks involved. Abstractions can leak very easily in that case, and I see exactly that happen all over the place. For example, an authorisation policy might make sense as a business policy, but persistence acts only as an application policy. I design the application to persist the Entities, rather than the Entities to persist themselves. This makes ORMs of dubious value. Again, use with care.

At the same time, some authorisation policies only apply to a specific application (user X can access page Y) and others to the underlying business (only Gold members can schedule appointments during these times). I prefer to separate those from one another, and I don't always find it clear how to distinguish them. In those cases, I care more about "cost of change" than "getting it right".

October 2014

oscherler

As I was testing a couple of new services the other day, I reached a point where all my tests were passing, yet I was missing two very important pieces: I had forgotten to register service A in my dependency injection container, and I hadn’t yet installed an external library that was a dependency of service B.

Yet all my tests were passing, because I’m instantiating service A in my tests for it and mocking it in the other tests — skipping the DIC both times — and I’m mocking the external dependency in my tests for service B, and not testing it as it’s an external dependency.

I guess I need to add a few (not many) integrated tests to test the “plumbing”: that all my services are registered in the DIC properly and configured with the right dependencies. It makes sense, but it was an interesting realisation.

1 reply
October 2014 ▶ oscherler

jbrains

I assume that the Entry Point uses the Container to obtain an instance of service A, passing it to whoever needs it. I also assume that the Entry Point similarly controls the lifecycle of service B. In both cases, a smoke test would help check that the Entry Point assembles the application correctly. Of course, we add the fewest smoke tests possible.

When you register service A in a Container, you must do this because you want some Framework (that uses the Container) to instantiate a class X for you, and X uses A. Can you write a smoke test for the application that discovers all the classes (like X) that Framework will instantiate for you, and simply try instantiating them all, checking for exceptions? That might help. If not, then why not?

I'd like to avoid testing the plumbing in the style of "I added service X, now let me verify that the container can instantiate an X with the expected implementation". I'd rather test that the system can instantiate the things that need X.
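One way such a smoke test might look, sketched against a toy hand-rolled container; the real framework's `register`/`resolve` API will differ, these names are illustrative only:

```python
# Toy dependency-injection container standing in for whatever the real
# framework provides.
class Container:
    def __init__(self):
        self._factories = {}

    def register(self, name, factory):
        self._factories[name] = factory

    def resolve(self, name):
        return self._factories[name](self)

def smoke_test(container):
    # Try to instantiate everything the framework would, and fail loudly
    # on missing registrations or missing dependencies.
    failures = []
    for name in list(container._factories):
        try:
            container.resolve(name)
        except Exception as error:
            failures.append((name, error))
    assert not failures, f"wiring broken: {failures}"

container = Container()
container.register("service_a", lambda c: object())
# service_b obtains service_a through the container, like class X above:
container.register("service_b", lambda c: (c.resolve("service_a"), object()))
smoke_test(container)
```

A forgotten registration or an uninstalled dependency then fails this one test instead of passing silently through mocks.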