Keep Dependency Injection Simple

I taught evolutionary design in person to about 200 programmers this year. In the 15 years that I’ve been doing this, I’ve spoken with confused programmers who felt unsure how to use dependency injection to improve their design. I tend to notice this confusion more often among the programmers who have “some experience” practising test-driven development—say, 6 months to 2 years. I’d like to share a few short pieces of advice, and if this advice leads you to ask some follow-up questions, then I invite you to ask them, so that I can help you even more.


This is a companion discussion topic for the original entry at http://blog.thecodewhisperer.com/permalink/keep-dependency-injection-simple

Containers are an abstraction over manually configuring the dependency graph. They provide a high-level language for managing the configuration, composition, and lifetime of objects.

From this point of view, the program is the same whether we configure it manually or with the container.
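To make "an abstraction over manually configuring the dependency graph" concrete, here is a toy sketch of the idea in plain Java. This is not any real container's API; `ToyContainer`, `Service`, and `X` are hypothetical names invented for illustration. The point is only that we register recipes and let the container resolve the graph, instead of nesting `new` expressions ourselves.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy container: maps a type to a recipe for building an instance of it.
class ToyContainer {
    private final Map<Class<?>, Function<ToyContainer, ?>> recipes = new HashMap<>();

    <T> void register(Class<T> type, Function<ToyContainer, T> recipe) {
        recipes.put(type, recipe);
    }

    <T> T resolve(Class<T> type) {
        // Look up the recipe and run it, letting it resolve its own dependencies.
        return type.cast(recipes.get(type).apply(this));
    }
}

class Service {}

class X {
    final Service service;
    X(Service service) { this.service = service; }
}

public class ToyContainerDemo {
    public static void main(String[] args) {
        ToyContainer container = new ToyContainer();
        // The "high-level language": declare how each piece is built...
        container.register(Service.class, c -> new Service());
        container.register(X.class, c -> new X(c.resolve(Service.class)));
        // ...and ask for the top of the graph; the container wires the rest.
        X x = container.resolve(X.class);
        System.out.println(x.service != null);
    }
}
```

Whether we write `new X(new Service())` by hand or let the container follow the recipes, the resulting object graph is the same, which is the point being made above.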

Now, one main benefit we gain from using a container is that it can warn us about common configuration problems.

One benefit of using containers is that they can be the first gate in preventing common injection problems. The container can throw errors early when injecting a short-lived object (transient) into a long-lived object (singleton).

In these cases, the transient object inherits the lifetime of the singleton, which means the transient object's constructor executes only once! The container will warn us about this, catching the problem early.
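This "captive dependency" problem can be shown without any container at all. A minimal sketch, with hypothetical names (`RequestContext`, `ReportService`): the context is meant to be transient, one per request, but because the singleton service receives it once at construction time, the same instance serves every request.

```java
// Meant to be transient: one fresh instance per request.
class RequestContext {
    static int constructed = 0;
    RequestContext() { constructed++; }
}

// Meant to be a singleton: one instance for the whole application.
class ReportService {
    private final RequestContext context;  // captive: lives as long as the service
    ReportService(RequestContext context) { this.context = context; }
    void handleRequest() { /* uses the same, stale context every time */ }
}

public class CaptiveDependencyDemo {
    public static void main(String[] args) {
        ReportService service = new ReportService(new RequestContext());
        service.handleRequest();
        service.handleRequest();
        // Despite two requests, the "transient" was constructed exactly once.
        System.out.println(RequestContext.constructed);  // prints 1
    }
}
```

A container that tracks declared lifetimes can reject this wiring at startup instead of letting the stale context leak silently into production.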

The other benefit is that they make cleaning up legacy code a little easier: we can push the configuration for a component up to the top of the application.

Since containers remove the "dependency carrying" problem, we can register the dependency at the top, request it in the component's constructor at the bottom and that's all we need to do. No need to modify constructors all the way up the chain and fix all the tests.

I am one of those programmers trying to get their head around DI frameworks. I am sure I understand the basic principle, and I do manual injection in some tests, as described in Joe's article. Currently I am trying hard to use Dagger 2 for Android development, and so far I can't say that it has helped me develop Android apps better or faster, nor made the more difficult parts, such as networking, easier to test. For instance, one of the better-known recommended practices for injecting a fake network API for testing is to prepare build variants (Android Gradle's flavors).

I haven't thought about this entire comment yet, but I plan to. For now, I want to reply to something specific: "No need to modify constructors all the way up the chain and fix all the tests." I don't understand this problem. More accurately: I think I understand this problem, but I don't understand why programmers do this. I hope I have merely understood this incorrectly. Let me try:

"Down here" I need a Service, so I have some class X that asks for Service in its constructor. I write new X(new Service()). Now, of course, G needs an X, then F needs a G, and so on back up to A. For some reason, programmers seem to have this idea that A, B, C, ..., up to G need to know about Service. This leads to this so-called "dependency carrying" problem. I don't get it. I write this: new A(new B(new C(new D(new E(new F(new G(new X(new Service())))))))); Only X and the entry point (which creates A) knows about Service. I see no "dependency carrying". I see no need to "modify constructors all the way up the chain and fix all the tests". X uses Service, so checking the behavior of X involves the contract of Service. Otherwise, nobody else cares about the behavior Service, not even the entry point.

So what am I missing?

"Containers are an abstraction over manually configuring the dependency
graph. They provide a high level language for managing the
configuration, composition and lifetime of objects." I understand this in principle, but I don't consider benefit worth the price. In Java, for example, we simply instantiate objects and pass them into each other's constructors. What actual benefit does this high-level language provide here?

Also, I rarely want to inject request-level services, so I can't think of a situation in which I'd want to inject a request-scope dependency. Can you describe an example of needing to do this? I tend to use Value Objects for request-scope input, and I tend mostly to inject (apply partially) only Services (stateless when possible). Moreover, since injecting dependencies and applying functions partially mean the same thing, I can't imagine wanting to partially apply something that would change from request to request: doesn't that defeat the purpose of partial application?

So I'm still missing something. I don't see in "common configuration problems" a problem I have actually experienced. When would one ever inject a short-lived object, rather than simply always injecting ones with a longer lifetime? A short-lived object sounds like a request-scope function parameter to me.

I suppose I need to see an example of a code base using one of these containers so that I could debate with someone the merits of applying the container in a particular way. So far, everything I've seen---and I admit I haven't seen much---shows evidence of significantly misunderstanding the point of injecting dependencies.

Please keep trying, as long as you have the energy to do so. :)

If you have a Service X that you need in A, D, and F, you need to "carry" Service X through the constructors of B, C, and E. You can skin it many ways, but you're going to need to carry something through some constructors.

Let me see whether I understand this.

A, D, and F all need X. OK.

X x = new X(new Service());
new A(x, new B(new C(new D(x, new E(new F(x, new G(x)))))));

I don't understand "through the constructor" here. I added a parameter to three constructors. E doesn't know about F's constructor (why would it?). Neither B nor C knows about D's constructor (why would it?). If you're not sure, then introduce variables for the instances of classes A through G.

What's the problem? Where is the "carrying"? I don't see it.

I may be wrong, but I think the "carrying" problem comes from not-fully-DI-oriented applications in which each object in the dependency graph explicitly creates the ones below it. In that case, if you create a dependency at a higher level, then you are forced to pass it down the dependency graph explicitly, through each constructor, until it finally reaches the one object that really needs it.

That is approximately what I'm asking about and precisely what we avoid when we apply the Dependency Inversion Principle. I'd also like to hear it from the people who actually experience the problem, because I genuinely feel like I'm missing something.

My biggest problem with DI containers (at least with annotation-driven ones) is the lack of compile-time checking of whether all your dependencies are set up and set up correctly.
With non-annotation-driven ones, I don't really see the benefit, the same way you don't.

I think that the people who have a problem with this probably don't realize that all these "new" statements happen *OUTSIDE* all of the classes that you listed there. Creating instances and injecting them is done by "some other code" that is not part of any of the classes that implement the application code.

The logic goes something like this: We know that A uses B uses C, etc, down to G uses X uses Service. You're telling me that class X should stop doing "new Service()", and should receive it as a constructor parameter. OK; well that would mean that when the code in some method of class G does "new X", it should pass in "s", an instance of Service. So class G would need to receive an instance of Service in its constructor. So class F would need it in the code where it does "new G", and so F also needs to receive the instance of Service in its constructor.
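That chain of reasoning, sketched in Java with hypothetical names: because F insists on doing "new G(...)" itself, F must accept a Service it never uses, purely to hand it onward.

```java
class Service {}

class G {
    final Service service;
    G(Service service) { this.service = service; }  // G actually uses it
}

class F {
    final G g;
    // F needs Service only to build G: this is the "carrying".
    F(Service service) {
        this.g = new G(service);
    }
}

public class CarryingDemo {
    public static void main(String[] args) {
        Service s = new Service();
        F f = new F(s);
        // The service F "carried" ends up inside G, where it belongs.
        System.out.println(f.g.service == s);
    }
}
```

Inverting this, so that F receives an already-built G in its constructor, deletes the Service parameter from F entirely, which is the move the rest of this thread describes.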

That all "makes sense," but is bad design. You really don't want classes A through G to have to know about Service. What you want is for something "outside" (like another class, or a "magic" 3rd party framework) to know about all the dependencies and how classes are "wired up." Then each class only needs to know about its immediate dependencies. All the "really messy" detailed implementation dependencies get moved "outside" of the application classes into "something else."

That "outside" thing is a mess, and it is full of nasty dependencies. But that's OK: It's small, very focused, and easily changed. It's separate from your business logic, so that the two don't get "tangled" with each other.

Yup. https://blog.thecodewhisper...

I would add that the "bad design" is doing its job: it's telling you that you've scattered cohesive behavior throughout the system. Pushing it up the call stack towards the entry point gathers that behavior into one place. That's the trick.

My point is that Dependency Injection does something *different* from pushing things up the call stack: DI extracts all the dependencies and class-to-class wiring "off to the side" -- to a place that exists independently of the stack.

Step 1: Some "all knowing" class or magic framework selects all the right implementation classes for this run and wires everything up.

Step 2: The application code runs. Lots of stack pushing and popping goes on here.

It seems we're using the same words to mean different things.

When I say "dependency injection" I only mean the act of injecting the dependency. When you write "DI" here, you seem to be referring to "DI *containers*". A DI container is not DI. You don't need to use a DI container to do DI. Indeed, this assumption that one needs a container to do DI creates most of the problems that people seem to associate with DI.
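Injecting a dependency, in this narrow sense, needs nothing more than a constructor parameter. A minimal sketch with hypothetical names: because Service is an interface, a test can hand X a stub with no container, framework, or wiring file involved.

```java
// The contract X depends on.
interface Service {
    String fetch();
}

class X {
    private final Service service;
    X(Service service) { this.service = service; }  // "injection" = passing an argument
    String work() { return service.fetch().toUpperCase(); }
}

public class XStubDemo {
    public static void main(String[] args) {
        // A lambda serves as a stub implementation of the Service contract.
        X x = new X(() -> "stubbed");
        System.out.println(x.work());  // prints STUBBED
    }
}
```

Production code injects a real implementation the same way; only the composition root knows which one it is.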

As promised in the twitters, I did a simple example in working Java code for us to pull apart. I started with a new A(new B(new C(new D(new E(new F(new G(new X(new Service())))))))); kind of thing, as per your earlier comment. It's a simple App that calls a Service that needs a Repo: that's in this commit. As per your example, even if the chain were longer there would be no need to carry dependencies all the way through the constructors - I'm injecting things in the App as necessary.

Then I added Spring DI container config to allow me to easily test the Application method in this commit (technically two commits, I forgot an interface). That's the only benefit of the DI container here. My design didn't change and I pretty much added it as an afterthought.

After that, I demonstrated one obvious alternative choice: making the Service depend on the list of people and having the App do the orchestration of getting the people from the Repo, which is done here. Having the DI container lets me test that the App still works, and documents how the thing orchestrates the rest of the app.

You might decide the App isn't worth testing, or decide to push things around into other places where some of them aren't worth testing either, and that would be fair enough; but I'm interested in your thoughts and alternative designs. Feel free to fork, PR, whatever.