Play DI: compile time versus runtime

Shane Auckland
shanethehat
Jan 7, 2016 · 2 min read


Scala’s Play Framework provides a runtime dependency injection mechanism by default, courtesy of Google Guice. The framework is built with flexibility in mind, however, and allows developers to replace the default application loader with a custom one so that dependencies are resolved at compile time. More details on this technique can be found here.
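To make that concrete, here is a minimal sketch of a custom loader in the style of Play’s compile time DI documentation for a 2.4/2.5-era project. The names (MyApplicationLoader, MyComponents, controllers.Application) and the shape of the generated router.Routes constructor are assumptions based on a default project with a single routed controller.

```scala
import play.api.ApplicationLoader
import play.api.ApplicationLoader.Context
import play.api.BuiltInComponentsFromContext
import router.Routes

// Selected in application.conf with:
//   play.application.loader = "MyApplicationLoader"
class MyApplicationLoader extends ApplicationLoader {
  def load(context: Context) = new MyComponents(context).application
}

// Every dependency is wired by hand, so a missing piece is a compile error
// rather than an injection failure discovered later.
class MyComponents(context: Context) extends BuiltInComponentsFromContext(context) {
  lazy val applicationController = new controllers.Application()
  lazy val router = new Routes(httpErrorHandler, applicationController)
}
```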

For the last couple of weeks, basically since reading Loïc’s blog post, I’d been working under the impression that compile time DI is a safer, and therefore better, approach. Having just spent a rather unhappy day trying to functionally test non-trivial implementations, I’m no longer convinced. In this post, I’m going to put forward my arguments for both cases in the hope that it will help me make up my mind.

Why compile time?

As developers, we like to fail early, especially before our application gets anywhere near a user. Resolving dependencies at compile time means that a missing or misconfigured dependency is a build failure, rather than an error that only surfaces at runtime.

On the face of it, compile time DI also provides more explicit control over what is injected, which is definitely appealing when you want to provide alternative implementations. By contrast, Play’s runtime DI implementation uses annotations and an element of automated discovery.
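For comparison, here is a hedged sketch of the runtime approach, again against 2.4/2.5-era APIs. WidgetService, DefaultWidgetService and WidgetController are invented names used purely for illustration.

```scala
import javax.inject.Inject
import play.api.mvc.{Action, Controller}

// A hypothetical service used only to illustrate the wiring.
trait WidgetService {
  def listAll: Seq[String]
}

class DefaultWidgetService extends WidgetService {
  def listAll: Seq[String] = Seq("widget-1", "widget-2")
}

// Runtime DI: Guice sees the @Inject annotation and supplies a WidgetService
// when it instantiates the controller. Binding the trait to DefaultWidgetService
// happens elsewhere (a Guice Module or an @ImplementedBy annotation), so the
// concrete implementation is discovered at runtime rather than spelled out here.
class WidgetController @Inject() (widgets: WidgetService) extends Controller {
  def index = Action {
    Ok(widgets.listAll.mkString(", "))
  }
}
```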

Why runtime?

Providing a custom ApplicationLoader carries a boilerplate overhead compared to Guice’s @Inject annotation. Because the ApplicationLoader is responsible for assembling the router, adding a controller (even one with no dependencies) means updating the ApplicationLoader to pass the new controller to the router’s constructor. This could become unwieldy in larger applications with many controllers and granular constructors, as the sketch below shows.
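Continuing the earlier sketch, here is roughly what adding a new, dependency-free controller looks like under compile time DI. ReportController is hypothetical, and the Routes constructor shape depends on what is in the routes file.

```scala
import play.api.ApplicationLoader.Context
import play.api.BuiltInComponentsFromContext
import router.Routes

class MyComponents(context: Context) extends BuiltInComponentsFromContext(context) {
  lazy val applicationController = new controllers.Application()
  // The new controller has no dependencies of its own, but it still has to be
  // constructed here and threaded through to the generated router by hand.
  lazy val reportController = new controllers.ReportController()
  // The Routes constructor arguments must mirror the routes file, so this call
  // changes every time a routed controller is added or removed.
  lazy val router = new Routes(httpErrorHandler, applicationController, reportController)
}
```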

The fail-early argument also loses some of its force if the application has a suite of functional and integration tests that properly covers the production dependencies and runs early. Using configuration rather than dependencies to manage the application’s boundaries means that the production dependencies can be exercised fairly early in the pipeline, so failures should be easy to catch with automated CI tooling.

Next steps

Obviously this is a bit of a hand-wavy opinion piece, but hopefully it will bring up some more viewpoints. For me, the next step is to produce a more detailed examination, with working examples of both approaches handling non-trivial scenarios.
