Tour de Lagom Part 1: Lagom and microservices
Lagom is a framework for building reactive microservices in Scala or Java with an emphasis on developer productivity. I’ve had a chance to work with it for some time and would like to share my impressions about it. This is going to take a bit longer than just one post, so you’re reading part one of a 3-part series:
- Part 1: Lagom and microservices
- Part 2: Lagom and reactive
- Part 3: Lagom and developer productivity
The idea is to cover most of what Lagom has to offer by looking at it from various angles. Let’s go!
Lagom and microservices
First off, the word microservices, whilst having won the Golden Hype Award of 2016, actually also carries some meaning with it. The premise of a microservice-oriented architecture is to enable a large number of people to build large software fast. This is achieved by segregating functionality into small (but not too small) self-contained services that are loosely coupled and communicate via a well-defined API. Hence the API is the only thing that the various teams building the various services must communicate about and agree upon. If everything goes well, teams can iterate at various speeds, make refinements to their service(s), switch to a new hyped language or a new hyped database without impeding anyone depending on the service (remember, the only thing that other services know about is the API).
It turns out that whilst this model sounds appealing in theory — especially to venture-capital-driven companies whose priority number one is growth (rather than pesky concerns such as sustainability or revenue) — putting it into practice is not quite that easy and carries a few technical challenges with it. To be very clear, a microservice-oriented architecture is orders of magnitude more complex than your traditional 3-tier architecture. If you don’t need any of the benefits that a microservice-oriented architecture yields, I’d very strongly advise you to resist the Call Of The Hype and to use whatever suits the business you’re in. But if you do, then Lagom may be of substantial help, especially if you are just about to get started.
As a framework, Lagom makes opinionated design choices when it comes to addressing some of the core issues that need to be tackled when building a microservice-oriented architecture. And that’s fine. It’s a frame-work, it is here to frame your work. If you don’t like it, go ahead and build the 679th home-grown microservice stack out there (I probably should mention that I’m not a very big fan of “not invented here” stacks so long as an existing stack provides an answer to the business problems to solve).
Without further ado, let’s look at what those choices are.
Lagom and “The API”: service descriptors
The one thing all services (or rather, the people building the services) need to communicate about is the definition of the service: what it offers, and how you can use it. Very often that is a REST API, but increasingly (and luckily, should I add) it is a message-based protocol too. This is not service discovery (we’ll come to that in just a moment) and, depending on where you are at, it may not even be very formalized (in many organizations, it is a page somewhere on the company wiki). Yet having a clear, shared, and consistent view of all services is, I think, pretty much the most important thing for a microservice-oriented architecture, since this is what will enable re-use.
When building Lagom-based microservices, the service definition is written in source code. Each Lagom microservice consists of two projects: an API and an implementation. It is the API that declares all the things that a service offers. What I find quite nice here is that the declaration itself is abstracted from the underlying transport technology — eventually calls will be mapped to HTTP requests or WebSocket connections but this is not set in stone (and leaves the door open to switching to HTTP 2.0 if supported by the consumer). There are 3 types of things that a Lagom service may declare as something it provides to others:
- a simple call (think traditional request-response)
- a streamed call (think bidirectional WebSocket)
- a topic (think publish-subscribe)
A Lagom service descriptor looks like this:
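Here is a minimal sketch of a descriptor, along the lines of the canonical example from the Lagom documentation (the exact method name `sayHello` is an assumption):

```scala
import com.lightbend.lagom.scaladsl.api.{Service, ServiceCall}

trait HelloService extends Service {

  // A simple call: one request in, one response out
  def sayHello: ServiceCall[String, String]

  override final def descriptor = {
    import Service._
    // named gives the service its name, withCalls lists the calls it exposes
    named("hello").withCalls(
      call(sayHello)
    )
  }
}
```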
This would translate into a rather traditional request-response implementation where both request and response are of type `String`.
Calls that result in a (potentially bidirectional) flow of messages are modelled using Akka Streams:
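A sketch of such a streamed call, following the clock example from the Lagom documentation (the `ClockService` name and the `/tick/:interval` path are assumptions):

```scala
import akka.NotUsed
import akka.stream.scaladsl.Source
import com.lightbend.lagom.scaladsl.api.{Service, ServiceCall}

trait ClockService extends Service {

  // A streamed call: the response is a Source of messages
  def tick(interval: Int): ServiceCall[String, Source[String, NotUsed]]

  override final def descriptor = {
    import Service._
    named("clock").withCalls(
      // the interval is extracted from the request path
      pathCall("/tick/:interval", tick _)
    )
  }
}
```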
This example is interesting because the signature isn’t obvious, so let’s dissect it.
First, we declare a `tick` method which takes an interval; when calling this service you’re expected to pass the interval as part of the path (say, for an interval of 2 seconds you’d call `/tick/2000`).
Next comes the signature `ServiceCall[String, Source[String, NotUsed]]`. The first type parameter of the `ServiceCall` is the type of the inbound request, which is still very much a non-streaming, one-shot request of type `String`, as in the previous example. The second type parameter is an `akka.stream.scaladsl.Source[String, NotUsed]` and represents a stream of strings. But what on earth is `NotUsed` all about? This is the one thing that really put me off the first time I saw the signature. Don’t worry, you get used to it, like you get used to many things in life that are annoying at first and then eventually fade away as days go by. But still, what does it mean?
Well, who would have known — this has turned into its own dedicated post. Read it! (unless you already know what it means).
Are you back? Great! Personally, I’d have preferred that Lagom use some sort of façade to hide the `NotUsed` part, since Lagom service descriptors don’t need to know about the materialized value of a stream. But oh well.
There’s one more way in which Lagom services can communicate with the outside world: through streaming topics in the good old publish-subscribe fashion. Lagom has built-in support for the most popular message broker out there - Kafka - and declaring a topic is pretty straightforward:
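A sketch of a topic declaration, assuming a `greetingsTopic` method and a topic named `greetings`:

```scala
import com.lightbend.lagom.scaladsl.api.broker.Topic
import com.lightbend.lagom.scaladsl.api.{Service, ServiceCall}

trait HelloService extends Service {

  def sayHello: ServiceCall[String, String]

  // A topic: a stream of messages published to the broker (Kafka)
  def greetingsTopic: Topic[String]

  override final def descriptor = {
    import Service._
    named("hello")
      .withCalls(call(sayHello))
      .withTopics(
        // "greetings" is the topic name as seen by subscribers
        topic("greetings", greetingsTopic)
      )
  }
}
```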
The important bit here is the `withTopics` builder method. We just declare which topics our service will be sharing with the outside world and let Lagom take care of the rest. How a topic is implemented is another piece of the puzzle, and we’ll be looking at it in another article in this series, but I can already tell you now that it is one of the very nice things about Lagom.
A word on compile-time service descriptors
Let’s take a step back to think about what it all means. We have service descriptors that are written in compiled code. Followers of the Church of Loosely Coupled may be very upset by this approach, especially when learning that on a continuous integration service, when all Lagom microservices are compiled using the latest versions of all descriptor definitions, the build may break if a service definition has changed. Followers of the Church of Compile Everything will, on the other hand, be very pleased by this, because now a core behaviour of your entire microservice system is checked automatically by the compiler. What is very important to mention here is that by no means does the existence of service descriptors in Lagom mean that you can only consume Lagom services from other Lagom services. As mentioned, the definitions eventually map to “real life” things such as HTTP requests, WebSockets and friends, making it possible for any other service to interact with a Lagom service without knowing that it is a Lagom service. But inside of the Lagom realm, consuming services becomes very easy.
Lagom and service discovery
Right after knowing what a service is about, one very important aspect is to know where a service is running. We certainly don’t want to hardcode calls to a specific IP address or hostname — the idea is to know the minimal amount of information about a service to get started using it, i.e. its name.
Lagom has a `ServiceLocator` trait which allows you to roll your own service location integration if you need to (also check out the `CircuitBreakingServiceLocator`, which backs calls to services with circuit breakers to prevent cascading failure).
If you’re going to deploy your application using ConductR, there’s not much else you need to do to start consuming services. Remember the service descriptors we talked about earlier on? It turns out that they make consuming other Lagom services pretty easy. There are two things you need to do: tell Lagom that you’d like to use another service, and then consume it.
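The first step can be sketched as follows, assuming your application cake mixes in Lagom’s client components:

```scala
import com.lightbend.lagom.scaladsl.client.LagomServiceClientComponents

// A sketch of the wiring inside the application cake; the trait
// name and surrounding setup will differ in a real application
trait MyApplication extends LagomServiceClientComponents {

  // Ask Lagom for a client implementing the HelloService descriptor
  lazy val helloService: HelloService =
    serviceClient.implement[HelloService]
}
```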
The `implement` macro takes care of creating a service client to access the functionality offered by the service. `HelloService` is the service that we’ve defined earlier on, and therefore consuming a service is reduced to calling a method:
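Invoking the call might then look like this (the `"Alice"` payload is illustrative):

```scala
import scala.concurrent.Future

// The request message goes into invoke; the response comes
// back asynchronously as a Future
val response: Future[String] = helloService.sayHello.invoke("Alice")
```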
What I really like about this design is that it does not try to hide away the fact that we’re going to call a service. You can’t miss that. You can’t miss the `ServiceCall`. You can’t miss the `invoke`. You know for a fact that when you call this, something is going to happen — bits are going to flow over a wire. You’re not going to mistakenly assume that this is a local method call and put it in a recursive function (or so I hope). At the same time, you’re also not having to wrestle with placing an actual HTTP call, deserializing the result, and all of the boilerplate that really isn’t all that interesting.
Lagom and message formats
Lagom provides out-of-the-box support for JSON via the Play JSON library. Whilst JSON is a popular format for RPC-like services and outward-facing APIs, I’d argue that it isn’t quite fit for inter-service communication, given that:
- there’s an overhead in the amount of bytes to transfer, which has a negative impact on latency
- there’s an overhead in CPU work required for de/serialization, and if you deploy your application in a cloud environment where CPU usage drives costs, this adds to the bill
- JSON does not have any built-in notion of versioning. In fact, it has no notion of message evolution whatsoever, and if you don’t roll your own, you might be in trouble (I’ve seen this in a few places already — having as many ways to talk to services as there are services isn’t helping productivity and cooperation). Lagom documents how to go about schema evolution for JSON messages, which is a good start, but in my opinion not enough (it doesn’t prevent “stealth updates” that would negatively affect third-party services that can’t rely on the type-checked messages exposed via the API).
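For context, this is roughly how JSON serialization is wired with Play JSON in Lagom’s Scala API (the `GreetingMessage` type is hypothetical):

```scala
import play.api.libs.json.{Format, Json}

// A hypothetical message type exchanged between services
case class GreetingMessage(message: String)

object GreetingMessage {
  // Lagom picks up an implicit Play JSON Format from the
  // companion object to de/serialize the message
  implicit val format: Format[GreetingMessage] = Json.format[GreetingMessage]
}
```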
There is an issue about this in the Lagom repository, namely about adding support for Avro and a schema registry. Whether the schema registry should be Kafka-only or rather an abstraction thereof is another question, but supporting a schema-based binary format and building in the notion of a schema registry seems like a very good idea, since evolving service versions tends to become a hard problem once there is a sizeable number of services out there (check out this talk to get a better idea of the problem space).
The Lagom team recently announced the development of a code generator based on OpenAPI v2 to generate service stubs from e.g. Swagger specifications. There are also plans to add support for gRPC. And whilst I’m not a big fan of anything that has RPC in its name, I see how being interoperable with as many service protocols and formats as possible helps Lagom.
That’s it for this part. Stay tuned for more!