A recent trend in our industry is the architectural move from a monolithic app to microservices, often accompanied by a blog post explaining how much better everything is afterwards. One such post, by the neobank Monzo, explains that they have reached the crazy number of 1,500 microservices (a ratio of ~10 microservices per engineer) and details one of the many challenges this creates (spoiler: security). In this article, I will try to debunk two supposed "benefits" of microservices: modularity and scalability.
A tentative definition
It's hard to find a proper definition for the term microservice. The most frequent definition I've read is functional: a microservice would be a smaller-than-normal (thus micro) unit of functionality. This definition is ill-formed because we can always go deeper down this rabbit hole: is performing an addition a unit of functionality? Should we have a standalone binary for a simple arithmetic operation? Where do we draw the line?
On r/programming, I came across a more technical definition: a microservice is a badly named encapsulation technique. We already have functions, classes, modules, packages and libraries for encapsulating a service (= a unit of functionality); a microservice would just be another one, with the defining difference that microservices run as standalone binaries and communicate with each other over the network.
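To make the contrast concrete, here is a minimal Go sketch (the names and the port are my own, hypothetical choices) of the same unit of functionality encapsulated twice: once as a plain function, once as a standalone HTTP service.

```go
// The same unit of functionality behind two encapsulation techniques.
package main

import (
	"fmt"
	"log"
	"net/http"
	"strconv"
)

// Encapsulation with a function: the call is type-checked by the
// compiler and costs nanoseconds.
func Add(a, b int) int { return a + b }

// Encapsulation with a "microservice": the same logic now sits behind
// an HTTP endpoint, with operands parsed from untyped strings at runtime.
func addHandler(w http.ResponseWriter, r *http.Request) {
	a, errA := strconv.Atoi(r.URL.Query().Get("a"))
	b, errB := strconv.Atoi(r.URL.Query().Get("b"))
	if errA != nil || errB != nil {
		http.Error(w, "bad operands", http.StatusBadRequest)
		return
	}
	fmt.Fprint(w, Add(a, b))
}

func main() {
	fmt.Println(Add(2, 3)) // in-process call

	// Network call: GET http://localhost:8080/add?a=2&b=3
	http.HandleFunc("/add", addHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```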
Fallacy #1: Microservices lead to cleaner code.
Modularity (or separation of concerns) is an important goal of software engineering, and all mainstream programming languages offer several tools of various granularity to implement it. However, this is a very hard task and we often fail at it, ending up with the dreaded spaghetti codebase.
The main difficulty in making a codebase modular is identifying the dependencies between the parts of your product: this has nothing to do with your code; it is an intrinsic property of the product itself. The technical tools at your disposal don't help you identify dependencies; they only help you implement them. If your codebase has failed to achieve modularity with tools such as functions and packages, it will not magically succeed by adding network layers and binary boundaries: you will simply end up with a spaghetti-over-HTTP codebase.
Meanwhile, your codebase now has to deal with the network and with multiple processes. This creates a whole new set of problems to solve, and of potential bugs to introduce:
- Network failures (and configuration errors) are a reality. The probability that one part of your software is unreachable is now vastly higher.
- Remember your nice local debugger with breakpoints and variable inspection? Forget it; you are back to printf-style debugging.
- SQL transactions? Across service boundaries, you have to reimplement them yourself.
- Communication between your services is no longer handled by the programming language: you have to define and implement your own calling convention (some languages, like Ballerina, acknowledge this and provide dedicated constructs). See the sketch after this list.
- Security (which service is allowed to call which service) is checked by the programming language when everything is in-process (with the private keyword, for example, if classes are your encapsulation technique). This is way harder with microservices: the Monzo article mentioned above shows that pretty clearly.
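To illustrate the failure-handling and calling-convention points, here is a minimal Go sketch (the adder.internal host and the bare-JSON-number wire format are hypothetical assumptions) of what a single cross-service call looks like once it crosses a network boundary, compared with the one-line result := Add(2, 3) it replaces.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// In-process, this whole file collapses to: result := Add(2, 3).
// Across a service boundary, the same call needs a timeout, transport
// error handling, status-code checking and a hand-rolled wire format.
func remoteAdd(ctx context.Context, a, b int) (int, error) {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()

	// "adder.internal" is a placeholder for wherever the service lives.
	url := fmt.Sprintf("http://adder.internal/add?a=%d&b=%d", a, b)
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return 0, err
	}
	resp, err := http.DefaultClient.Do(req) // may fail: DNS, TCP, timeout...
	if err != nil {
		return 0, fmt.Errorf("adder unreachable: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return 0, fmt.Errorf("adder returned %s", resp.Status)
	}
	var result int // our ad-hoc calling convention: a bare JSON number
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return 0, fmt.Errorf("bad response from adder: %w", err)
	}
	return result, nil
}

func main() {
	sum, err := remoteAdd(context.Background(), 2, 3)
	if err != nil {
		fmt.Println("call failed:", err) // the happy path is no longer the only path
		return
	}
	fmt.Println(sum)
}
```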
Fallacy #2: Microservices lead to more efficient code.
There are two aspects of efficiency for software: performance and scalability.
For performance, it is obvious that language-agnostic APIs and network calls are less than ideal: a network round trip plus serialization costs far more than an in-process function call (this might be why the video game industry is still safe from the whole microservice trend).
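As a rough illustration rather than a rigorous benchmark, this Go sketch compares the same operation as an in-process call and as a loopback HTTP round trip; even with everything on one machine, the network version is typically several orders of magnitude slower.

```go
// Save as add_bench_test.go and run with: go test -bench=.
package main

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func Add(a, b int) int { return a + b }

var sink int // keeps the compiler from optimizing the call away

func BenchmarkFunctionCall(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = Add(2, 3)
	}
}

func BenchmarkLoopbackHTTPCall(b *testing.B) {
	// A throwaway in-process test server standing in for a microservice.
	srv := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			io.WriteString(w, "5")
		}))
	defer srv.Close()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		resp, err := http.Get(srv.URL)
		if err != nil {
			b.Fatal(err)
		}
		io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
		resp.Body.Close()
	}
}
```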
But in the web age, all the fuss is about scalability (maybe not for good reasons, but that is a topic for another rant). We generally scale our services horizontally by adding more servers, allocating X servers to our application. Splitting an application into microservices gives you finer-grained allocation; but do we actually need it? I would argue that by having to anticipate the traffic of each microservice specifically, we will face more problems, because one part of the app can no longer compensate for another. With a single application, any part (say, user registration) can use all allocated servers if needed; now it can only scale up to its fixed, smaller share of the server fleet. Welcome to the multiple-points-of-failure architecture.
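A toy capacity model (all numbers hypothetical) makes the argument concrete: a fleet of 10 servers behind a monolith lets any endpoint burst to the whole fleet, while a 5/5 split between two services turns the same spike into dropped traffic on one side and idle servers on the other.

```go
package main

import "fmt"

func main() {
	const perServer = 100.0 // req/s one server can absorb (assumed)

	monolith := 10 * perServer    // registration can burst to the whole fleet
	registration := 5 * perServer // microservice: a fixed slice of the fleet
	spike := 700.0                // a registration-only traffic spike

	fmt.Printf("monolith headroom:     %4.0f req/s -> spike survives: %v\n",
		monolith, spike <= monolith)
	fmt.Printf("microservice headroom: %4.0f req/s -> spike survives: %v\n",
		registration, spike <= registration)
}
```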
Conclusion
It is pretty clear that by using microservices as your level of encapsulation, you are losing a lot of features provided either by your programming language or by your operating system. So why go down this road? The main benefit is independence: it is the only level of encapsulation that allows you to develop, test and deploy services separately. It should be seen as an escape hatch from the comfort and safety of the other encapsulation techniques: the Rust unsafe keyword of service programming.
Like Dijkstra's famous Considered Harmful paper, this article's main point is that microservices are overused and that there are technically better programming constructs for most of their use cases. If you need this independence for a social reason (like keeping one team of engineers independent from another), go for it, but keep in mind that almost all the technical challenges (code modularity, scalability, single point of failure…) will not be magically solved by microservices.