Microservices vs Monolith

The age-old debate

The requisite TL;DR

Microservices or monolith? Pick your poison. As the write-up below shows, there is generally no right or wrong answer: it all depends on the project, the team and its skills, and the budget and time constraints. No fear though: it is possible to switch gears midway if you tread lightly. As an extra bonus, I’m also taking a look at a fairly new phenomenon on the systems architecture landscape: the promise and the curse of nanoservices.

It’s all about how to arrange the whole

The monolith-versus-microservices debate has its roots in a time when the names had not yet been coined in software development, when the two had only just emerged as recognisable alternative architectural patterns.

It’s worthwhile setting some anchor points when referring to microservices and monolith architectures for the benefit of further discussion. For microservices, while there are a lot of different definitions out there, I am sticking to the one closest to Martin Fowler’s, from his article of the same name: a platform or application based on microservices is a system of distributed, independent processes communicating with each other over a network or other medium through technology-agnostic protocols. A microservice might represent a business activity with a specific outcome - while desirable, this is not a strict requirement. It may also consist of other underlying services, but it should present itself to its consumers (other services) through a well-defined interface, without imposing on them the demand for knowledge of any of its inner workings.

The monolith, on the other hand, is almost the antithesis of microservices: processing of information is performed centrally, in one place, with all subsystem communication done internally, inside the memory of a single process. Any modification to the processing logic requires a deep understanding of the monolith’s structure and of the framework and language it is implemented in. Although load can be distributed across a group of monoliths, any given member of the group does not “know” of, or communicate in any shape or form with, other instances of itself.

Think of microservices as a net of ball-shaped nodes, each performing a smaller task to fulfil the larger system’s purpose, and talking to each other by passing legible messages through clearly discernible channels. Now take that net and squash it into a lump of “stuff”, making the channels shrink and disappear, with nodes bumping and binding to each other, and some even protruding into their neighbours’ cores - that’s what your typical monolith looks like.

Now that we have the definitions out of the way, I want to emphasise that the main distinction between the two architectural styles essentially comes down to whether the individual components of the complete system interact with each other by means of an internal “wiring” framework or through external services and protocols.
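To make the distinction tangible, below is a toy sketch in Python contrasting the two kinds of “wiring”: the same pricing operation invoked as a plain in-process function call versus an HTTP request to a separate process. The endpoint URL and payload shape are invented for illustration:

```python
import json
from urllib import request

def price_order_locally(order: dict) -> float:
    """Monolith style: the pricing component is just another function call."""
    return sum(item["qty"] * item["unit_price"] for item in order["items"])

def price_order_remotely(order: dict) -> float:
    """Microservice style: the pricing component lives behind a network API."""
    req = request.Request(
        "http://pricing-service.internal/price",   # hypothetical endpoint
        data=json.dumps(order).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=2) as resp:  # the network can fail...
        return json.loads(resp.read())["total"]    # ...and that must be handled
```

Same business logic in both variants, but the second one has to contend with serialisation, timeouts and failure modes the first simply does not have.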

So how do we identify the components that can form microservices, and how large can they be?

While there is no hard-and-fast rule for the size of the components a system can be decomposed into, their common traits are as follows:

  1. Relative independence from the rest of the system: the true value microservices bring with them is decentralisation, both in handling the operational flow and in the ability to develop, deploy and maintain components of the system in isolation from each other (where deployment includes testing, and maintenance encompasses logging and monitoring).

  2. Scalability: each individual component has to be able to scale out under load, otherwise there is a risk of introducing a bottleneck in the system (and scale in when the load is back to normal to reduce the overall cost).

  3. Robustness: components should handle any potential failures in the course of their mutual interaction, to avoid loss of data or inconsistencies in transactions (a sketch of one such pattern follows this list).

  4. Security: interaction with a stand-alone component via external means implies its exposure to all attack vectors that any service endpoint is prone to, so make sure to secure each of them properly.
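To illustrate the robustness trait, here is a minimal sketch of one common building block: calling a peer service with a timeout and bounded, backed-off retries, so that a transient failure does not silently lose data. The endpoint is a hypothetical placeholder, and a real system would also want idempotency on the receiving side to make retries safe:

```python
import time
import urllib.error
from urllib import request

def call_with_retries(url: str, payload: bytes, attempts: int = 3) -> bytes:
    """POST to a peer service, retrying transient failures with backoff."""
    for attempt in range(attempts):
        try:
            req = request.Request(url, data=payload,
                                  headers={"Content-Type": "application/json"})
            with request.urlopen(req, timeout=2) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts - 1:
                raise                    # surface the failure, don't swallow it
            time.sleep(2 ** attempt)     # exponential backoff: 1s, 2s, ...
```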

Looking at just the above requirements makes it clear that taking the microservices architectural path is not easy, and that the reduction in composite complexity comes at the price of increased operational complexity. The overall cost grows with the number of microservices in the system and with the graph of their relationships. Thus there will always be cases where a microservice implements more than one - albeit closely related - function, to offset the penalty of communication delay and the entanglement of interactions, and/or to minimise the attack surface.

The monolith route is not a yellow brick road under all circumstances either, as one still needs to satisfy all the mentioned criteria of a well-designed system except, maybe, the first one - independence - since a monolith is self-contained by nature. It certainly helps that the focus is kept on one single thing, and keeping all structural relationships inside hides the complexity away under a layer of language constructs and libraries - for good and bad, and sometimes the ugly.

Let’s go over some of the arguably most important upsides and downsides of each approach below.

Microservices

…when you start out, you should think about the subsystems you build, and build them as independently of each other as possible. Of course you should only do this if you believe your system is large enough to warrant this. If it’s just you and one of your co-workers building something over the course of a few weeks, it’s entirely possible that you don’t.

Stefan Tilkov, “Don’t start with a monolith!”

The power of microservices is in being small but many, and if you have ever read Lem’s “The Invincible” you know what I mean: each of them is capable of evolving independently of the others, focusing on one simple task at hand. Together they form disposable, interconnected compound structures which collectively solve problems more complex than any single node can tackle. Their weakness lies in the web-like relationships, which can grow so involved that it becomes hard to navigate and keep track of them.

Below is a non-exhaustive list of the main features of microservices that make them so attractive, yet often hard to implement:

  1. Modularisation from onset:

    • Pro: it helps immensely if you are able to clearly establish all individual subsystems and evolve them independently of each other. It will allow your system to remain flexible and help keep each component free of function overload. You might even experiment with using different frameworks and languages for different parts of your system - as long as they honour their contracts with others.

    • Contra: defining boundaries from the very beginning can be hard and time-consuming, especially if the product is in flux. And if the partitioning was not optimal, redefining it later is ten times harder, as from then on you will have to deal not with one codebase but with many - and with the already established relationships among them.

  2. Easy (-ier) refactoring:

    • Pro: you might rightfully expect that refactoring a microservice as a self-contained node becomes easier in itself, as follows from the previous point.

    • Contra: just don’t assume the same if you have a major refactoring at hand involving the inter-modular relationships. Sometimes it isn’t even an initial partitioning error, but simply a consequence of new required functionality not fitting well into the existing design. A change touching many codebases is always more involved.

  3. Fast, independent delivery of individual parts within a larger ecosystem:

    • Pro: this is one of the catch-cries of microservices in the modern era of computing, and deservedly so. Being able to change and update parts of the system while it is running leads to a shorter release cycle, more frequent and less risky deployments, and, in the end, a better user experience.

    • Contra: your complete solution likely needs to be able to run several versions of every service at the same time, so canary deployments are a must - which introduces additional complexity into your overall delivery pipeline. Continuous integration with full-coverage testing is of paramount importance now, too.

  4. More flexible and nimble scaling:

    • Pro: having your system run as a set of narrowly-focused services with lower-than-average resource consumption makes it astonishingly easy to run on the most nimble type of virtual infrastructure available out there - be it containers or any flavour of “cloud functions”. It also becomes a walk in the park to fine-tune scale-out and scale-in policies for various parts of the system, making sure it accommodates the load in the most graceful manner.

    • Contra: logging and monitoring become a challenge - instead of a single service or a handful of them, one is now prospectively forced to deal with a swarm. Review how well your services score on the robustness scale, and invest in a monitoring and logging solution that allows you to pinpoint the cause of (potential) failures with good precision before a problem fully unfolds (a sketch of one building block follows this list).
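As promised, here is a minimal sketch of one building block for taming a swarm: structured logging with a correlation ID, so that a centralised log store can stitch together a single request’s journey across many services. The field names are illustrative rather than any particular vendor’s schema:

```python
import json
import logging
import sys
import uuid

logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")

def log_event(service: str, correlation_id: str, message: str, **fields):
    """Emit one machine-parseable log line tagged with the correlation ID."""
    logging.info(json.dumps({
        "service": service,
        "correlation_id": correlation_id,
        "message": message,
        **fields,
    }))

# The ID is generated once at the system's edge and passed along with
# every downstream call, so all services log the same value.
correlation_id = str(uuid.uuid4())
log_event("orders", correlation_id, "order received", order_id=42)
log_event("billing", correlation_id, "payment authorised", order_id=42)
```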

It might be worthwhile to consider a couple more points in order to evaluate whether the microservices architecture is well suited to your project:

  • Starting with microservices might be detrimental to the project’s viability: there is only a certain budget to get a new product off the ground, and having to invest time and resources in something that might never take off is not a wise strategy. (And note that the opening quote of the ‘Microservices’ chapter was expressed by someone who is a great proponent of the “microservices first” approach.)

  • Microservices usually imply working in different teams separated by service boundaries. With a small team of a few developers, it might make little sense to split the effort in order to work on several things at once, instead of concentrating on delivering the overall solution that actually brings value.

Monolith

If you are actually able to build a well-structured monolith, you probably don’t need microservices in the first place. Which is OK!.. You shouldn’t introduce the complexity of additional distribution into your system if you don’t have a very good reason for doing so.

Stefan Tilkov, “Don’t start with a monolith!”

A monolith is strong in centralisation and completeness, and therein lies its fragility. Everything is straightforward in the beginning, and programming languages are naturally predisposed to building monoliths. All code is in one place, so any change involving several components is easy. As with microservices, things start to get tricky when growth accelerates. Suddenly it becomes hard to move around the programming logic scattered across a multitude of implemented features, revisions get chunky and more complicated, and an instinctive fear of updates and deployments grows as they turn larger and riskier. With microservices, one can mitigate the issue to an extent by growing and scaling development teams - each working on its own subsystem. This measure does not work so well with monoliths.

Let’s have a look at the good and bad aspects of the monolith architecture in some more detail:

  1. Easier to start with:

    • Pro: the monolith architecture is definitely the more traditional paradigm, familiar to a greater number of people than microservices. Programmers are taught to think in subroutines, functions, classes and methods rather than in distributed systems communicating remotely with each other. Not only that, communication across different parts of the program is effortless, fast and (mostly) secure, as all data exchanges occur within the process’s memory with no external connections involved. For a small team of developers on a tight budget, all of that might become the determining factor in the search for an acceptable architectural solution.

    • Contra: easier to start does not necessarily mean “easy to carry on”, though. Much rigour is required to ensure that, over the course of development, the structure remains well-defined, the ever-growing scope of libraries is kept under control, and security patches are applied to them.

  2. Simple to maintain and refactor:

    • Pro: a single codebase means it should be less problematic to perform any complex change involving different parts of the project, provided the structure is easy to navigate. [This argument may seem moot, as microservices can also reside in the same repository - however, mono-repos bring their own set of challenges in the context of microservices, which I am not going to discuss here.]

    • Contra: monoliths tend to grow very quickly over time, and if special attention is not paid to the overall makeup, their composition may become very complex and no longer easy to handle.

  3. Central monitoring:

    • Pro: since all internal logic and relationships are stowed away under the shell of one program, all external monitoring targets only one entity - the endpoints of the monolith itself. All basic health checks can be performed at the load balancer, and the format of the log streams is uniform across the fleet. To trace an individual request, it is often enough to implement a marker (request ID) - see the sketch after this list.

    • Contra: the onus of monitoring the subsystems is on the maintainers of the code, in the form of a “health check” endpoint. It can also be cumbersome to trace an error or an uncaught exception through the programming constructs of a massive codebase.
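Here is a minimal sketch of the request-ID marker mentioned above, expressed as a WSGI middleware; the header and environ key names are conventional but illustrative:

```python
import uuid

class RequestIdMiddleware:
    """Attach a unique ID to every request so all its log lines can be correlated."""

    def __init__(self, app):
        self.app = app  # the monolith's WSGI application

    def __call__(self, environ, start_response):
        # Reuse an ID supplied by an upstream proxy, or mint a fresh one.
        request_id = environ.get("HTTP_X_REQUEST_ID") or str(uuid.uuid4())
        environ["request.id"] = request_id  # downstream code logs this value

        def start_response_with_id(status, headers, exc_info=None):
            headers.append(("X-Request-ID", request_id))  # echo it back
            return start_response(status, headers, exc_info)

        return self.app(environ, start_response_with_id)
```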

An additional argument often brought up against monoliths is that they are relatively heavyweight and thus their scaling can be a taxing task. That is true to the extent that there might be a limit on what sort of container the monolith can fit in, which in turn affects its scaling agility.

Never fear, switch the gear

Can you start with one architecture and gradually transition to another? This question makes sense almost entirely in the context of starting with a monolith and gradually splitting it up into modules to recreate the whole as a collection of microservices - it is hard to justify why one would want to travel in the opposite direction. The transition is certainly doable, but the effort required will depend on how well you have kept the foundation of your monolith, as shown in the previous sections.

As Martin Fowler points out in his MonolithFirst article, there are a few ways to accomplish the monolith-to-microservices transition, with some strategies consistently working out better than others. I list them below in rough order of their likelihood of success:

  1. “Peeling the onion” [and shedding a tear or two in the process :-)]: this might be the path of least resistance, allowing one to continue working on the monolith while gradually transforming it into a more distributed system by peeling its parts off the edges and evolving them into microservices. Example: “The Guardian” website.

  2. “Split and repeat”: rather than trying to accurately identify all subsystems and partition your monolith along their boundaries, try to split it roughly into a couple of large independent pieces and work your way up from there. [A “change pattern” might help with this: what changes together, stays together]. An even better strategy is to start off with two or three coarse-grained services in the first place, which might be a “median way” in terms of effort between the monolith and full-blown microservices implementation.

  3. “Ice Age”: as harsh as it sounds, you might just want to rewrite your monolith from scratch instead of trying to disassemble a Frankenstein’s creature and breathe new life into every part seized. It could be an act of despair after it becomes clear that demand has outgrown a once-solid solution, in the face of unsuccessful attempts to fix the unfixable. Yet it can also be the result of a deliberate decision to choose a “sacrificial architecture” for a product, with the intention of rebuilding it once the idea has proven successful. Over and above that, always bear in mind that your design today for tomorrow’s demand might be inadequate for the day after - quite a few successful companies have had to rethink and redo their product and service offerings a couple of times in the space of one decade.

  4. “Transformer”: congratulations, you’ve done it well - your product is highly modular, and you need only go ahead and set your microbots loose to do their job in a distributed manner with the same quality and precision as when they were all part of a single body. As much as this scenario resonates with most transitioners-to-be, it is unfortunately a highly improbable one, due to the very nature of the monolith: it is usually designed to be a solid, tightly knit piece.

Nanoservices: enter the brave new world

Tom McLaughlin, the founder of ServerlessOps, published a blog post in March 2018 on the rise of “nanoservices”: services that, by his own definition, are “deployable” - containing within themselves all necessary information about their infrastructure deployment; “reusable” - not tied to their current use; and “useful”.

Each nanoservice possesses the knowledge of how and where to:

  • obtain its own input
  • publish results in a format easily consumable by other nanoservices
  • handle all errors it might encounter in the process

In essence, a nanoservice is a domain-specific microservice readily deployable onto one of the popular cloud computing platforms with a clearly defined input/output interface.

A nanoservice, he argues, is more complex than a software library but less so than an average microservice, and the premise of nanoservices is to be able to take a few of them and group and arrange them together to create a usable application. He then goes on to describe his own implementation of a few nanoservices deployed via the AWS Serverless Application Repository (SAR), which cooperatively solve the task of decomposing an AWS Cost and Usage report into chunks ready to be fed into a data analytics platform.

As to why a nanoservice bears less complexity than its “micro” cousin: it is not because it does less or is somehow smaller in size. The reason it is considered simpler is the very virtue of being deployable. It is inherently tied to the medium it is deployed to, which is typically one of the cloud computing environments. By being provisioned into that destined environment, it normally makes heavy use of more than one of the managed services available on the platform, sparing its creators considerable effort in implementation, documentation and integration testing, and removing a lot of maintenance burden from the nanoservice’s users.

Take AWS Lambda - the most-used nanoservice platform on AWS - as an example: AWS will take care of scaling the service in and out, logging, and monitoring. [One still needs to collate the metrics, create alerts and set the triggers to react to them.] The service can use API Gateway or an ELB for its frontend endpoint, SQS as a queuing service for dealing with messages, SNS as a means of subscription and notification, S3 for storage, DynamoDB for caching metadata, and so on and so forth, with all of the mentioned services being fully managed by the cloud provider. One can drop a level deeper and provision a nanoservice as a container on an ECS Fargate cluster - still managed by AWS - with the nanoservice containing all the code and logic necessary to deploy the cluster itself prior to its own provision. You can imagine going all the way down to the EC2 level too, if that fits your representation of a “nanoservice”.
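For a taste of what such a nanoservice’s core might look like, here is a minimal sketch of a hypothetical SQS-triggered Lambda function that publishes its results to S3; the bucket variable and payload shape are invented for the example:

```python
import json
import os

import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = os.environ["OUTPUT_BUCKET"]  # injected at deployment time

def handler(event, context):
    """Process a batch of SQS messages and publish each result to S3."""
    for record in event["Records"]:           # SQS delivers messages in batches
        payload = json.loads(record["body"])
        key = f"processed/{record['messageId']}.json"
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=key,
                      Body=json.dumps(payload).encode("utf-8"))
    # An uncaught exception here signals failure to the platform, and SQS
    # redelivers the batch according to the queue's retry policy.
```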

Does this mean, though, that one can now deploy a bunch of nanoservice solutions, connect them to each other and reap the results of their coordinated execution in a hands-off fashion? Far from it! You still need to know the ins and outs of all the related cloud resources, their footprint during scaling events, their performance tuning and their security implications. You need to become a master of the platform the solution is running on, collect all logs and metrics, and keep a watchful eye on its execution. After all, it is your account the solution is deployed to; you are trusting it with your data and footing the bill for all of its (mis)behaviour.
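As a concrete taste of “self-deployable”, here is a minimal sketch using the AWS Cloud Development Kit (CDK) v2 for Python - construct IDs and the asset directory are illustrative - showing how the Lambda handler above could carry its own infrastructure definition with it:

```python
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_s3 as s3
from aws_cdk import aws_sqs as sqs
from aws_cdk.aws_lambda_event_sources import SqsEventSource
from constructs import Construct

class NanoserviceStack(Stack):
    """Queue in, bucket out: the nanoservice describes its own plumbing."""

    def __init__(self, scope: Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)
        queue = sqs.Queue(self, "InputQueue",
                          visibility_timeout=Duration.seconds(60))
        bucket = s3.Bucket(self, "OutputBucket")
        fn = _lambda.Function(
            self, "Processor",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # the handler shown earlier
            environment={"OUTPUT_BUCKET": bucket.bucket_name},
        )
        fn.add_event_source(SqsEventSource(queue))  # wire the trigger
        bucket.grant_write(fn)                      # least-privilege access

app = App()
NanoserviceStack(app, "NanoserviceStack")
app.synth()
```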

With toolkits like the AWS Cloud Development Kit (CDK) readily available for TypeScript, Python, Java and .NET, it is now easier than ever to start developing “self-deployable” solutions in pursuit of the nanoservices heaven - or hell. Subject to your team’s composition and its current set of skills, it might work out either way, as Tom McLaughlin himself reflects in one of his recent Twitter threads. Nanoservices (and the serverless tools employed to create them) bring along abstractions which shift the complexity of software engineering onto systems engineering instead. If that starts to sound familiar by now, it should. With that in mind, set your sail to the favouring wind, and safe travels. Oh, and if you need a pilot to lay a fairway for your journey, or some old salts to help you weather a storm, please by all means get in touch!

Acknowledgments and credits

I am grateful to my colleagues Ben Boyter and Naveen Nathan, who carefully reviewed this essay before publication and suggested elaborating on a few topics, which has definitely made it a much better piece. My gratitude extends to our Melbourne team leader Kirsty Trask for initiating the discussion in the first place and proof-reading the final version thereafter.

I would also like to thank Looney Labs for kindly allowing me to use the slightly modified images of the "Monolith" and "Evil computer" goal cards from their Star Fluxx board game to light-heartedly illustrate the stand-off between the two architectures.