Morphing the Monolith

Microservices can certainly be made to work well for particular types of applications, but are they relevant to the mainstream? Can they replace the traditional architectures of database-driven applications?

Microservice architecture is a type of service-oriented architecture that developed from the concepts of Domain-Driven Design (DDD) and consists of loosely coupled, network-based services.

It was devised as a way of avoiding the difficulties that are faced when extending a conventional ‘monolithic’ business application to deliver new functionality, which would generally incur the pain of re-engineering parts of a large central database that performs many interdependent processes. Instead, we define a set of microservices, each limited to a well-defined scope but comprehensively covering the business processes of a single bounded context. This allows relatively autonomous teams to develop functionality in parallel.

It also means that for any particular microservice, you can change code without knowing anything about the internals of the other cooperating microservices, because you don’t share data structures, database schemata, or other internal representations of objects. You interact strictly through clearly-defined APIs.
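As a minimal sketch of that boundary (the service and field names here are hypothetical, not from any particular system), a provider might keep its internal record format private and publish only a serialised contract, so consumers never see database keys or internal units:

```python
import json
from dataclasses import dataclass

# Internal representation, private to a hypothetical Orders service.
# The service is free to change this without breaking any consumer.
@dataclass
class _OrderRecord:
    order_id: int
    customer_row_id: int   # internal database key, never exposed
    total_pence: int       # stored as an integer to avoid rounding

# The public API contract: the only thing other services may rely on.
def order_to_api(record: _OrderRecord) -> str:
    """Serialise an order into the published JSON contract."""
    return json.dumps({
        "orderId": record.order_id,
        "total": record.total_pence / 100,  # exposed in whole currency units
        "currency": "GBP",
    })

order = _OrderRecord(order_id=42, customer_row_id=9001, total_pence=1250)
payload = json.loads(order_to_api(order))
print(payload["orderId"], payload["total"])  # → 42 12.5
```

Note that the internal `customer_row_id` never appears in the payload: renaming or re-keying that column is now a private matter for the owning team.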

However, by cutting out the ‘monolith’, you must now move data safely between services, abandoning half a century of painfully won expertise in maintaining referential integrity, handling concurrent usage and managing transactions. This brings some substantial challenges when designing a microservices architecture:

  • You can’t easily evolve the microservice APIs as your understanding of the domain improves. It isn’t just a matter of understanding the service itself, but also the wider context.
  • You place faith in the reliability and performance of the network, but must still manage network errors and service interruptions, and devise strategies for such things as resiliency, caching and high latency.
  • Each internal database still needs to be part of your organisation’s auditing strategy. Can you, for example, track changes as part of a fraud investigation?
  • You still need a coordinated approach to defining every datatype that is to be publicly consumed. This can turn out to be extraordinarily complicated.
  • Databases that are used only by a single microservice but hold personal data are still subject to international rules on data privacy, retention and so on.
  • You have to rely on a central service for managing distributed transactions.
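The point about managing network errors, for instance, typically ends up as retry-with-backoff logic wrapped around every inter-service call. A minimal sketch, with a simulated flaky downstream service standing in for a real network call:

```python
import time

def call_with_retries(call, attempts=3, base_delay=0.01):
    """Retry a remote call with exponential backoff; re-raise once exhausted."""
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up: resilience has limits
            time.sleep(base_delay * 2 ** attempt)  # 10 ms, 20 ms, 40 ms, ...

# Simulated flaky downstream service: fails twice, then succeeds.
failures = {"left": 2}
def flaky_call():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("service unavailable")
    return "ok"

print(call_with_retries(flaky_call))  # → ok
```

Even this toy version hints at the design questions the monolith never asked: how many attempts, how long to back off, and what to do when the retries run out.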

Microservices have highlighted the inadequacies of the monolithic approach, but do they avoid the problems or merely kick them down the road a bit? Perhaps a more generally useful approach is to tackle the inflexibility of large relational databases by changing the way we design them. Instead of assuming that relational databases are inherently difficult to change quickly, shouldn’t we find a way of making them easier to evolve? If so, then how? What do you think?

Commentary Competition

Enjoyed the topic? Have a relevant anecdote? Disagree with the author? Leave your two cents on this post in the comments below, and our favourite response will win a $50 Amazon gift card. The competition closes two weeks from the date of publication, and the winner will be announced in the next Simple Talk newsletter.