All monoliths are not created equal. That is, not all monoliths became monoliths the same way, nor do they have the same root causes. With this post, I'll identify the different types of monoliths I've seen. This is useful because the tactics I use to break a monolith into smaller, more manageable pieces differ depending on which category of monolith I'm dealing with. It's worth noting that sometimes elements of these different categories appear in combination.
A monolith is an application that has grown too large to effectively manage. That is, it's expensive to enhance and fix. Changes to monoliths often come with unintended consequences; you fix something and other things break. Monoliths effectively marry you to a technical stack, making it difficult to grow with new technologies as they evolve and mature. Due to the size and complexity of the underlying monolithic code, it's often expensive and time-consuming to bring on new developers (there's so much to learn). The same labor characteristic limits the business in that outsourcing change often isn't an option.
Feature-Bloated Web Application
This type of monolith is specific to web applications. It becomes larger and more complex as new features are added. Moreover, some of those new features weren't considered when the application was originally designed. Consequently, those new features are often inelegantly "tacked on" and don't really conform to the initial design of the application. Often, there's no time or budget to revamp the application design so that new features can be added in ways that are easy to maintain and support. The consequence of this is often technical debt in the form of unwanted coupling.
New features add complexity to all parts of the application. New features impact not only the user interface but also the server-side backing code that supports that interface and the underlying database. Often that server-side code and database are tightly coupled with the user interface and "assume" the business processes currently implemented by the interface. Often, it is assumed that these business processes are static and rarely change. Unfortunately, that is often not the case. Developers react to the additional complexity in predictable ways.
Beware of undocumented developer assumptions. These assumptions are often made for developer convenience; they make the new feature easier to code. I don't blame them; the extremely large feature set is often too difficult to keep in one person's head. Because their assumptions initially produce more streamlined, easier-to-read code, developers believe they are making a positive contribution. Unfortunately, they are also inadvertently creating land mines for other developers who aren't aware of those underlying assumptions and need to change that code for some other feature addition. As most of these assumptions go undocumented or even unidentified, other developers working on that section of code are often unaware of all the consequences of their changes.
Most monoliths don't measure feature usage. As a consequence, features, once implemented, never get removed. It would be logical to remove features that aren't used, or aren't used often, as the complexity of keeping that code around will negatively impact your ability to add new features down the road. Sadly, getting budget for removing features is hard to do in most organizations, as the benefit of doing so is hard to measure and too intangible.
It's worth noting that the design of the underlying database often becomes bloated along with the web application. Due to the large scope of business it supports, the underlying database often becomes an Achilles heel for the organization. Not only is database refactoring much more costly and difficult than refactoring code, it's not uncommon for that database to be used by multiple applications. The implication here is that change to the underlying database often has consequences for more than the monolithic application it was originally created for.
Data Store Strangle
Despite the length of time that databases (relational and non-relational) have existed, database design skills are woefully inadequate at most organizations. As a result, you often get a process-oriented database structure. That is, a database structure that is specific to processes currently implemented by the application that database supports. Usually, the mindset is that the database can be 'refactored' along with code as new features are implemented.
Database design impacts application code. At the risk of stating the obvious, applications have code dedicated to reading from and writing to the database. Hopefully, database code is restricted to a data access layer within the application, but not all developers adopt the practice of separating data access concerns. The readability of that data access code is directly impacted by the quality of the underlying database design. If the database structure is complex to the point that it's not understood, chances are the data access code that uses that database will be just as hard to understand. Furthermore, if the application uses a different domain model than the one implemented in the database, some type of data structure translation ends up in the data access code. The short story is that the database design can greatly increase the size and complexity of data access code.
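To make that concrete, here's a minimal sketch (in Java, with an invented acct_process_step table and column names) of the kind of translation code that piles up when the schema mirrors a process instead of the domain:

```java
// A minimal sketch of the translation that builds up in data access code when the
// schema is process-oriented. Table and column names are invented for illustration.
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountBalanceDao {

    // The domain concept the application actually wants: an account and its balance.
    public record AccountBalance(String accountNumber, BigDecimal balance) {}

    private final Connection connection;

    public AccountBalanceDao(Connection connection) {
        this.connection = connection;
    }

    // The hypothetical schema stores amounts per processing step rather than per
    // account, so the read has to reassemble the domain object from process rows.
    public AccountBalance findBalance(String accountNumber) throws SQLException {
        String sql = "SELECT step_amount FROM acct_process_step "
                   + "WHERE acct_nbr = ? AND step_status = 'POSTED'";
        BigDecimal balance = BigDecimal.ZERO;
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setString(1, accountNumber);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    // Translation logic: sum process-step rows into the domain balance.
                    balance = balance.add(rs.getBigDecimal("step_amount"));
                }
            }
        }
        return new AccountBalance(accountNumber, balance);
    }
}
```

Multiply this by every domain concept the application reads or writes and you can see how the schema's shape leaks into, and enlarges, the code base.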
Refactoring databases is harder and more time-consuming than refactoring code. There are a couple of false assumptions buried in the mindset that the database can simply be refactored along with the code. First, it is assumed that refactoring a database is as easy as refactoring code. Not true. A database refactoring usually must take place while continuing to support existing code. It also must include some type of data conversion. Another way to look at it is that code only has transient state; its state exists only at run time. Database state is persistent. Changes to the database must include converting the data already stored into its new format.
Refactoring databases has more risk than refactoring code. If a code refactoring is discovered to have a defect, backing out that change is usually an option. Database refactoring changes usually don't have this safety net. Yes, you can restore a version of the database taken before the refactoring was implemented, but users will lose any data entered while that refactoring was in place. Unless you code and test a reverse conversion, implementing database refactoring changes commits you (pun definitely intended) past the point of no return.
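As a rough sketch of that conversion burden, here's a hypothetical expand/migrate/contract step (invented customer table and column names, plain JDBC assumed) that splits a single name column into two; notice that converting the data already stored is a required part of the change:

```java
// A rough sketch of why a database refactoring drags a data conversion along with it.
// Table and column names (customer, full_name, first_name, last_name) are invented.
// The expand/migrate/contract sequence keeps existing code working while the
// conversion runs.
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SplitNameMigration {

    public void migrate(Connection connection) throws SQLException {
        try (Statement stmt = connection.createStatement()) {
            // Expand: add the new columns; old code still writes full_name.
            stmt.executeUpdate("ALTER TABLE customer ADD COLUMN first_name VARCHAR(100)");
            stmt.executeUpdate("ALTER TABLE customer ADD COLUMN last_name VARCHAR(100)");

            // Migrate: convert the data already stored into the new format.
            // This step has no equivalent in a pure code refactoring.
            stmt.executeUpdate(
                "UPDATE customer "
              + "SET first_name = SUBSTR(full_name, 1, POSITION(' ' IN full_name) - 1), "
              + "    last_name  = SUBSTR(full_name, POSITION(' ' IN full_name) + 1) "
              + "WHERE full_name IS NOT NULL");

            // Contract: only after every application has moved to the new columns
            // can full_name be dropped -- often weeks or months later.
            // stmt.executeUpdate("ALTER TABLE customer DROP COLUMN full_name");
        }
    }
}
```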
The Over-Engineered Tarball Component
Some developers can't resist over-engineering internal components they write. Often that happens because they are bored and looking for something more interesting to do. I can't fault them for that motivation. Unfortunately, when those mental musings get implemented in applications, additional complexity often results. This contributes to an application becoming a monolith.
I've seen a case where such a component was written to validate cash transfers from account to account. While laymen might look at such a transfer as simple, those of you deeply embedded in banking know differently. The tax ramifications and considerations are incredible depending on how that transfer is recorded. In fact, we had over 100,000 permutations and combinations of transfers that needed to be verified and tested. The component in question utilized the same type of bit-switching algorithm that UNIX uses for file permissions, but with many more dimensions than just user, group, and all. When the developer who wrote it left, nobody could understand the component, nor could they effectively change the rules it applied. Long story short: the application was held hostage by this embedded but critical component.
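To give a feel for the style of encoding I'm describing, here's a rough sketch with invented flag names and a made-up rule; even at this tiny scale you can see how quickly it becomes opaque:

```java
// A rough sketch of the bit-switching style described above, with invented flag names
// and an invented rule. Each transfer attribute gets a bit; a "rule" is a required mask.
// The real component had many more dimensions than this.
public class TransferValidator {

    // Hypothetical flags for illustration only.
    static final int RETIREMENT_SOURCE = 1 << 0;  // 0001
    static final int RETIREMENT_TARGET = 1 << 1;  // 0010
    static final int TAXABLE_EVENT     = 1 << 2;  // 0100
    static final int SAME_OWNER        = 1 << 3;  // 1000

    // Invented rule: transfers between retirement accounts with the same owner
    // must not be flagged as taxable events.
    static final int ROLLOVER_MASK = RETIREMENT_SOURCE | RETIREMENT_TARGET | SAME_OWNER;

    public boolean isValid(int transferFlags) {
        boolean isRollover = (transferFlags & ROLLOVER_MASK) == ROLLOVER_MASK;
        boolean taxable = (transferFlags & TAXABLE_EVENT) != 0;
        // With 100,000+ permutations, masks like this multiply until nobody can
        // reason about which combinations of bits are actually allowed.
        return !(isRollover && taxable);
    }

    public static void main(String[] args) {
        TransferValidator validator = new TransferValidator();
        int flags = RETIREMENT_SOURCE | RETIREMENT_TARGET | SAME_OWNER | TAXABLE_EVENT;
        System.out.println(validator.isValid(flags)); // prints false
    }
}
```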
Overly complex components have a way of creating complexity in the surrounding parts of the application. In my example, we had code to correct the validation verdict in places where it came up with the incorrect answer. This was needed as nobody knew how to change the overly-complex component handling validation.
The business process implemented by the component is often not understood. That is, replacing such a component is difficult because the business rules it implements are no longer understood. It's hard to code a replacement without a specific target.
The Distributed Monolith
Separate services are supposed to increase developer velocity by reducing the size and complexity of code that needs to be changed. We all know that this doesn't always work out. We've all seen situations where feature additions require coordinated changes across multiple components. We've all seen that even though these services are theoretically decoupled and should be able to be changed, enhanced, and deployed independently, this doesn't seem to happen much of the time. The reason is that these services are not truly decoupled. Let me explain.
Developers have a desire to consolidate code. We've incented this in developers for years by promoting DRY (Don't Repeat Yourself). Carrying this idea forward, developers consolidate repeated code whenever they find it. Invariably, business code gets consolidated into common libraries. It starts with implementing a service contract in code, making a library out of that, and sharing it across the publishing service and all consumers. It often progresses from there to common business code, usually code that manipulates common business data, which gets published in shared libraries that are listed as dependencies for multiple services.
Developers are often quite proud of the code consolidation. Unfortunately, whenever that common code changes, it forces a coordinated redeployment of all consumers along with the producer. In this world, there's no such thing as a non-breaking change. In essence, DRY, if used for business logic including service contracts, increases coupling between services. If that common code is purely technical and services are completely free to decide when they upgrade to new versions, then there's no coupling issue.
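Here's a minimal sketch of that shared-contract-library pattern (invented class and field names); both the producer and every consumer compile against this one class:

```java
// A minimal sketch of a shared service contract. This class lives in a "common" jar
// that the customer service and every consumer list as a build dependency.
// Names are invented for illustration.
package com.example.common.contract;

import java.time.LocalDate;

// Every attribute the producer returns is compiled into every consumer, whether the
// consumer uses it or not. Rename or remove any field and all consumers must be
// rebuilt and redeployed along with the producer.
public class CustomerResponse {
    private String customerId;
    private String firstName;
    private String lastName;
    private LocalDate dateOfBirth;
    private String taxId;
    // ...dozens more attributes in a realistic contract

    public String getCustomerId() { return customerId; }
    public void setCustomerId(String customerId) { this.customerId = customerId; }
    // getters and setters for the remaining attributes omitted for brevity
}
```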
Distributed monoliths are caused by coupling between services that forces change across multiple services. In these cases, you're not getting the benefit of breaking the world up into smaller services. In fact, just the opposite. You get all the disadvantages of a highly coupled monolith along with the additional overhead of managing a larger number of services. In other words, you forgo all advantages of implementing microservices and pay more for the privilege.
Separate common business functionality into its own service instead. Deploy it once. Call it from whatever services need it. Maintain it in only one place.
Don't create common code that represents your service contracts. Consumers usually don't need all attributes in the request or response for a REST service call. Let's take the example of a REST call that returns customer information. For each customer, there might be 50 attributes. Of those, a consumer might only need five or six. That consumer's call should not break if new attributes are added or if an attribute the consumer ignores is changed or withdrawn. If that contract is represented by common code, that common code will necessarily list all the attributes the consumer ignores, and such changes will be breaking changes.
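As a sketch of the alternative, here's a hypothetical consumer-owned view of that customer response (Jackson assumed as the JSON library); it maps only the attributes this consumer needs and ignores everything else:

```java
// A minimal sketch of a consumer-owned contract view. Class and field names are
// invented; Jackson is assumed as the JSON library.
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

// Unknown attributes in the producer's response are simply ignored, so the producer
// can add, change, or withdraw attributes this consumer never reads without breaking it.
@JsonIgnoreProperties(ignoreUnknown = true)
public class CustomerSummary {
    public String customerId;
    public String firstName;
    public String lastName;

    public static CustomerSummary fromJson(String json) throws Exception {
        return new ObjectMapper().readValue(json, CustomerSummary.class);
    }
}
```

The consumer owns this class, so only a change to one of the handful of attributes it actually uses forces a change on its side.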
Services should always be able to choose when they upgrade included libraries. Always - no exceptions. To get the benefits of breaking the monolith up into several services, you cannot permit any common code that could force a deployment of a service.
Trailer
In forthcoming blog entries, I'll suggest tactics for breaking each of these types of monoliths into more manageable applications. Thanks for reading.