Software is always evolving.
A notable exception is the TeX typesetting system by D. E. Knuth, which was designed with the goal of being perfect. Yet you may find bugs there too. A different story, though.
Measuring complexity in software development
A practical measure of complexity is the development time and effort needed to make a change in software. Sure, this depends on a range of factors, developer skill being one of them.
It is not unusual for a skilled developer to write 500+ lines of high-quality code a day. What is unusual is sustaining such performance. Can it be 10,000 lines a month? Can it be 100,000 lines a year? If you have a team of five skilled developers, will you get a million lines of software in two years? Well, for the last question, it is definitely “no.”
Sure, it is not fair to measure a developer’s productivity by the number of code lines they have written today. But if we consider a whole system evolved by hundreds of developers over a span of years, codebase growth dynamics seem a reasonable metric. And they fall far short of 100% of theoretical performance: in practice, you will likely see some 25% or 15%, or maybe just 5%.
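As a back-of-the-envelope illustration, here is a quick Python sketch of the gap between peak and sustained output. The daily rate, working days, and efficiency shares are assumptions for the example, not measurements:

```python
# Back-of-the-envelope extrapolation; all figures are illustrative.
loc_per_day = 500                        # a skilled developer at peak output
days_per_month = 20
monthly = loc_per_day * days_per_month   # 10,000 lines a month
yearly = monthly * 12                    # 120,000 lines a year
team_ceiling = yearly * 5 * 2            # 5 developers, 2 years

print(f"Theoretical ceiling: {team_ceiling:,} lines")
# Sustained growth is only a fraction of that ceiling.
for share in (0.25, 0.15, 0.05):
    print(f"At {share:.0%} of peak: {int(team_ceiling * share):,} lines")
```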
The law of increasing complexity
The second law of thermodynamics says that the entropy of an isolated system always increases. The same applies to software: this is the law of increasing complexity.
If you only ever add things to a system, its complexity will grow beyond control, despite all your efforts to manage it.
Understanding the growth of complexity
As a simplified example, let’s consider the data model aspect of software. A database with 10 tables is usually not complex yet. But if you extend it to 100 tables, the system becomes far more than 10 times as complex. Complexity is not directly about the number of entities; it is about the number of possible indirect connections between them.
And this metric grows incredibly fast: not just exponentially, not even factorially, but as O(nⁿ). It is really bad:
n | 2ⁿ | n! | nⁿ
3 | 8 | 6 | 27
6 | 64 | 720 | 46,656
9 | 512 | 362,880 | 387,420,489
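To see how quickly these functions diverge, here is a minimal Python snippet that reproduces the table above:

```python
from math import factorial

# Exponential vs. factorial vs. n^n growth.
for n in (3, 6, 9):
    print(f"n={n}: 2^n = {2**n:,}, n! = {factorial(n):,}, n^n = {n**n:,}")
```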
Of course, in practice we are not dealing with the worst-case scenario, and attitude matters too. By applying tests, code reviews, design patterns, refactoring, continuous integration, and all the other good practices, it is possible to evolve a significant amount of software as one piece. This is all useful, but it has limits.
If you take care of your code, you can get to a million lines of software that you can still evolve. If you violate all the good practices, you can get stuck at just 100,000 lines. That is at least an order-of-magnitude difference.
The problem is that with a 10-million-line codebase, things will get slippery anyway, no matter how hard you try to make all the small changes in the right way.
Managing complexity with bounded contexts and layers
Dealing with complexity in software is not a secret or a mystery. Instead of incrementally evolving a system as one piece, it should be split into distinct domains that are mapped to each other: bounded contexts, the key pattern of domain-driven design.
Additionally, a system should be organized in layers, allowing us to work with higher-level concepts as we progress.
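As a minimal sketch of the layering idea (the names and the toy storage are invented for illustration), each layer exposes higher-level concepts to the layer above it:

```python
# Infrastructure layer: knows about storage, nothing about the business.
_DB = {"clients": {"c1": {"id": "c1", "name": "Acme"}}}

def fetch_row(table: str, key: str) -> dict:
    return _DB[table][key]

# Domain layer: turns raw rows into domain concepts.
class Client:
    def __init__(self, client_id: str, name: str):
        self.client_id = client_id
        self.name = name

def load_client(client_id: str) -> Client:
    row = fetch_row("clients", client_id)
    return Client(row["id"], row["name"])

# Application layer: works only in domain terms, never with raw rows.
def client_display_name(client_id: str) -> str:
    return load_client(client_id).name

print(client_display_name("c1"))  # prints: Acme
```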
It’s important to note that this is not just about modularity. If you split a system into connected modules, you won’t get rid of complexity, because the system will remain one piece; it will just be better organized.
The cost of reducing complexity
Reducing complexity comes at a cost. One key aspect is that bounded contexts should overlap, with their elements being mapped instead of directly referenced.
Just introducing a “clients” module and requiring everyone to use it will not reduce complexity. Instead, denormalization is needed to achieve this goal.
In different contexts (or domains), there should be separate models of “clients” that are mapped to the upper-level “client” model. This approach allows each context to evolve independently while encapsulating complexity within the individual context.
You make additional efforts to map the same concept to different domains, but you get encapsulation of complexity as payback. Worth it.
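Here is a minimal Python sketch of such a mapping; the context names, fields, and mapping functions are invented for the example. Each context keeps its own denormalized model of a client and maps it from the upper-level model at its boundary:

```python
from dataclasses import dataclass

# Upper-level model shared across the system (illustrative).
@dataclass
class Client:
    client_id: str
    name: str

# The billing context keeps only the client data it needs,
# deliberately denormalized rather than referencing a shared module.
@dataclass
class BillingClient:
    client_id: str
    billing_name: str
    tax_number: str

# The support context has its own, different view of a client.
@dataclass
class SupportClient:
    client_id: str
    display_name: str
    preferred_language: str

# Explicit mappings at each context boundary: the contexts can now
# evolve independently, and their complexity stays encapsulated.
def to_billing(client: Client, tax_number: str) -> BillingClient:
    return BillingClient(client.client_id, client.name, tax_number)

def to_support(client: Client, language: str = "en") -> SupportClient:
    return SupportClient(client.client_id, client.name, language)

acme = Client("c1", "Acme")
print(to_billing(acme, "TAX-42"))
print(to_support(acme))
```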
So, if you apply reasonable design efforts to split a system into (overlapping) bounded contexts and organize it into layers, your system will remain simple enough to evolve smoothly for decades. Yet this is apparently not what happens in reality.
Microservices — just better modules
Monolithic applications are often considered difficult to evolve, and microservices are thought to be the cure. The idea is simple: split your monolith into a collection of microservices that can be developed, deployed, and scaled independently. However, this approach is not a panacea.
Microservices are essentially just another way to divide a system into modules. While modularity can lead to better organization of a system, it is not a way to encapsulate its complexity.
The pitfall of incremental improvements
Complex software cannot be designed in its entirety upfront; it must evolve incrementally. As a result, we have adopted iterative development processes and focus on delivering small improvements to software on a daily basis or even more frequently. While this approach is essential for managing the inherent complexity of software, it also presents a potential pitfall.
By concentrating solely on incremental changes, we may inadvertently lead our software to unmanageable levels of complexity in the long term. This occurs because the focus on small, immediate improvements leaves little room for essential high-level design refinements.
Consequently, when it becomes necessary to split a significant portion of the software into bounded contexts or layers, there is never a proper moment to do so: such restructuring does not fit into the small-scale changes typical of iterative development. This misalignment is what makes the approach of small incremental changes a pitfall that contributes to the overall growth of system complexity.
Incorporating high-level iterations
To address this issue, one approach for evolving a system could involve incorporating high-level iterations, such as quarterly cycles, in addition to sprints. These longer cycles should focus on the high-level increments needed to keep the complexity of the system design manageable.
While this may seem like a logical solution, what looks logical is not always feasible in practice. It may be a challenging proposition, but integrating high-level iterations into the development process is a worthwhile endeavor for keeping software complexity under control.