Heuristics are methods and procedures used to solve problems with efficient use of resources.
Depending on the starting point and the problem to be solved, various methods and procedures addressed in this blog post can help—for example, top-down and bottom-up, divide and conquer, or the separation of concerns.
In contrast to reference architectures, which clearly demonstrate how a specific software architecture should be structured, architecture principles are proven fundamentals that, by themselves, provide no guidance on how they are to be applied in specific situations.
With most of the principles addressed here, two main issues play an important role: reducing the complexity of the architecture and increasing the flexibility (and adaptability) of the architecture.
The top-down approach starts with the problem, successively breaks it down into smaller subproblems, and finally ends up with mini problems that can no longer be broken down and that can be directly solved.
The advantages of this approach are that all components are known and the risk of creating unsuitable results is extremely low. However, results only become visible at a late stage, and misunderstandings manifest themselves in the end product at the close of the project.
In contrast, the bottom-up approach starts with the specific machine and builds additional “abstract machines” on top of it. The developers start with the implementation without full knowledge of all system details. The partial solutions are combined with each other until finally a complete “problem solution machine” is created.
In contrast to the top-down method, results are achieved quickly, and risks are identified at an early stage. On the other hand, partial results can potentially be unsuitable for subsequent steps.
The two approaches aren’t mutually exclusive and can complement each other.
An essential feature of a comprehensible software architecture is the hierarchical decomposition of the system into subsystems or building blocks.
The divide and conquer (Latin, divide et impera) principle is used in many branches of IT and describes a reductionist approach that breaks a task down into ever smaller partial tasks until the complexity of these tasks reaches a manageable level. This principle is also used in numerous algorithms and makes use of the fact that the effort required to solve problems is reduced when they are broken down into smaller subproblems.
Similarities to the top-down design approach are clearly recognizable. A system or a component is broken down into ever smaller, relatively independent components, resulting in a hierarchical (or tree-type) component structure.
This approach can be used to encapsulate single or multiple functions or responsibilities, or to separate different aspects of a problem from one another.
Depending on the algorithm, various approaches are possible for solving the overall problem. For example, the partial solutions can be combined into an overall solution (as in merge sort), or the solution of a single subproblem can directly yield the overall solution (as in binary search).
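As a concrete illustration of divide and conquer, here is a minimal merge sort sketch in Java (the example is ours, not from the original text): the problem is divided until the subproblems can be solved directly, and the partial solutions are then combined.

```java
import java.util.Arrays;

// Divide and conquer illustrated with merge sort: split the problem,
// solve the halves recursively, and combine the partial solutions.
public class MergeSort {

    public static int[] sort(int[] values) {
        // Base case: a "mini problem" that can be solved directly.
        if (values.length <= 1) {
            return values;
        }
        // Divide: break the task into two smaller subproblems.
        int mid = values.length / 2;
        int[] left = sort(Arrays.copyOfRange(values, 0, mid));
        int[] right = sort(Arrays.copyOfRange(values, mid, values.length));
        // Conquer: combine the partial solutions into the overall solution.
        return merge(left, right);
    }

    private static int[] merge(int[] left, int[] right) {
        int[] result = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            result[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) result[k++] = left[i++];
        while (j < right.length) result[k++] = right[j++];
        return result;
    }
}
```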
Decomposition is an important approach for reducing complexity [Sta20]. One of the central principles of decomposition is encapsulation, without which undesired dependencies between individual parts of the system can result. Encapsulate complexity in components and treat these as black boxes. Components should make no assumptions regarding the internal structure of other components. Other important aspects are low coupling and high cohesion, but we’ll go into more detail on these later.
Instead of reinventing the wheel, you should reuse already established and proven structures.
Design iteratively, and determine and evaluate strengths and weaknesses based on a prototype design.
Break down the system into elements that are as independent as possible, and separate responsibilities clearly and understandably.
As Albert Einstein once said, “Make things as simple as possible, but no simpler.” Simplicity has desirable effects: it makes things easier to understand and prevents problems from being hidden by excessive complexity. Simple structures are easier to understand and therefore easier to change, and any dependencies can be more easily identified and removed.
This principle is closely related to the notion of suitability, as a certain degree of complexity can be appropriate in a specific situation. Using complexity appropriately, however, is a matter of experience. In case of doubt, preference should be given to the less complex option.
The “You Aren’t Gonna Need It” (YAGNI) principle is one of the principles of Extreme Programming (XP), a development process model designed around 1996. It counteracts preventive, excessively extensive, but ultimately unnecessary design effort prompted by changing requirements, and it helps a project remain able to act despite high staff turnover.
The “Keep it simple, stupid!” (KISS) principle (meaning “Make it as simple as possible”) has its anecdotal origins in a project to create jet engines, which were to be maintainable with simple tools by average mechanics without any specialist knowledge.
When applying the principle, two pitfalls in particular must be taken into account. On the one hand, there is a risk of addressing an essential, architecturally relevant, but not yet acute requirement too late and thereby causing high change costs. On the other hand, there is a risk of choosing an obvious solution instead of a simple one: a simple solution solves the problem without much effort or complexity, whereas an obvious solution suggests itself mainly because of its familiarity, for example, from a previous project.
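The following small Java sketch (our illustration; all names are hypothetical) contrasts a speculative, “just in case” design with the simple solution YAGNI calls for:

```java
// Speculative version: an interface and factory prepared for locales and
// formats that no current requirement asks for.
interface PriceFormatter {
    String format(long cents);
}

class PriceFormatterFactory {
    // Speculative indirection: the locale parameter is never actually used.
    static PriceFormatter create(String locale) {
        return cents -> String.format("%d.%02d EUR", cents / 100, cents % 100);
    }
}

// YAGNI-conforming version: solve today's problem directly. The abstraction
// can still be introduced once a second format actually becomes necessary.
class Prices {
    static String format(long cents) {
        return String.format("%d.%02d EUR", cents / 100, cents % 100);
    }
}
```

Introducing the abstraction later, when a real second requirement appears, is usually cheaper than maintaining speculative indirection in the meantime.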
The separation of concerns principle states that different aspects of a problem should be separated from one another, and each subproblem should be addressed on its own. As with many other principles, it’s based on the principle of divide and conquer.
Concerns and responsibilities should be addressed at all levels of the design, from individual classes through to complete systems.
The separation of functional and technical elements is particularly important and should be a fundamental objective. Doing so ensures that the functional abstraction is separated from the specific technical implementation and allows both aspects to be further developed independently of one another (or makes it easier to replace and reuse individual program elements). An additional advantage is increased quality due to improved traceability of changes and their impacts.
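A minimal sketch of this separation in Java (our illustration; the names are hypothetical): the functional domain logic depends only on an abstraction, while the technical persistence implementation can be developed and replaced independently.

```java
// Functional (domain) element: knows nothing about persistence technology.
class Order {
    private final String id;
    Order(String id) { this.id = id; }
    String id() { return id; }
}

// Technical concern hidden behind an interface.
interface OrderRepository {
    void save(Order order);
}

// Functional logic, wired to the technical side only via the interface.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    void placeOrder(Order order) {
        // The concrete implementation (SQL, file, in-memory) can be
        // exchanged without touching the functional code.
        repository.save(order);
    }
}
```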
The modularity of a system determines the extent to which it’s broken down and encapsulated in self-contained building blocks (modules). The separation of concerns principle can be used in conjunction with the principle of information hiding to implement the modularity principle. The modularity principle states that one should aim to use self-contained system building blocks (modules) with simple and stable relationships. The building blocks of a modular system should be black boxes and hide their internal workings from the outside world.
Conceptual integrity arises when recurring, similar problems are answered with recurring, similar solutions. This simplifies the understanding of a system, serves the principle of least surprise, and thereby helps reduce complexity.
If no emphasis is placed on conceptual integrity when designing an architecture, the result is extensive solutions that are difficult to understand, more complex than necessary, and, as a result, difficult to work with. Take shuffling a deck of playing cards as an example. There are many different ways to shuffle a deck successfully: overhand shuffling, injog shuffling, pack shuffling, riffle shuffling, Indian shuffling, or shuffling all the cards on the table. All of these options achieve the same goal. If we use a different technique every time we shuffle, we might impress our fellow players, but we won’t shuffle the cards any better, and we’ve made the effort to learn each of these shuffling techniques. If we want to teach new players how to shuffle, it’s easier to limit ourselves to one technique.
On the other hand, an excessive pursuit of integrity can lead to excessive reuse and application of solutions to problems that differ in detail or require entirely different solutions. For example, if we had playing pieces instead of playing cards, only one of the preceding techniques would be applicable.
Nobody is perfect, and errors happen. This insight is fundamental in software development and is reflected in many central software development approaches, such as test-driven development (TDD) and agile methods. Scrum, for example, provides for continuous improvement through sprint retrospectives and enables the team to independently identify and solve problems. In the process, the team learns about its own strengths and weaknesses and how to deal with them, which makes it robust and resilient. The design of a software architecture also benefits from such an approach: for example, you can regularly check the built architecture against its goals and thus discover possible problems in the design.
Even when an application is running, errors constantly occur. Whether it’s a hard drive failure, a network failure, or an application crash, Murphy’s Law applies: “Whatever can go wrong, will go wrong.” Expecting errors and being able to react to important errors is therefore a proven principle.
The robustness principle, or Postel’s Law, offers concrete guidance for dealing with errors: “Be conservative in what you do, be liberal in what you accept from others.”
If possible, a system should only send simple and correct messages to other communication partners so that the probability of errors is reduced. At the same time, in order to ensure the continuity of functions, a received message should always be accepted as long as the purpose and content of the message are clear. However, this doesn’t mean that messages should not be checked at all; instead, for example, only the fields of a message that are used for further processing should be checked.
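A tolerant reader in this spirit might look as follows in Java (our sketch; the message structure and all names are hypothetical). Only the field that is actually used for further processing is validated; additional or unknown fields are deliberately tolerated:

```java
import java.util.Map;

class OrderMessageReader {

    // Extracts the order ID from a parsed message. Fields that are not
    // needed for further processing are neither inspected nor rejected.
    String readOrderId(Map<String, Object> message) {
        Object id = message.get("orderId"); // the only field we rely on
        if (!(id instanceof String s) || s.isBlank()) {
            throw new IllegalArgumentException("orderId missing or invalid");
        }
        return s; // extra fields in the message are simply ignored
    }
}
```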
The information hiding principle presented below is still an essential and current principle today. Especially in this context, lean interfaces play an important role.
The principle of information hiding was developed by David Parnas in the early 1970s. As already explained, the complexity of a system should be encapsulated in building blocks. This increases flexibility when it comes to making changes. The building blocks are regarded as black boxes; access to their internal structure is denied and instead takes place via defined interfaces. Only the subset of the total information that is absolutely necessary for the task should be disclosed.
The most important aspects of an architecture are interfaces and the relationships between building blocks. Interfaces form part of the basis of the overall system and enable the relationships between the individual elements in the system. The individual building blocks and subsystems communicate and cooperate with each other via interfaces. Communication with the outside world also takes place via interfaces.
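A minimal Java sketch of information hiding (our illustration; the names are hypothetical): callers see only the lean interface, while the internal data structure remains hidden and replaceable.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TaskQueue {
    // Hidden internal structure: it could be swapped for a priority queue
    // or a database-backed queue without affecting any caller.
    private final Deque<String> tasks = new ArrayDeque<>();

    // The lean interface discloses only what callers absolutely need.
    public void add(String task) {
        tasks.addLast(task);
    }

    public String next() {
        if (tasks.isEmpty()) {
            throw new IllegalStateException("no tasks available");
        }
        return tasks.removeFirst();
    }
}
```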
Putting the key into the lock and opening the door of a new house for the very first time is a wonderful experience. The individual rooms still smell of the final work carried out by the painters, carpenters, and others. Everything is clean, and the kitchen is tidy. A few weeks later, the fresh smell has gone. And if you don’t tidy up, maintain, repair, and throw things out on a regular basis, a new house can very quickly become an unattractive place.
The same principle applies to software and its architecture. Software is normally continuously enhanced. If you don’t tidy up regularly and remove rough edges during the process, additional features created as a result of time pressure and bug fixing won’t be integrated properly into the underlying architecture, and even the best software architecture will degenerate in a very short time. The costs for further development and renovation of software are often so high that the effort is no longer economically viable. Starting again from scratch then becomes an option that can’t be excluded.
It’s therefore necessary to refactor the software at regular intervals and to carry out a redesign. When defining “refactoring,” Martin Fowler differentiates between refactoring as a noun and refactoring as a verb (i.e., an activity). As a noun, a refactoring is “a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior”; as a verb, to refactor means “to restructure software by applying a series of refactorings without changing its observable behavior.”
Refactoring serves to adapt dependencies so that incremental development is made simpler.
Take the example of a bug fix in a class that accesses another class via multiple dereferences in the format
u.getV().getW().getX().getY().getZ().doSomething()
Such dereferencing chains should be avoided because they create direct dependencies across entire networks of classes. In this case, a possible refactoring approach would be to place a new method getZ() in class U that encapsulates the chain, so that callers no longer depend on the intermediate classes.
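A compact sketch of this refactoring (the class and method names follow the chain above; everything else is our illustration):

```java
class Z { void doSomething() { /* ... */ } }
class Y { private final Z z = new Z(); Z getZ() { return z; } }
class X { private final Y y = new Y(); Y getY() { return y; } }
class W { private final X x = new X(); X getX() { return x; } }
class V { private final W w = new W(); W getW() { return w; } }

class U {
    private final V v = new V();

    // Before the refactoring, callers wrote
    //   u.getV().getW().getX().getY().getZ().doSomething()
    // and were thereby coupled to V, W, X, Y, and Z.
    V getV() { return v; }

    // After the refactoring, U encapsulates the chain, and callers write
    //   u.getZ().doSomething()
    // depending only on U and Z.
    Z getZ() {
        return v.getW().getX().getY().getZ();
    }
}
```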
It is critical that time is regularly spent on refactoring and redesign, and appropriate resources for this must be planned in the overall project calculation.
Editor’s note: This post has been adapted from a section of the book Software Architecture Fundamentals: iSAQB-Compliant Study Guide for the Certified Professional for Software Architecture—Foundation Level Exam by Mahbouba Gharbi, Arne Koschel, Andreas Rausch, and Holger Tiemeyer. Mahbouba is the managing director and chief architect of ITech Progress GmbH. Arne is a professor for distributed information systems at the University of Applied Sciences and Arts in Hanover, Germany, and a board member of the iSAQB. Andreas is the director of the Institute for Software and Systems Engineering at Technische Universität, Clausthal. Holger is a former vice chairman of the iSAQB board.
This post was originally published in 3/2025.