The Mathematics of Simplification

People who know me know that I am a huge fan of mathematical foundations for anything related to architecture. We have far too many frameworks, methods and theories that are not scientifically (e.g. mathematically) founded. It is my belief that this holds us, enterprise (IT) architects, back from delivering quality in a predictable and repeatable way. Much of the architecture work I come across can be cataloged under ‘something magical happened there’.


There are, however, examples of mathematics being used to improve our knowledge and the application of that knowledge.

Jason C. Smith has used mathematics to create a representation of design patterns that is independent of the programming language used. It allows you to analyse code or designs and find out which patterns have actually been used (or misused).

Odd Ivar Lindland has used simple set theory to create a basis for discussing the quality of conceptual models. Highly recommended if you want to think more formally about the quality of your models.

Monique Snoeck and Guido Dedene have used algebra and formal concept analysis to create a method for conceptual information modeling with finite state machines, including their consistency. A good introduction is the article ‘Existence Dependency: The key to semantic integrity between structural and behavioral aspects of object types’.

This is just a small sample of the formalization of Enterprise Architecture. There are many more examples!

The Mathematics of IT Simplification

Recently I came across a whitepaper by Roger Sessions titled ‘The Mathematics of IT Simplification’. In this whitepaper the author describes a formal method to partition a system into an optimal partition, and he offers a mathematical foundation to support his method. It’s really interesting and I would encourage anyone to have a look.

The whitepaper suffers from a serious defect, though. It makes a few assumptions, some explicit and some implicit, that are crucial to its conclusions. Some of those assumptions turn out not to be as safe or obvious under scrutiny.

  1. Only two types of complexity are considered: functional and coordination. The author states that only these two aspects of complexity exist or are relevant, and that these aspects are independent of each other. There are no references to existing research that supports this assumption, nor is any attempt made in the article itself to justify it.
  2. The Glass constant. The author needs a way to quantify the increase in complexity when adding new functionality, and he uses a statement made by Robert Glass. But how true or applicable is that statement? Even the author himself states that Glass may or may not be right about these exact numbers, but that they seem a reasonable assumption.
  3. Coordination complexity is like functional complexity. This is probably the most troublesome assumption. The author builds a (non-scientific) case for the mathematics of functional complexity. He fails to do this for coordination complexity and simply states that the same mathematics will probably apply.
  4. A final assumption, not explicitly made in the article but nevertheless present, is that adding a new function does not affect the complexity of existing functions. I can imagine adding functions that actually lower the complexity of existing functions. The article is in fact only valid for completely independent functions, which makes it unusable for decomposition or dependent functions. But those are in fact the most common circumstances for doing complexity analysis to justify partitions.
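To make assumption #2 concrete: if one accepts Glass’s rule of thumb that a 25% increase in functionality doubles complexity, a power-law model follows. The sketch below is my own Python illustration of that reading, not the author’s actual formula, and it shows how steeply complexity grows under the assumption.

```python
import math

# Glass's rule of thumb, as used in the whitepaper: a 25% increase in
# functionality doubles complexity. Modelling complexity as C(F) = F**k,
# the condition C(1.25 * F) = 2 * C(F) gives k = log(2) / log(1.25).
GLASS_EXPONENT = math.log(2) / math.log(1.25)  # ~3.11

def relative_complexity(functions: int) -> float:
    """Complexity of a system with `functions` functions,
    relative to a single-function system (C(1) == 1)."""
    return functions ** GLASS_EXPONENT

# Under this model, doubling the number of functions multiplies
# complexity by 2**3.11, i.e. roughly 8.6 times:
print(relative_complexity(10) / relative_complexity(5))
```

Note that it is the existence of the exponent, not its exact value, that drives the conclusions: any exponent greater than 1 makes complexity grow faster than functionality.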

Nowhere in the article is there any scientific proof that these assumptions are ‘probably’ true. Arguments in favor of the assumptions lean towards either ‘it is a worst case, so it can only get more precise’ or ‘the assumption feels right, doesn’t it?’. Neither of these is accepted in the scientific world. Scientists know how dangerous it is to base conclusions on assumptions that are not founded in research, be it empirical or theoretical.

Research to quantify complexity, even if just for comparison, is valuable. I therefore applaud this effort, but I would encourage the author to take it further:

  1. conduct empirical research to gather data that supports the assumptions made;
  2. find similar research that either supports or rebuts your assumptions and conclusions;
  3. and finally, most importantly, apply for publication in an A-level journal to get quality peer review.

All of the examples I gave in the introduction did these steps. Their conclusions are supported by empirical research, they stand on the shoulders of prior research and their work has been peer reviewed and published in A-level journals. That is the kind of scientific foundation Enterprise Architecture needs.


3 thoughts on “The Mathematics of Simplification”

  1. Bavo:
    Thank you so much for taking the time to go through my White Paper The Mathematics of Simplification. You have made many good points, most of which I agree with and even those I disagree with raise valid issues. So let me go through them.

    Let’s start with your critique of my assumptions.

    “Only two types of complexity are considered: functional and coordination. The author states that only these two aspects of complexity exist or are relevant.” I don’t believe I ever said that these are the only two types of complexity that exist or are relevant. In fact, there are many other measures of complexity that are frequently used, probably the most common being Cyclomatic Complexity. However, these measures of complexity operate at a different level. They are looking at the code complexity, not at the overall architectural complexity. I have proposed a model for architectural complexity. I am not aware of any other models that address that level.

    You state that “There are no references to existing research that supports this assumption nor is there any attempt being in the article itself to justify the assumption.” This is true. To the best of my knowledge, there is no research on architectural complexity at all. This paper should be seen as a hypothesis. I invite researchers to prove that my hypothesis is true or false. At this point, this paper represents the best understanding we have of how architectural complexity works, how it can be measured, and how it can be eliminated.

    But I completely agree on the need for research on this matter. I made this point forcibly in my IASA editorial, “Obama’s Information Technology Priorities [1],” in which I called for research in Complexity Management Methodologies as among the highest priorities for the new Federal CIO. I have frequently given talks at universities where I encourage research in this field. I will be equally supportive of research that either corroborates or refutes my hypothesis.

    I do believe that at least the first half of this assertion is backed up observationally, that is, the assertion that overall complexity increases exponentially with increasing functionality. For more on this, see my web short on The Relationship Between IT Project Size and Failure [2].

    While I agree that Glass’s Constant is just a guess, I think it is a good guess. In any case, as I think I make clear, it is not important exactly what the constant is in the relationship. What is important is where the constant appears: in the exponent of the formula. This means that complexity increases exponentially with functionality (or dependencies). Whether complexity doubles as functionality increases by 25% (as Glass says) or only doubles when functionality increases by 30% has no impact on the underlying analysis.

    In a follow-on blog [3], I go into this equation in more detail, and show how you can adjust it for your own assumptions on the exponential relationship.

    I think the case for coordination complexity is at least as good as that for functional complexity. As we have learned from many SOAs, the more messages a system has, the more complexity it has. And, as before, I believe it is clear how to adjust the equations if you don’t agree that coordination complexity has exactly the same exponent as functional complexity. As before, the precise constant is much less important than where the constant appears (in the exponent).
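To make this concrete, here is a simplified numeric sketch of the partitioning argument. It assumes, as a reading of the model rather than the exact formula from the paper, that functional and coordination complexity are independent power laws with the same exponent, log(2)/log(1.25) ≈ 3.11:

```python
import math

K = math.log(2) / math.log(1.25)  # ~3.11, per Glass's rule of thumb

def complexity(functions: int, connections: int) -> float:
    # Simplified model: functional and coordination complexity are
    # treated as independent and both follow the same power law.
    return functions ** K + connections ** K

# A single system with 12 functions, versus the same functionality
# partitioned into three subsystems of 4 functions, each with 2
# external connections:
monolith = complexity(12, 0)
partitioned = 3 * complexity(4, 2)
print(monolith, partitioned)  # the partitioned system scores far lower
```

Because the exponent sits above 1, many small parts always score lower than one large part in this model; changing the exact exponent changes the magnitude of the difference, not its direction.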

    You say that “A final assumption, even though not explicitly made in the article but nevertheless present, is that adding a new function does not affect complexity of existing functions.” I’m not sure why you are saying this. The fact that we see an exponential relationship between the number of functions and the complexity of the overall system would seem to me strong evidence that the addition of a new function DOES affect the complexity of the existing functions. Otherwise we would expect to see a linear relationship between functionality and complexity.

    Beyond these assumptions, you suggest that I “conduct empirical research” on this topic. I couldn’t agree more. I am a small organization. I don’t have the luxury of a large research budget. However I continue to offer my collaborative services to anybody who would like to explore this area. And, if anybody would like to fund this very important research, I would be delighted to participate.

    Thanks again for your thoughtful review!

    Best wishes,
    Roger Sessions


  2. Oh yes, one more point. You say I should “apply for publication in an A-level publication to get quality peer review.” If you look at page 3 of the paper, you will see that I acknowledge the in-depth reviews of 12 different reviewers, all highly experienced in the field. I think you will find very few “A-level” publications that include that depth of peer review.

    – Roger Sessions

  3. I am happy to see that you agree with me on the need for scientific research. I do understand it requires (a lot of) funding and a willing academic institution. On the other hand, when I search for “measuring complexity” or “comparing complexity” I don’t come up empty-handed either.

    To get back to assumption #4.

    What if you add a function that actually offloads some of the complexity of existing functions in the system? Technically, you are replacing a whole set of functions with new functions, resulting in a new system with one additional function.

    Or perhaps a different way of looking at complexity: if we look at the system from a cybernetic point of view, we see it as a set of variables with various types of loops between them (reinforcing, balancing, …). What if introducing a new function also introduces a new variable with specific feedback loops that actually make the system more stable and predictable? Wouldn’t that lower complexity?

    This last example also shows that there might be (my own unproven hypothesis) at least two types of complexity: behavioural and structural complexity. Perhaps your white paper deals more with structural complexity and less with dynamic modelling (with the variables and feedback loops). Just a thought, though. If this were true, it would raise the question of how the two complexities are related.
