Recap of European Identity & Cloud Conference 2013

The 2013 edition of the European Identity & Cloud Conference has just finished. As always, KuppingerCole Analysts created a great industry conference and I am glad I was part of it this year. To relive the conference you can search for the tag #EIC13 on Twitter.

KuppingerCole manages each time to get all the Identity thought leaders together, which is what makes the conference so valuable. You know you’ll be participating in some of the best conversations on Identity and Cloud related topics when people like Dave Kearns, Doc Searls, Paul Madsen, Kim Cameron, Craig Burton … are present. It’s a clear sign that KuppingerCole has grown into the international source for Identity related topics when you realize that some of these thought leaders are employed by KuppingerCole itself.

Throughout the conference a few topics kept popping up, making them the ‘hot topics’ of 2013. These topics represent what you should keep in mind when dealing with Identity in the coming years:

XACML and SAML are ‘too complicated’

It seems that after the announced death of XACML everyone felt liberated and dared to speak up. Many people find XACML too complicated. Soon SAML joined the club of ‘too complicated’. The source of the complexity was identified as XML, SOAP and satellite standards like WS-Security.

There is a reason protocols like OAuth, which stay far away from XML and family, have gained so many followers so rapidly. REST and JSON have become a ‘sine qua non’ for Internet standards.

There is an ongoing effort to define a REST/JSON profile for XACML. It’s not finished, let alone adopted, so we will have to wait and see what comes of it.
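To give a feel for why REST/JSON is so much more palatable than the XML/SOAP stack, here is a rough sketch in Python of what an authorization request could look like under such a profile. The category and attribute names are my own simplification for illustration, not the final specification:

```python
import json

# A hypothetical XACML-style authorization request in JSON form.
# The shape (Request -> categories -> Attribute lists) mirrors the
# draft profile's idea, but treat the exact names as illustrative.
request = {
    "Request": {
        "AccessSubject": {
            "Attribute": [{"AttributeId": "subject-id", "Value": "alice"}]
        },
        "Resource": {
            "Attribute": [{"AttributeId": "resource-id", "Value": "/invoices/42"}]
        },
        "Action": {
            "Attribute": [{"AttributeId": "action-id", "Value": "read"}]
        },
    }
}

# Compared with the equivalent XML/SOAP exchange, this is a single
# small payload you could simply POST to a PDP endpoint.
payload = json.dumps(request, indent=2)
print(payload)
```

No WS-Security, no envelopes, no satellite standards: that is the whole request.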

That reminds me of a quote from Craig Burton during the conference:

Once a developer is bitten by the bug of simplicity, it’s hard to stop him.

It sheds some light on the (huge) success of OAuth and other Web 2.0 APIs. It also looks like a developer cannot easily be bitten by the bug of complexity: developers must see serious rewards before they are willing to jump into complexity.

OAuth 2.0 has become the de facto standard

Everyone declared OAuth 2.0, and its cousin OpenID Connect, to be the de facto Internet standard for federated authentication.

Why? Because it’s simple: even a mediocre developer who hasn’t seen anything but bad PHP is capable of using it. Try to achieve that with SAML. Of course, that doesn’t mean it’s without problems. OAuth uses bearer tokens that are not well understood by everyone, which leads to some frequently seen security issues in the use of OAuth. On the other hand, given the complexity of SAML, do we really think everyone would use it as it should be used, avoiding security issues? Yes, indeed …
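The core issue with bearer tokens is easy to state: whoever presents the token gets access, and nothing in the request proves the sender is the party the token was issued to. A minimal sketch (the token value and URLs are made up):

```python
# A bearer token is literally that: whoever bears it, wins. The header
# below is all a client needs; nothing in it proves the sender is the
# party the token was issued to.
access_token = "ya29.hypothetical-token-value"

headers = {"Authorization": f"Bearer {access_token}"}

# This is why transport security is non-negotiable: anyone who sees this
# header (a proxy log, a captured request, a pasted curl command) can
# replay it until the token expires.
def is_request_acceptable(url: str) -> bool:
    """Refuse to ever send a bearer token over plain HTTP."""
    return url.startswith("https://")

assert is_request_acceptable("https://api.example.com/me")
assert not is_request_acceptable("http://api.example.com/me")
```

Simple to use, yes; but the simplicity moves the security burden onto details like this, and that is exactly where the often seen mistakes happen.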

API Economy

There was a lot of talk about the ‘API Economy’. There are literally thousands and thousands of publicly available APIs (called “Open APIs”) and orders of magnitude more hidden APIs (called “Dark APIs”) on the web. It has grown so big and pervasive that it has become an ecosystem of its own.

New products and cloud services are being created around this phenomenon. It’s not just about exposing a REST/JSON interface to your data. You need a whole infrastructure: throttling services, authentication, authorization, perhaps even an app store.
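As an illustration of one piece of that infrastructure, here is a minimal token-bucket throttle of the kind an API gateway would keep per API key. This is a sketch under my own assumptions, not any particular product’s implementation:

```python
import time

class TokenBucket:
    """Minimal token-bucket throttle: `rate` requests per second,
    with bursts up to `capacity`. An API gateway would typically
    keep one bucket per API key."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)
results = [bucket.allow() for _ in range(8)]
# The first 5 calls pass on the stored burst; the rest are throttled
# (minus whatever trickles back in during the loop).
print(results)
```

Multiply this by authentication, authorization, metering and billing and you see why a whole product category grew around “just” exposing an API.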

It’s also clear that developers have once more become an important group. There is no use in an Open API if nobody can or is willing to use it. Companies that depend on the use of their Open API suddenly see a whole new type of customer: developers. Having a good Developer API Portal is a key success factor.

Context for AuthN and AuthZ

Many keynotes and presentations referred to the need for authn and authz to become ‘contextual’. It was not entirely clear what was meant by that; nobody could give a clear picture. No idea what kind of technology or new standards it will require. But everyone agreed this is where we should be going 😉

Obviously, the more information we can take into account when performing authn or authz, the better the result will be. Authz decisions that take the present and the past into account, and not just whatever is directly related to the request, can produce a much more precise answer. In theory, that is …

The problem with this is that computers are notoriously bad at anything that is not rule based. Once you move up the chain and start including the context, then the past (heuristics) and finally principles, computers give up pretty fast.

Of course, nothing keeps you from defining more rules that take contextual factors into account. But I would hardly call that ‘contextual’ authz. That’s just plain RuBAC with more PIPs available. It only becomes interesting if the authz engine is smart in itself and can decide, without hard wiring the logic in rules, which elements of the context are relevant and which aren’t. But as I said, computers are absolutely not good at that. They’ll look at us in despair and beg for rules, rules they can easily execute, millions at a time if needed.
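To make the distinction concrete, here is what ‘contextual’ authz typically looks like today: plain rule-based access control, with PIPs merely enriching the request context before hard-wired rules fire. All names and attributes below are invented for illustration:

```python
# "Contextual" authz as it exists today: still plain RuBAC, just with
# more Policy Information Points (PIPs) feeding attributes in.

def pip_time_of_day(ctx):
    ctx["office_hours"] = 9 <= ctx["hour"] < 17

def pip_geo(ctx):
    ctx["trusted_location"] = ctx["country"] in {"BE", "NL", "LU"}

RULES = [
    # Each rule is still something a human hard-wired; the engine does
    # not decide for itself which elements of the context are relevant.
    lambda ctx: ctx["role"] == "admin",
    lambda ctx: ctx["role"] == "employee"
                and ctx["office_hours"] and ctx["trusted_location"],
]

def authorize(ctx):
    for pip in (pip_time_of_day, pip_geo):
        pip(ctx)  # enrich the request context with PIP attributes
    return any(rule(ctx) for rule in RULES)

print(authorize({"role": "employee", "hour": 10, "country": "BE"}))  # True
print(authorize({"role": "employee", "hour": 3, "country": "BE"}))   # False
```

Notice that nothing here is smart: the engine executes exactly the rules it was given, millions at a time if needed, which is precisely my point.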

On the last day there was a presentation on RiskBAC, or Risk Based Access Control. It is situated in the same domain as contextual authz. It’s something that would solve a lot, but I would be surprised to see it anytime soon.

Don’t forget: the first thing computers do with anything we throw at them is turn it into numbers. Numbers they can add and compare. So risks will be turned into numbers using rules we gave to computers, and we all know what happens when we humans forget to include a rule.

Graph Stores for identities

People got all excited about Graph Stores for identity management. Spurred by the interest in NoSQL and the Windows Azure Active Directory Graph, people saw it as a much better way to store identities.

I can only applaud the refocus on relations when dealing with identity. It’s what I have been saying for almost 10 years now: identities are the manifestations of a relationship between two parties. I had some interesting conversations with people at the conference about this and it gave me some new ideas. I plan to pour some of those into a couple of blog articles. Keep an eye on this site.

The graph stores themselves are a rather new topic for me, so I can’t give more details or opinions. I suggest you hop over to that Windows Azure URL and give it a read. Don’t forget that ForgeRock already had a REST/JSON API on top of their directory and IDM components.
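Still, the core idea is simple enough to sketch. An identity graph is just parties connected by typed relationships, and traversal is the operation graph stores are built for. A toy version in Python (no real graph store involved; all names invented):

```python
from collections import defaultdict

# Identities as relationships: each edge says party A stands in some
# named relation to party B. A real graph store exposes the same idea
# behind a query or REST/JSON API.
edges = defaultdict(list)

def relate(a, relation, b):
    edges[a].append((relation, b))

relate("alice", "employee_of", "acme")
relate("alice", "member_of", "sales-team")
relate("sales-team", "part_of", "acme")

def relations_of(party):
    return edges[party]

# Traversal is what graph stores are good at: "through which
# relationships is alice connected to acme?"
def connected(a, b, seen=None):
    seen = seen or set()
    if a in seen:
        return False
    seen.add(a)
    return any(t == b or connected(t, b, seen) for _, t in edges[a])

print(connected("alice", "acme"))  # True
```

Compare this with squeezing the same relationships into the rigid tree of a classic directory, and the excitement becomes understandable.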

Life Management Platforms

Finally, there was an entire separate track on Life Management Platforms. It took me a while to understand what it was all about. Once I found out it was related to Doc Searls’ VRM project, it became clearer.

Since this recap is almost getting longer than the actual conference, I’ll hand the stage to Martin Kuppinger and let him explain Life Management Platforms.

That was the 2013 edition of the European Identity & Cloud Conference for me. It was a great time, and even though I haven’t even gotten home yet, I already intend to be there again next year.

Conceptual, Logical and Physical

In his article “ArchiMate from a data modelling perspective”, Bas van Gils from BiZZdesign talks about the difference between the conceptual, logical and physical levels of abstraction. This distinction is used very often in (enterprise) IT architecture but is also often poorly understood, defined or applied.

Bas refers to the TOGAF/IAF definitions:

TOGAF seems to follow the interpretation close to Capgemini’s IAF where conceptual is about “what”, logical is about “how” and physical is about “with what”. In that case, conceptual/logical appears to map on the architecture level, whereas physical seems to map on the design/ implementation level. All three are somewhat in line but in practice we still see people mix-and-match between abstraction levels.

I am not a fan of the above. It is one of those definitions that tries to explain a concept by using specific words in the hope of evoking a shared emotion. Needless to say, this type of definition is at the heart of many open-ended and often very emotional online discussions.

Conceptual, logical and physical most often relate to the idealization – realization spectrum of abstraction. This spectrum abstracts ‘things’ by removing elements related to the realization of the ‘thing’. Conversely, the spectrum elaborates ‘things’ by adding elements related to a specific realization. You can say that a conceptual model contains fewer elements related to a realization than a logical model. You can also say that a physical model contains more elements related to a realization than a logical model.

In other words, conceptual, logical and physical are relative to each other; they don’t point to a specific abstraction. For that you need to specify exactly what kind of realization elements you want to abstract away at each level of abstraction.

The most commonly used reference model for using these three levels is as follows:

  • Conceptual. All elements related to an implementation with an Information System are abstracted away.
  • Logical. A realization with an Information System is not abstracted away anymore. All elements related to a technical implementation of this Information System are abstracted away.
  • Physical. A technical realization is assumed and not abstracted away anymore.

That is the only way to define the levels conceptual, logical and physical: specify what type of realization-related elements are abstracted away at each level. You can never assume everyone uses the same reference model. You either pick an existing one (e.g. the Zachman Framework) or define your own.

Saying that conceptual is “what”, logical is “how” and physical is “with what” is confusing to say the least. Especially if you know that in the Zachman Framework “how” and “what” are even orthogonal to “conceptual” and “logical”.

It is not easy at first to define a conceptual model without referring to an Information System. For instance, any reference to lists, reports or querying assumes an Information System and is in fact already at the logical level.

A misunderstanding I often hear is that conceptual means (a lot) less detail than logical. That’s not true. A conceptual model can consist of as many models and pages of text as a logical model. In practice conceptual models are often more limited, but I only have to point to the many IT projects that failed due to too little detail in the conceptual model. It’s just wrong.

Smart Meters … but not so secure

In this article Martin Kuppinger from KuppingerCole Analysts discusses a security leak in a device used for controlling heating systems.

It’s shocking but I am not surprised. IT history is riddled with cases of devices, protocols and standards that required solid security but failed. Mostly they failed because people thought they didn’t need experts to build in security. Probably the most common failure in IT security: thinking you don’t need experts.

Who remembers WEP or even S/MIME, PKCS#7, MOSS, PEM, PGP and even XML?

The last link shows that simple sign & encrypt is not a fail-safe solution:

Simple Sign & Encrypt, by itself, is not very secure. Cryptographers know this well, but application programmers and standards authors still tend to put too much trust in simple Sign-and-Encrypt.

The moral of the story is: unless you really are an IT security expert, never ever design security solutions yourself. Stick to well-known solutions, preferably in tested and proven libraries or products. Even then, I strongly encourage you to consult an expert; it’s just too easy to naively apply an otherwise good solution in the wrong way.

Outsourcing Architecture?

The Harvard Business Review Blog Network published a rather interesting article on the reasons behind the failure of the Boeing 787 Dreamliner. One of the main causes seems to be related to how Boeing outsourced work.

Boeing undertook one of the most extensive outsourcing campaigns that it has ever attempted in its history. That decision has received a lot of press coverage, and the common wisdom is coalescing around this as a cause of the problems.

Outsourcing as such is not wrong or risky. Many success stories heavily depend on outsourcing: Amazon outsources delivery, Apple outsources all manufacturing … The key is what you outsource and what you keep internal.

Rather, the issues the plane has been facing have much more to do with Boeing’s decision to treat the design and production of such a radically new and different aircraft as a modular system so early in its development.

In other words, Boeing outsourced part of the architecture of the new plane. That was new for them. So far they had only outsourced the manufacturing of parts after Boeing designed them internally.

if you’re trying to modularize something — particularly if you’re trying to do it across organizational boundaries — you want to be absolutely sure that you know how all the pieces optimally work together, so everyone can just focus on their piece of the puzzle. If you’ve done it too soon and tried to modularize parts of an unsolved puzzle across suppliers, then each time one of those unanticipated problems or interdependencies arises, you have to cross corporate boundaries to make the necessary changes — changes which could dramatically impact the P&L of a supplier.

This equally applies to IT solutions. If you outsource parts of the solution before you have designed the whole, you’ll end up with problems whose solutions cross supplier boundaries, impact P&L of those suppliers and require contract negotiations. To avoid this, the entire solution has to be designed before suppliers are chosen and contracts signed. That includes the design of new components, changes to existing components and, often forgotten, integration with existing (“AS-IS”) components.

Either you outsource this entirely (all of it; you can’t be cheap and outsource only 95% of it) or you first design the whole internally.

In the creation of any truly new product or product category, it is almost invariably a big advantage to start out as integrated as possible. Why? Well, put simply, the more elements of the design that are under your control, the more effectively you’re able to radically change the design of a product — you give your engineers more degrees of freedom. Similarly, being integrated means you don’t have to understand what all the interdependencies are going to be between the components in a product that you haven’t created yet (which, obviously, is pretty hard to do). And, as a result of that, you don’t need to ask suppliers to contract over interconnects that haven’t been created yet, either. Instead, you can put employees together of different disciplines and tell them to solve the problems together. Many of the problems they will encounter would not have been possible to anticipate; but that’s ok, because they’re not under contract to build a component — they’ve been employed to solve a problem. Their primary focus is on what the optimal solution is, and if that means changing multiple elements of the design, then they’re not fighting a whole set of organizational incentives that discourage them from doing it.

For Boeing it is going to be a very costly lesson. But at least they’ll have a chance to learn. Large IT projects often fail for exactly this reason: modularize a complicated problem too soon.

One last element from the article … why did Boeing do this?

They didn’t want to pay full price for the Dreamliner’s development, so, they didn’t — or at least, that’s what they thought. But as Henry Ford warned almost a century earlier: if you need a machine and don’t buy it, then you will ultimately find that you have paid for it and don’t have it.

They wanted to save on the development of the aircraft and thought that by outsourcing the design (the ‘tough problems’) they could keep costs low.

Moral of the story: don’t outsource pieces of a puzzle before the entire puzzle is known (designed or architected).

How many times did you encounter this in IT projects?

Prevent, Detect and Recover

Most organizations spend a lot of money trying to prevent damage to information assets. Typical measures are firewalls, anti-malware, authentication, authorization … But sadly most organizations forget there are three dimensions to proper risk management:

  1. Prevent. Measures to prevent risks from becoming reality. This is where typically most investments are done.
  2. Detect. Detect, after the fact, that something damaging happened to information assets. This is not limited to mere detection but also includes determining the actual impact.
  3. Recover. Undo the damage that was done, ideally as if nothing happened but minimally to a level accepted by the organization. A good detection, specifically of impact, is needed to properly recover.

It is amazing how much money is spent on the first dimension and how little is spent on the other two. Yet good information risk management consists of a good balance between all three dimensions.

  • If you are not good at detecting damage to information assets, how can you know how well your prevention is performing?
  • If you can detect an intrusion but you have no idea what they did, how can you recover?

Prevent, detect and recover is not limited to attacks from hackers; human or technical errors are just as damaging.

Imagine you periodically receive a file from an external party; this file is processed and results in updates to a database. A responsible organization will take typical measures like signing and encrypting the file, verifying its structure using a schema … all aimed at preventing damage. But what if the external party makes an honest error and supplies you with the wrong file?

None of the earlier measures will prevent the damage to your database. Even if you can’t automatically detect this damage (perhaps you’ll have to wait for a phone call from the external party), you can take measures to recover. Database schemas and the updates could be crafted in such a way that you can roll back the processing of the erroneous file.
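One way to craft the schema for that kind of rollback is to tag every update with the id of the batch (file) that caused it. A minimal sketch using SQLite, with invented table and column names:

```python
import sqlite3

# Sketch of a recover-friendly design: every update carries the id of
# the batch (file) that caused it, so an erroneous file can be undone
# later without touching the other updates.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE balance (account TEXT, delta INTEGER, batch_id TEXT)")

def process_file(batch_id, updates):
    db.executemany(
        "INSERT INTO balance VALUES (?, ?, ?)",
        [(acct, delta, batch_id) for acct, delta in updates],
    )

def rollback_batch(batch_id):
    # The phone call came: that file was the wrong one. Undo only that.
    db.execute("DELETE FROM balance WHERE batch_id = ?", (batch_id,))

process_file("B6", [("alice", 100)])
process_file("B7", [("alice", -999)])   # the erroneous file
rollback_batch("B7")

total = db.execute(
    "SELECT SUM(delta) FROM balance WHERE account='alice'"
).fetchone()[0]
print(total)  # 100
```

Storing deltas plus provenance instead of overwriting values is a design decision made purely for the recover dimension; prevention alone would never have suggested it.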

Information risk management should not be limited to prevention, but balance it with detection and recovery. It should also give sufficient attention to risks originating from human or technical errors. In fact, most damage comes from these, not from malicious users or hackers.

The Mathematics of Simplification

People who know me know that I am a huge fan of mathematical foundations for anything related to architecture. We have far too many frameworks, methods and theories that are not scientifically (e.g. mathematically) founded. It is my belief that this holds us, enterprise (IT) architects, back in delivering quality in a predictable and repeatable way. Much of the architecture work I come across can be cataloged under ‘something magical happened there’.

There are examples of mathematics being used to improve our knowledge and the application of that knowledge.

Jason C. Smith has used mathematics to create a representation of design patterns that is independent of the programming language used. It allows you to analyse code or designs and find out which patterns have actually been used (or misused).

Odd Ivar Lindland has used simple set theory to create a basis to discuss quality of conceptual models. Highly recommended if you want to think more formally on the quality of your models.

Monique Snoeck and Guido Dedene have used algebra and formal concept analysis to create a method for conceptual information modeling, finite state machines and their consistency. A good introduction is the article ‘Existence Dependency: The key to semantic integrity between structural and behavioral aspects of object types’.

This is just a sampling of the formalization of Enterprise Architecture. There are many more examples!

The Mathematics of IT Simplification

Recently I came across a whitepaper from Roger Sessions titled ‘The Mathematics of IT Simplification’. In this white paper the author describes a formal method to partition a system so that it is an optimal partition. He offers a mathematical foundation to support his method. It’s really interesting and I would encourage anyone to have a look.

The whitepaper suffers from a serious defect though. It makes a few assumptions, some explicit and some implicit, that are crucial to the conclusions. But some of those assumptions are not as safe or obvious after some scrutiny.

  1. Only two types of complexity are considered: functional and coordination. The author states that only these two aspects of complexity exist or are relevant, and that these aspects are independent of each other. There are no references to existing research that support this assumption, nor is there any attempt made in the article itself to justify it.
  2. The Glass constant. The author needs a way to quantify the increase in complexity when adding new functionality. He uses a statement made by Robert Glass. But how true or applicable is that statement? Even the author himself admits that Glass “may or may not be right about these exact numbers, but they seem a reasonable assumption”.
  3. Coordination complexity is like functional complexity. This is probably the most troublesome assumption. The author builds up a (non-scientific) case for the mathematics of functional complexity. He fails to do this for coordination complexity and simply states that the same mathematics will probably apply.
  4. A final assumption, not explicitly made in the article but nevertheless present, is that adding a new function does not affect the complexity of existing functions. I can imagine adding functions that actually lower the complexity of existing functions. The article in fact is only valid for completely independent functions, which makes it unusable with decomposition or dependent functions. But those are in fact the most common circumstances for doing complexity analysis to justify partitions.
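For what it’s worth, the Glass statement alone already pins down some striking arithmetic: if every 25% increase in functionality doubles complexity, then complexity grows with roughly the 3.11th power of functionality. A quick check (my own derivation from the statement, not taken from the whitepaper):

```python
import math

# Robert Glass's observation, as Sessions uses it: every 25% increase
# in functionality yields a 100% increase (a doubling) of complexity.
# That single claim fixes an exponent: 1.25^x = 2.
exponent = math.log(2) / math.log(1.25)   # ~3.11

def relative_complexity(functions: float) -> float:
    """Complexity of a system with `functions` units of functionality,
    relative to a one-function system, under Glass's assumption."""
    return functions ** exponent

print(round(exponent, 2))                   # 3.11
print(round(relative_complexity(1.25), 2))  # 2.0  (25% more -> double)
print(round(relative_complexity(2), 1))     # 8.6  (twice the functions,
                                            #       far more than twice the pain)
```

The arithmetic is sound; whether the empirical claim it rests on holds is exactly the question the whitepaper never answers.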

Nowhere in the article is there any scientific proof that these assumptions are ‘probably’ true. Arguments in favor of the assumptions lean towards either ‘it is the worst case, so it can only get more precise’ or ‘the assumption feels right, doesn’t it?’. Neither of these is accepted in the scientific world. Scientists know how dangerous it is to base conclusions on assumptions that are not founded in research, be it empirical or theoretical.

Research to quantify complexity, even if just for comparison, is valuable. Therefore I applaud this effort, but I would encourage the author to take it further:

  1. conduct empirical research to gather data that supports the assumptions made;
  2. find similar research that either supports or rebuts your assumptions and conclusions;
  3. and finally the most important, apply for publication in an A-level publication to get quality peer review.

All of the examples I gave in the introduction did these steps. Their conclusions are supported by empirical research, they stand on the shoulders of prior research and their work has been peer reviewed and published in A-level journals. That is the kind of scientific foundation Enterprise Architecture needs.

Standards, picking the least worst

Most people dread standards; they are considered a nuisance at best but are mostly regarded as something specifically crafted to make life miserable. Yet standards are key to a healthy IT organisation. So why do people have such negative feelings about standards? In this article I’ll try to explain what the benefits of standards are and give some possible reasons why they have such a bad reputation.

Perhaps unnecessary, but let’s first answer the question “what is a standard?”. A standard is a rule that instructs people to use a specific technology instead of alternatives, to perform an activity in a certain way, to use a common template for documentation … In other words, it limits personal freedom and forces people to do things, and always do things, in a certain way.

The case for standards is built on the premise that the IT organization as a whole benefits more than the loss that may be incurred in some projects. But how can the IT organization benefit while some projects suffer? The key to explaining this is optimization of investments: investments made by the IT organization in those standard technologies, processes, templates …

Optimization Of Investments

Imagine the following scenario. Five years after an application has been placed in production, some features need to be added. You walk into the IT department and ask the following questions “Who knows what needs to be done, how much would it cost, how long will it take …?”. There are two possible ways this can go.

Option 1: the application was developed with a technology that has hardly been used in recent years. Not a lot of people know what needs to be done, and those who did know can’t remember all the details; it’s hard to budget the change and no one really wants to estimate how long it will take. It may not be that bad in reality, but it’s obvious that at least some additional risk is present.

Option 2: the application was developed with a technology that is still being used for most applications. Given all that experience, most people have a pretty good idea of what needs to be done; it will cost roughly as much and take as long as similar projects from the recent past. It may not be as positive as pictured, but in this case it’s obvious that risk is kept to an acceptable minimum.

These two options show what I mean by “optimization of investments”. By consistently using as small a set of technologies as possible, investments can be optimized: financial resources are spread less thin, and opportunities for gains in experience and knowledge are focused on a small set of technologies …

Similar arguments can be made to show how standardized processes or templates will create benefits for the organization.

Individual Projects Will Suffer

In the introduction I stated that standards may actually make some projects suffer. In fact, it’s an absolute guarantee that some projects will suffer. I often state that I can make any project cheaper by having it not follow a standard.

A project team could be more experienced in a technology that is not your standard (although I suggest you audit your sourcing strategies if this happens), an alternative technology could have a feature that makes life considerably easier … there can be many reasons why a different technology benefits your project more than the standard does.

That is actually the heart of the problem with standards: everyone is regularly confronted with the downside of standards in their own projects but rarely sees the benefits for the IT organization as a whole.

Picking The Least Worst

When you choose technologies to become a standard, you are not looking for the one that is best in a (small) number of cases; you are looking for the technology that is least worst in most cases. Yes, you read that right: it’s about picking the least worst. That may be one of the most common misconceptions about standards and is probably the root cause of the many (often useless) debates about swapping one standard for a better one. If you change a standard, you are also rendering prior investments obsolete, thereby renouncing their benefits. Either the new standard has to be extremely better or keeping the old standard must be awfully bad. In any case, a managed transition is required to safeguard past and future investments.

Changing a standard is not something you want to do in the context of just one project; it’s a choice that affects the entire IT organization in the long term and therefore needs to be made at that level.

It also follows from the above that the best you can do with standards is to apply them as often as you can. The more you apply them, the less room there is for rogue technologies, the more experience the organization gains and the less risk there will be in the long run.

Top 3 Fundamentals Every Architect Should Master

An architect is often seen as a generalist who needs to know a little about everything. Although this is technically true, there are still some concepts every architect should master in every detail. So far I can list the following three, in order of importance (first being most important).

  1. Abstraction
  2. Information – Behavior – Structure
  3. Systems Thinking


Abstraction

Abstraction lies at the core of almost every single method, tactic or model an architect deploys. The vast majority of issues I encounter with architectures are directly related to an insufficient understanding of abstraction. I am not talking about just “leaving out details”, I am talking about a deep understanding of what abstraction really is: generalization-specialization, idealization-realization, composition-decomposition and the large influence it has on everything we do as an architect.

For a good introduction to the subject I recommend Graham Berrisford’s site (“Library” and “Methodology” sections). I deliberately don’t deep-link so you’ll have the benefit of wading through all his material; highly recommended. But realize that even this is just the tip of the iceberg!

Information – Behavior – Structure

Closely related to the third topic (Systems Thinking) is the distinction between information, behavior and structure, a concept that has been known in information analysis since at least the 1970s and lies at the heart of most methods we use today. Structured analysis and design is the mother of most relevant methods and modeling techniques we know today, itself a continuation of general systems thinking and cybernetics. More recently the ArchiMate notation has reintroduced these concepts for the masses.

Again, I am not talking about merely knowing the difference but about the impact it has on our daily work. Applying abstraction to the concepts of information, behavior and structure is one of the key elements needed for true mastery of architecture.

Systems Thinking

Finally, my (so far) last “fundamental” is (general) systems thinking and its children, like cybernetics. This fundamental is probably more relevant for enterprise architects. Solution and application architects more often deal with systems that can be modeled using mechanistic views or simple deterministic or animate systems (see Russell L. Ackoff and Jamshid Gharajedaghi, “On the mismatch between systems and their models”). Nevertheless all architects benefit at least somewhat from a deeper knowledge of systems thinking. Even though I may be haunted for saying this, most SOA architectures fail at least in part because the systems aspect has been neglected.

What do you think, is this list too short? Are there any fundamentals missing? Leave a comment!

What I would Change About ArchiMate

ArchiMate 1.0 has been out for a while and ArchiMate 2.0 is months if not weeks from publication. Drafts for 2.0 have been circulating for a while now. The upcoming 2.0 version doesn’t change a lot in the core language. This new release is mostly marked by two extensions: one for modeling business motivations and requirements and one for modeling the implementation and migration phases.

Most of what I was looking for in 2.0 didn’t make it. Here is a list of what I would have changed in the core language for 2.0.

Define Business and Application Function as structure

It may come as a surprise, but Business Function is not behaviour. Behaviour is the reaction a system has after being triggered by an event. Any behavioural element therefore represents instances that have a start time (when the event triggers), a result and an end time (when the result is produced). A business function is not such an element.

Most if not all meta models, including TOGAF, describe it as a structural element. In ArchiMate it would be a more logical version of a business actor. In other words, it’s an idealization of a business actor.

The exact same reasoning applies to moving Application Function from behaviour to structure. An Application Function is a logical (idealized) version of an application component. TOGAF would call it a Logical Application Component.

Add Application Process as a core element

The business layer has a process element, so why shouldn’t the application layer have one? An application service with its related application process would correspond to a well-known concept in IT: use cases. I would love to see them in ArchiMate!

Add dependency relationship

ArchiMate has tons of relationships, most of which are hardly used, yet it misses one of the more important ones: dependency. ArchiMate tried to avoid it by adding specialized dependencies instead: Used By, Access, Realization …
But there are still places where I would like to use a simple dependency without being forced to specify more detail.

Redesign the technology layer

ArchiMate is built around the distinction between internal and external and between behaviour and structure. This is very visible in the business and application layers, especially with the changes proposed above. The technology layer is an exception. They tried to fit UML elements into this model and did a pretty good job. Yet it feels wrong that the symmetry of the business and application layers does not show in the technology layer.

I even wonder how wrong it is to have a behavioural item (System Software) inherit from a structural item (Node).

Something is not right in that layer and it needs to be fixed.

The Pragmatic Architects Creed … no thank you

I got informed about a new deliverable from the folks at Pragmatic Enterprise Architecture: The Pragmatic Architects Creed. They collected a list of statements about architecture and ask you to sign it to show your support.

Since this felt like a good thing, I headed over to the list to sign it. Luckily I decided to read it before signing. In my opinion, most statements are too generic, some are plain wrong and only a small number qualify for my signature.

Here is the list of statements with my comments inline. See for yourself if you agree or not, leave a comment if you want to share some of your thoughts.

I put the interests of any organisation I work for above the personal or siloed interests of the individual that employed me.

Euhm … perhaps … never? As an architect it is my responsibility to design a system that realizes as much of my client’s wishes as possible while staying within the budget, the time frame and the boundaries set by the environment. If that client is a business unit manager, I should follow only that manager’s wishes. If the manager asks me to try and go around some limitation set by the organization, then I will do so. It is up to the organization to set the boundaries.

If I didn’t do that, I would not be able to build a trust relationship between the client and me. My client would not experience me as someone who is helping him achieve his goals but as someone sent by an outsider to make it difficult to reach his goal.

If the environment, the organization, wants to influence my design to make sure it serves the greater good, they have to do so in advance by setting rules. At a different moment, they can even hire me to do that. This is the role building regulations have in modern societies.

But honestly, as an architect I can’t serve two masters with potentially competing visions.

I can vehemently agree with someone about subject/point B when I have only recently vehemently disagreed with the same person about subject/point A.

I agree with this one. But is this limited to architects? To me this looks like a credo valid for any professional in any industry.

I love to be proved wrong.

Whoever claims he or she loves to be proved wrong, is lying. Nobody likes to be proven wrong. I do like to learn from other people and I am willing to adjust my ideas or statements whenever needed. Being proven wrong is never a good reason to stop learning or become unreasonable.

I relate new things that people do not know to things that people already know.

I agree but again think this is a generic credo for any professional in any industry.

I spend more time understanding a problem domain than determining a solution.

Why? I don’t understand this one. If the problem domain is simple and the solution complex, I’ll spend more time thinking about the solution. I don’t see how you can objectively see this as a credo.

I stand up and give an unpalatable truth when no one else will, even though it may mean I lose my job.

Please do so, that frees up a spot for me 😉

But seriously, depending on the “unpalatable truth”, the context and the players involved, I’ll speak up or stay silent. I do take my responsibility as an architect and I often do tell people things they don’t want to hear. At the same time I am not trying to improve everything around me, nor do I stop people from learning from their mistakes.

I constantly ask myself and others: Why?

Nothing to add, I agree.

I see patterns and structure in everything from traffic congestion to Pop Music.

Given the previous credo … why? Why would it help me, my clients or the profession in general if I see patterns and structure in everything from traffic congestion to Pop Music? As an architect I must have a solid understanding of patterns and structure, those are at the core of the profession, but there is no need to see them in everything around me.

I can see disagreement between people when those people think they are in agreement.

This is a good one, in fact. All too often I see people agree when they are in fact talking about two different things. As the person who often has more expertise in the subject, I see it as my duty to inform them that they actually disagree.

I can abstract any thing or idea to a logical and conceptual level.

A solid understanding of idealization/realization is a must for every architect so I will agree with this one. I do feel a bit uncomfortable with the words “any thing or idea” however.

I never, by admission or omission, lie.

I would have agreed if it wasn’t for the word “never”. You can help people if you sometimes lie to them. Lying should be an exception, however, and must never harm. But as Robin Williams, playing the role of Theodore Roosevelt, said: “Sometimes it’s more noble to tell a small lie than to deliver a painful truth.”

I know the difference between a Model, a Metamodel, and Metametamodel.

Basic knowledge for any architect, I agree.
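As a hedged sketch of what that difference amounts to (the dictionaries below are my own illustration, not any standard’s notation, though they roughly mirror the MOF-style levels): each level is an instance of, and must conform to, the level above it.

```python
# Metametamodel: defines what a metamodel element must look like.
metametamodel = {"Concept": {"must_declare": ["attributes"]}}

# Metamodel: the language's concepts, each an instance of "Concept".
metamodel = {
    "BusinessActor": {"attributes": ["name"]},
    "BusinessProcess": {"attributes": ["name", "triggeredBy"]},
}

# Model: a concrete architecture model, each element typed by the metamodel.
model = [
    {"type": "BusinessActor", "name": "Sales Department"},
    {"type": "BusinessProcess", "name": "Handle Order",
     "triggeredBy": "Order Received"},
]

# Conformance checks: every model element uses a metamodel concept,
# and every metamodel concept declares what the metametamodel requires.
assert all(e["type"] in metamodel for e in model)
assert all("attributes" in c for c in metamodel.values())
```

Knowing which of these three levels you are talking about at any given moment is, indeed, basic equipment for an architect.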

I know the difference between layers of abstraction: Contextual, Conceptual, Logical, Physical.

Is this a more generic version of the one about logical and conceptual level? I think so. I still agree.

I know the difference between Architecture, Design & Construction

One can start a civil war by discussing the difference between “architecture” and “design”. So I challenge the author of this statement to explain the difference to me. Oh, one catch: the difference has to be described in a scientific and objective way. Definitions like “design contains more detail” or “architecture is about managing risk and cost” won’t cut it.