Recap of European Identity & Cloud Conference 2013

The 2013 edition of the European Identity & Cloud Conference has just finished. As always, KuppingerCole Analysts created a great industry conference, and I am glad I was part of it this year. To relive the conference, you can search for the tag #EIC13 on Twitter.

KuppingerCole manages every time to bring the Identity thought leaders together, which is what makes the conference so valuable. You know you’ll be participating in some of the best conversations on Identity and Cloud related topics when people like Dave Kearns, Doc Searls, Paul Madsen, Kim Cameron, Craig Burton … are present. The fact that some of these thought leaders are employed by KuppingerCole itself is a clear sign that the company has grown into the international source for Identity related topics.

Throughout the conference a few topics kept popping up, making them the ‘hot topics’ of 2013. These topics represent what you should keep in mind when dealing with Identity in the coming years:

XACML and SAML are ‘too complicated’

It seems that after the announced death of XACML everyone felt liberated and dared to speak up. Many people find XACML too complicated. Soon SAML joined the club of ‘too complicated’. The source of the complexity was identified as XML, SOAP and satellite standards like WS-Security.

There is a reason protocols like OAuth, which stay far away from XML and its family, have gained so many followers so rapidly. REST and JSON have become a ‘sine qua non’ for Internet standards.

There is an ongoing effort for a REST/JSON profile for XACML. It’s not finished, let alone adopted, so we will have to wait and see how it turns out.

That reminds me of a quote from Craig Burton during the conference:

Once a developer is bitten by the bug of simplicity, it’s hard to stop him.

It sheds some light on the (huge) success of OAuth and other Web 2.0 APIs. It also looks like a developer cannot easily be bitten by the bug of complexity. Developers must see serious rewards before they are willing to jump into complexity.

OAuth 2.0 has become the de-facto standard

Everyone declared OAuth 2.0, and its cousin OpenID Connect, to be the de facto Internet standard for federated authentication.

Why? Because it’s simple; even a mediocre developer who hasn’t seen anything but bad PHP is capable of using it. Try to achieve that with SAML. Of course, that doesn’t mean it’s without problems. OAuth uses bearer tokens, which are not well understood by everyone, and that leads to some frequently seen security issues in OAuth deployments. On the other hand, given the complexity of SAML, do we really think everyone would use it as it should be used, avoiding security issues? Yes, indeed …
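To make the bearer token concern a bit more concrete, here is a minimal sketch (the endpoint, the token value and the use of the Python requests library are my own illustrative assumptions, not anything shown at the conference) of how a protected resource is typically called with OAuth 2.0: whoever holds the token gets in, so it must only ever travel over TLS.

```python
import requests  # third-party HTTP library, assumed to be installed

# A bearer token obtained earlier from an authorization server.
# There is no proof-of-possession: anyone who captures this string
# can replay it, which is why https is non-negotiable.
access_token = "2YotnFZFEjr1zCsicMWpAA"  # illustrative value only

response = requests.get(
    "https://api.example.com/v1/profile",  # hypothetical protected resource
    headers={"Authorization": "Bearer " + access_token},
)
print(response.status_code, response.text)
```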

API Economy

There was a lot of talk about the ‘API Economy’. There are literally thousands and thousands of publicly available APIs (so-called “Open APIs”) and orders of magnitude more hidden APIs (so-called “Dark APIs”) on the web. It has grown so big and pervasive that it has become an ecosystem of its own.

New products and cloud services are being created around this phenomenon. It’s not just about exposing a REST/JSON interface to your data. You need a whole infrastructure: throttling services, authentication, authorization, perhaps even an app store.
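To give a rough idea of the ‘whole infrastructure’ point, below is a minimal, hypothetical sketch of just the throttling part: a sliding-window rate limit per API key, the kind of check an API gateway product would perform before your REST/JSON interface is ever reached. The quota and window are invented numbers.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_CALLS_PER_WINDOW = 100           # illustrative quota per API key

_recent_calls = defaultdict(deque)   # api_key -> timestamps of recent calls

def allow_request(api_key: str) -> bool:
    """Sliding-window rate limit: True if the call may proceed."""
    now = time.time()
    window = _recent_calls[api_key]
    # Drop timestamps that have fallen out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_CALLS_PER_WINDOW:
        return False                  # the caller should get an HTTP 429 back
    window.append(now)
    return True
```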

It’s also clear that developers are once more becoming an important group. There is no use in an Open API if nobody can, or is willing to, use it. Companies that depend on the use of their Open API suddenly see a whole new type of customer: developers. Having a good Developer API Portal is a key success factor.

Context for AuthN and AuthZ

Many keynotes and presentations referred to the need for authn and authz to become ‘contextual’. It was not entirely clear what was meant by that; nobody could give a clear picture. Nobody knows yet what kind of technology or new standards it will require either. But everyone agreed this was where we should be heading 😉

Obviously, the more information we can take into account when performing authn or authz, the better the result will be. Authz decisions that take the present and the past into account, and not just whatever is directly related to the request, can produce a much more precise answer. In theory, that is …

The problem with this is that computers are notoriously bad at anything that is not rule based. Once you move up the chain and start including the context, then the past (heuristics) and finally principles, computers give up pretty fast.

Of course, nothing keeps you from defining more rules that take contextual factors into account. But I would hardly call that ‘contextual’ authz. That’s just plain RuBAC with more PIPs available. It only becomes interesting if the authz engine is smart in itself and can decide, without hard-wiring the logic in rules, which elements of the context are relevant and which aren’t. But as I said, computers are absolutely not good at that. They’ll look at us in despair and beg for rules, rules they can easily execute, millions at a time if needed.
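To illustrate the difference, here is a deliberately simple sketch (the attribute names and values are invented) of what I mean by ‘plain RuBAC with more PIPs’: the context is just extra attributes fed into a rule someone hard-wired, not something the engine weighs on its own.

```python
# Hypothetical attributes that PIPs (Policy Information Points) might supply.
request_attributes = {
    "role": "physician",
    "resource": "patient-record",
    "time_of_day": 14,        # 24h clock, from a time PIP
    "location": "hospital",   # from a network/location PIP
}

def is_authorized(attrs: dict) -> bool:
    """A hard-wired rule that merely consumes contextual attributes."""
    return (
        attrs["role"] == "physician"
        and attrs["resource"] == "patient-record"
        and 8 <= attrs["time_of_day"] <= 18
        and attrs["location"] == "hospital"
    )

print(is_authorized(request_attributes))  # True, but only because we wrote that rule
```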

On the last day there was a presentation on RiskBAC, or Risk Based Access Control. It is situated in the same domain as contextual authz. It’s something that would solve a lot, but I would be surprised to see it anytime soon.

Don’t forget, the first thing computers do with anything we throw at them is turn it into numbers. Numbers they can add and compare. So risks will be turned into numbers using rules we give to computers, and we all know what happens when we, humans, forget to include a rule.
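A toy example of that reduction to numbers (the factors and weights are entirely made up): the resulting ‘risk’ is only as good as the rules a human remembered to define.

```python
# Invented risk factors and weights; anything we forgot simply never scores.
RISK_WEIGHTS = {
    "unknown_device": 40,
    "foreign_ip": 30,
    "outside_office_hours": 20,
}

def risk_score(signals: dict) -> int:
    """Sum the weights of the signals that fired."""
    return sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))

signals = {"unknown_device": True, "foreign_ip": False, "outside_office_hours": True}
score = risk_score(signals)                 # 40 + 20 = 60
print("deny" if score >= 50 else "allow")   # "deny"
```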

Graph Stores for identities

People got all excited by Graph Stores for identity management. Spurred by the interest in NoSQL and Windows Azure Active Directory Graph, people saw them as a much better way to store identities.

I can only applaud the refocus on relations when dealing with identity. It’s what I have been saying for almost 10 years now: identities are the manifestations of a relationship between two parties. I had some interesting conversations with people at the conference about this, and it gave me some new ideas. I plan to pour some of those into a couple of blog articles. Keep an eye on this site.

The graph stores themselves are a rather new topic for me, so I can’t give more details or opinions yet. I suggest you hop over to that Windows Azure URL and give it a read. Don’t forget that ForgeRock already had a REST/JSON API on top of their directory and IDM components.
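Since I see identities as manifestations of relationships, a graph model does feel like a natural fit. Purely as a thought experiment (plain Python, no particular graph store or product implied), this is roughly the shape of such a relationship-centric model:

```python
# Nodes are parties; the edges are the relationships in which identities manifest.
parties = {"alice", "acme-corp", "insurance-co"}

# (party, party, relationship type): the identity lives on the edge.
relationships = [
    ("alice", "acme-corp", "employee"),
    ("alice", "insurance-co", "policy-holder"),
    ("acme-corp", "insurance-co", "customer"),
]

def identities_of(party: str):
    """All relationships, and hence identities, a party participates in."""
    return [(a, b, kind) for a, b, kind in relationships if party in (a, b)]

for edge in identities_of("alice"):
    print(edge)   # alice manifests a different identity in each relationship
```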

Life Management Platforms

Finally, there was an entire separate track on Life Management Platforms. It took me a while to understand what it was all about. Once I found out it was related to the VRM project of Doc Searls, it became clearer.

Since this recap is almost getting longer than the actual conference, I’ll hand the stage to Martin Kuppinger and let him explain Life Management Platforms.

That was the 2013 edition of the European Identity & Cloud Conference for me. It was a great time, and even though I haven’t even gotten home yet, I already intend to be there again next year.

Cloud IT as an Architectural Style

Martin Kuppinger from KuppingerCole, known from the excellent European Identity Conference, wrote a very interesting article on Cloud Computing: “It’s not about the cloud – it’s about Cloud IT”.

But the more you dive into the topic of cloud computing, the more obvious it becomes that this cloudy thing called "cloud" (usually associated with the Internet and things which are provided there) isn’t the key thing. The key to success is that companies understand the value of Cloud IT.

What does this mean? Cloud IT stands for consistently using cloud principles in IT – and in every part of IT, not only for consuming some external services. That includes:

  • well-defined services (SLAs!)
  • consistent service management across all services, regardless of where they are running (and, based on that, consistent approaches to cloud governance)
  • applications which are agnostic of where they run or which hardware resources are available – there may be parameters which limit the ability to run an application everywhere, and the application has to accept the currently available hardware resources, but it should also understand that these resources can change dynamically

Defining everything in IT as services in a consistent manner is a fundamental change and the foundation for a flexible use of cloud services. Once you have made that move, you can decide (based on the parameters of a service) which service provider (internal or external) you will use. Thus, the first step is making your IT "cloud-ready", i.e. moving towards a Cloud IT. Without that, using cloud services will always be somewhat tactical rather than strategic.

On the last day of the 2009 edition of the European Identity Conference I participated in a workshop on Cloud Computing and Identity with Martin. In the workshop I told Martin that for me, an architect, the most interesting aspect of Cloud Computing is not the ability to host your application logic externally, but the renewed and global attention for various architectural patterns.

The underlying current for most of these patterns is a high degree of abstraction and transparency combined with simplicity (not the bad kind, the good kind). In other words: keep it simple, abstract away everything that is not part of your application and don’t care about the environment you are running in (for instance network transparency). The advantages of following these principles are becoming more obvious due to Cloud Computing: scalability, continuity, flexibility, reusability …
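As a trivial sketch of that ‘don’t care about the environment’ idea (the variable names are arbitrary): the application declares what it needs and asks its environment for it, so the same code runs unchanged on a laptop, in the data centre or on a cloud platform.

```python
import os

# The environment (laptop, data centre, cloud platform) decides how to
# provide what the application needs; the application never hard-codes it.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///local-dev.db")
WORKER_COUNT = int(os.environ.get("WORKER_COUNT", "2"))  # may change dynamically

print("Connecting to", DATABASE_URL, "with", WORKER_COUNT, "workers")
```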

Those patterns can equally be applied to classical internal IT. Yet you rarely see this happen, except at the application level. Cloud computing forces you into this way of thinking; traditional IT, however, gives you enough escape hatches. Not least because vendors keep selling solutions that stifle innovation. As a simple example, take the infamous network transparency. It has been demonstrated over and over again in the last three decades to be achievable (see for example the Inferno operating system), yet most commercial solutions still expose the network to you. So many good “inventions”, but so little uptake from vendors.

In conclusion, I can only join Martin in his advice: get your IT cloud-ready, move to a Cloud IT. Even if you will never actually move to the cloud. And more importantly, put pressure on your vendors to force them to innovate!

[edited: corrected some typos and grammar]

Encryption … no, we don’t need that

Kim Cameron recently went to a conference where he heard a cloud computing vendor utter these (judging by the blogosphere, almost legendary) words:

One of the vendors shook me to the core when he said, “If you have the right physical access controls and the right background checks on employees, then you don’t need encryption”.

Kim admitted he almost choked. I can understand him. We are in for some rough times if there are cloud computing vendors out there who think like that.

On the other hand, I would like to take this opportunity to make sure you know that encryption in itself does not mean security. You can apply encryption all over the place, using keys that have a gazillion bits, and still have an insecure, dumb solution.

Any vendor who replies “We use 256 bit AES encryption” when answering the question “How do you secure transmission of data?” is as dumb as the vendor who says “physical access controls and the right background checks on employees make encryption not necessary”.
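To make that concrete, here is a small sketch (using the pycryptodome library and a deliberately bad construction of my own; nothing any particular vendor actually does) that uses 256-bit AES and is still insecure: the key is hard-coded in the source, and ECB mode lets identical plaintext blocks produce identical ciphertext blocks.

```python
from Crypto.Cipher import AES  # pycryptodome, assumed installed; used to show a bad construction

key = b"0" * 32                      # hard-coded 256-bit key: already a problem
cipher = AES.new(key, AES.MODE_ECB)  # ECB mode leaks patterns in the plaintext

block = b"ATTACK AT DAWN!!"          # exactly 16 bytes, one AES block
ciphertext = cipher.encrypt(block + block)

# The two ciphertext halves are identical, revealing the repeated plaintext,
# even though this is, technically, "256 bit AES encryption".
print(ciphertext[:16] == ciphertext[16:])  # True
```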

10 Obstacles and Opportunities for Cloud Computing

My friends at Slashdot pointed me towards this reference to a good paper on cloud computing. This is probably one of the first decent articles I have read about cloud computing. It covers real topics, real questions … instead of the usual marketing gibberish. I am especially pleased they mention obstacles like “data lock-in” and “data confidentiality and auditability”. I wrote about some of these topics before: here and here.

Direct link to the PDF: “Above the Clouds: A Berkeley View of Cloud Computing”.

Online Evictions

In a previous post (Disturbances in the cloud) I described how using services in the cloud (or, more down to earth, on the Internet) introduces more risks than most users imagine. A new example seems to be the (free) AOL Hometown site hosting service. It was shut down on Oct. 31, 2008, leaving all users behind with no access to their own content. Some say they had four weeks’ notice, but the support forums seem to indicate that at least some users never got it. The only “official” notice of the imminent shutdown was a small blog entry.

More information can be found here.

So, how are you doing? Let’s take a common example: online mail services like Live Mail or Google Mail. How many of you have local backups you can access if those services ever shut down or change their terms of use?