Recap of European Identity & Cloud Conference 2013

The 2013 edition of the European Identity & Cloud Conference just finished. As always KuppingerCole Analysts has created a great industry conference and I am glad I was part of it this year. To relive the conference you can search for the tag #EIC13 on Twitter.

KuppingerCole manages every time to get all the Identity thought leaders together, which makes the conference so valuable. You know you’ll be participating in some of the best conversations on Identity and Cloud related topics when people like Dave Kearns, Doc Searls, Paul Madsen, Kim Cameron, Craig Burton … are present. It’s a clear sign that KuppingerCole has grown into an international hub for Identity related topics when you realize that some of these thought leaders are employed by KuppingerCole itself.

Throughout the conference a few topics kept popping up making them the ‘hot topics’ of 2013. These topics represent what you should keep in mind when dealing with Identity in the coming years:

XACML and SAML are ‘too complicated’

It seems that after the announced death of XACML everyone felt liberated and dared to speak up. Many people find XACML too complicated. Soon SAML joined the club of ‘too complicated’. The source of the complexity was identified as XML, SOAP and satellite standards like WS-Security.

There is a reason protocols like OAuth, which stay far away from XML and family, have so rapidly gained so many followers. REST and JSON have become a ‘sine qua non’ for Internet standards.

There is an ongoing effort for a REST/JSON profile for XACML. It’s not finished, let alone adopted, so we will have to wait and see what comes of it.
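
To get a feel for why people want this, here is a sketch of what an authorization request could look like under a REST/JSON profile: plain JSON attributes instead of XML with SOAP envelopes. The attribute names below are illustrative and not taken from the (then unfinished) specification.

```python
import json

# A hypothetical JSON authorization request in the spirit of the
# REST/JSON profile of XACML. Attribute names are made up for
# illustration, not copied from the spec.
request = {
    "Request": {
        "AccessSubject": {"Attribute": [
            {"AttributeId": "subject-id", "Value": "alice"}]},
        "Resource": {"Attribute": [
            {"AttributeId": "resource-id", "Value": "/reports/q1"}]},
        "Action": {"Attribute": [
            {"AttributeId": "action-id", "Value": "read"}]},
    }
}

# The serialized payload stays tiny; the XML/SOAP equivalent would
# also drag in namespace declarations and WS-Security headers.
payload = json.dumps(request)
print(len(payload) < 500)  # True
```

Whether such a profile is enough to shed the ‘too complicated’ label remains to be seen, but it shows the direction.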

That reminds me of a quote from Craig Burton during the conference:

Once a developer is bitten by the bug of simplicity, it’s hard to stop him.

It sheds some light on the (huge) success of OAuth and other Web 2.0 API’s. It also looks like a developer cannot be easily bitten by the bug of complexity. Developers must see serious rewards before they are willing to jump into complexity.

OAuth 2.0 has become the de-facto standard

Everyone declared OAuth 2.0, and its cousin OpenID Connect, to be the de facto Internet standard for federated authentication.

Why? Because it’s simple: even a mediocre developer who has seen nothing but bad PHP is capable of using it. Try to achieve that with SAML. Of course, that doesn’t mean it’s without problems. OAuth uses bearer tokens that are not well understood by everyone, which leads to some frequently seen security issues in the use of OAuth. On the other hand, given the complexity of SAML, do we really think everyone would use it as it should be used, avoiding security issues? Yes, indeed …
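
The simplicity is easy to demonstrate: using an access token is literally one HTTP header. The URL and token below are made up for illustration (the token value is the example from the OAuth spec).

```python
import urllib.request

# Calling a protected API with an OAuth 2.0 bearer token: just put
# the token in the Authorization header. URL and token are invented.
token = "2YotnFZFEjr1zCsicMWpAA"
req = urllib.request.Request(
    "https://api.example.com/me",
    headers={"Authorization": f"Bearer {token}"},
)
print(req.get_header("Authorization"))  # Bearer 2YotnFZFEjr1zCsicMWpAA

# The catch: a bearer token is exactly what the name says. Whoever
# holds it can replay it, so it must only ever travel over TLS.
```

Compare that with signing a SAML assertion and it is obvious why developers flocked to OAuth.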

API Economy

A lot of talk about the ‘API Economy’. There are literally thousands upon thousands of publicly available APIs (so-called “Open APIs”) and orders of magnitude more hidden APIs (so-called “Dark APIs”) on the web. It has become so big and pervasive that it forms an ecosystem of its own.

New products and cloud services are being created around this phenomenon. It’s not just about exposing a REST/JSON interface to your data. You need a whole infrastructure: throttling services, authentication, authorization, perhaps even an app store.

It’s also clear that developers have once more become an important group. There is no use in an Open API if nobody can, or is willing to, use it. Companies that depend on the use of their Open API suddenly see a whole new type of customer: developers. Having a good Developer API Portal is a key success factor.

Context for AuthN and AuthZ

Many keynotes and presentations referred to the need for authn and authz to become ‘contextual’. It was not entirely clear what was meant by that; nobody could give a clear picture. No idea what kind of technology or new standards it will require. But everyone agreed this was where we should be going 😉

Obviously, the more information we can take into account when performing authn or authz, the better the result will be. Authz decisions that take the present and the past into account, and not just whatever is directly related to the request, can produce a much more precise answer. In theory, that is …

The problem with this is that computers are notoriously bad at anything that is not rule based. Once you move up the chain and start including the context, next the past (heuristics) and finally principles, computers give up pretty fast.

Of course, nothing keeps you from defining more rules that take contextual factors into account. But I would hardly call that ‘contextual’ authz. That’s just plain RuBAC with more PIPs available. It only becomes interesting if the authz engine is smart in itself and can decide, without hard wiring the logic in rules, which elements of the context are relevant and which aren’t. But as I said, computers are absolutely not good at that. They’ll look at us in despair and beg for rules, rules they can easily execute, millions at a time if needed.
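
To make the distinction concrete, here is a minimal sketch of what I mean by “plain RuBAC with more PIPs”: Policy Information Points feed extra context attributes into the evaluation, but every rule is still hard-wired by a human. All names are invented for illustration.

```python
# Policy Information Points (PIPs): sources of context attributes.
def pip_time_of_day():
    return {"hour": 14}

def pip_geo(user):
    return {"country": "BE"}

RULES = [
    # Each rule is still a hand-written predicate; the engine is not
    # deciding for itself which parts of the context are relevant.
    lambda ctx: ctx["role"] == "manager",
    lambda ctx: 9 <= ctx["hour"] < 17,
    lambda ctx: ctx["country"] in {"BE", "NL"},
]

def authorize(user, role):
    ctx = {"user": user, "role": role}
    ctx.update(pip_time_of_day())
    ctx.update(pip_geo(user))
    return all(rule(ctx) for rule in RULES)

print(authorize("alice", "manager"))  # True
print(authorize("bob", "clerk"))      # False
```

The ‘smart’ contextual engine people talked about would pick the relevant context itself; this one just executes whatever rules we fed it, millions at a time if needed.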

On the last day there was a presentation on RiskBAC, or Risk-Based Access Control. This sits in the same domain as contextual authz. It’s something that would solve a lot, but I would be surprised to see it anytime soon.

Don’t forget: the first thing computers do with anything we throw at them is turn it into numbers. Numbers they can add and compare. So risks will be turned into numbers using rules we gave to computers, and we all know what happens if we, humans, forget to include a rule.
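
Reduced to code, risk-based access control looks something like the sketch below: turn signals into numbers, sum, compare against a threshold. The weights and signal names are invented for illustration.

```python
# Risk scoring as a computer actually does it: rules we wrote turn
# signals into numbers. Weights and signals are invented examples.
RISK_WEIGHTS = {
    "new_device": 30,
    "foreign_ip": 40,
    "odd_hours": 20,
}

def risk_score(signals):
    # Any signal we forgot to list contributes exactly zero risk --
    # the human blind spot the paragraph above warns about.
    return sum(RISK_WEIGHTS.get(s, 0) for s in signals)

print(risk_score(["new_device"]))                # 30
print(risk_score(["new_device", "foreign_ip"]))  # 70
print(risk_score(["stolen_credentials"]))        # 0 -- no rule, no risk
```

That last line is the whole problem in one number: a threat nobody wrote a rule for scores as perfectly safe.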

Graph Stores for identities

People got all excited by Graph Stores for identity management. Spurred by the interest in NoSQL and Windows Azure Active Directory Graph, people saw it as a much better way to store identities.

I can only applaud the refocus on relations when dealing with identity. It’s what I have been saying for almost 10 years now: identities are the manifestations of a relationship between two parties. I had some interesting conversations with people at the conference about this and it gave me some new ideas. I plan to pour some of those into a couple of blog articles. Keep an eye on this site.

The graph stores themselves are a rather new topic for me so I can’t give more details or opinions. I suggest you hop over to that Windows Azure URL and give it a read. Don’t forget that ForgeRock already had a REST/JSON API on top of their directory and IDM components.
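
The appeal of the graph model for my “identity is a relationship” view can be shown in a few lines: parties are nodes, and an identity is an edge between two parties, carrying its own attributes. This is my own toy sketch, not how any particular graph store works.

```python
from collections import defaultdict

# A toy identity graph: parties are nodes, and an "identity" is a
# relationship (edge) between two parties with its own attributes.
edges = defaultdict(list)

def relate(party_a, party_b, attrs):
    edges[party_a].append((party_b, attrs))
    edges[party_b].append((party_a, attrs))

relate("alice", "acme-corp", {"type": "employee", "since": 2010})
relate("alice", "webshop.example", {"type": "customer"})

# Alice has one identity per relationship, not one global identity.
print(len(edges["alice"]))  # 2
```

In a classic directory, Alice would be a single entry with attributes; in the graph view, each relationship is a first-class object.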

Life Management Platforms

Finally, there was an entire separate track on Life Management Platforms. It took me a while to understand what it was all about. Once I found out it was related to the VRM project of Doc Searls, it became clearer.

Since this recap is almost getting longer than the actual conference, I’ll hand the stage to Martin Kuppinger and let him explain Life Management Platforms.

That was the 2013 edition of the European Identity & Cloud Conference for me. It was a great time and even though I haven’t even gotten home yet, I already intend to be there again next year.

Found: Identity Provider Business Case, or not?

I had the pleasure of being part of the Identity community in the “early days”, right before OpenID came into existence, when most people still thought of Microsoft Passport as innovative. One very hot topic was the idea of an “Identity Provider”: a party on the Internet who would be happy to serve claims about me to any relying party, obviously with the user’s consent and control. Anyone remember the expression “user centric identity”?

There were some unsolved issues:

  • How would an Identity Provider make money to pay for the operational costs?
  • How would relying parties know which Identity Provider to trust?

Especially the second issue was hard. Some relying parties just want to be able to recognize recurring visitors and are happy with a couple of standard attributes. For them, identity has low value. Well-known examples are the myriad of forum sites.

Other relying parties, however, want more: they want identities with verified attributes, identities they can trust. For them, identities have value and untrusted identities come with a business risk. An extreme example would be a financial institution, which would, for these reasons, typically never outsource the Identity Provider role.

A couple of days ago I came across an interesting article about Facebook offering their commenting system to other sites, including authentication through Facebook. What caught my eye was an argument showing that Facebook serves as an almost perfect Identity Provider in some sense:

This offers publishers a number of benefits. They get more links to their site from inside the net’s most popular website. A lot of people are “registered” to comment on their sites. And, they have a system designed to discourage vitriol because it’s easy for the site owner to ban a user and tough for a user to create a new identity.

Especially that last part is interesting: relying parties can put some trust in the provided identities simply because … people invest time and effort in their Facebook identity and generally would not throw it away just to post rubbish on some forum. In other words, relying parties will like Facebook identities. They trust one Identity Provider, Facebook, and they get hundreds of millions of fairly trustworthy identities.

But what is in it for Facebook? A lot!

For Facebook, the benefit is also clear. Users now have even more incentive to be constantly logged into Facebook (those who are already logged into Facebook don’t have to do anything to comment on a website using its system). Additionally, even more of Facebook’s users’ net activities flow through its site, since by default comments — and replies to them — post to a Facebook user’s wall. That deepens users’ ties to Facebook, adds more content to Facebook, and gives people more reason to check their Facebook newsfeed for the increased information flow.

By allowing relying parties to use Facebook identities to realize a comment system on their site, Facebook actually generates value for themselves. A win-win it seems.

Facebook, with their Facebook Connect, wants to be the primary Identity Provider on the Internet:

It also builds on what’s becoming Facebook’s most important function: being the identity provider and validator for the wider net. The system opens the door for what’s likely inevitable: having news sites rely on Facebook to identify its users and eventually to serve ads to its readers based on their individual Facebook pages.

Both issues are gone: the Identity Provider (Facebook in this case) can have a very viable business model and the relying party has an Identity Provider they can trust, one that brings hundreds of millions of identities.

So, all well now in the world of Identity? I don’t think so. The relationship between Facebook and relying parties is not a balanced one. Relying parties are clearly at the mercy of Facebook:

… just as Facebook jealously holds onto the e-mail addresses of the people you are connected to on Facebook so you can’t re-establish your network on some other site.

Promising as it may seem, this type of unbalanced relationship should not satisfy us. So, for those still active in the field of Internet Identity, what do you think about this?

Challenge Questions … a tale from reality

Last weekend I was in a shop buying a new subscription for my mobile phone. As usual I was hit by an avalanche of questions asking for my name, address, shoe size … One question in particular caught my attention …

The shop attendant asked me “for a password”, a password I could use if I couldn’t go to one of their shops and had to call their call center. By supplying the password to the call center they could verify it was really me.

I don’t know about anyone else but I personally call my mobile subscription provider about once every 5 years. The service they offer just works, rarely needs change and if it does need change, they often have a website to help you out. Chances are high I would have completely forgotten the password by the time I had to call them.

The conversation with the shop attendant went like this:

  • Attendant: “You have to choose a password, a password you can use when calling our call center.”
  • Me: “Oh … hmm … how many times can I guess when I am calling in?”
  • Attendant (realizing I was afraid of forgetting the password): “We will give you hints if you forget it.”
  • Attendant (still aiming to give excellent customer service): “People often choose the PIN code of their credit or debit card …”
  • Attendant (now realizing not everyone liked this idea): “… but of course you don’t have to, you can pick any word.”

At that moment I tried not to, at least not obviously, show my emotions about this conversation.

This method of verifying who is calling is flawed, to say the least. Due to the very low frequency with which people have to call in, most will have forgotten their password. Unless of course they used their PIN code, which they hopefully still remember. Since call center employees can obviously read the password (they need to verify it), they have clear-text, and probably unnoticed, access to a lot of PIN codes. Do I need to add that call center employees are not the most loyal employees you can find?

The fact that they give hints when you call in is stupid as well. Not only do they admit their system is flawed by design, they also help undermine it themselves. Imagine this conversation when a hacker calls in:

  • Hacker: “Hi, I am John and I need to change my subscription plan.”
  • Call Center: “Hi John, could you give us your password please?”
  • Hacker: “Oh, I forgot it … could you give me a hint?”
  • Call Center: “It looks like your birth year or perhaps your PIN code.”
  • Hacker (after a quick look on Facebook): “My year of birth? That should be 1975.”
  • Call Center: “Sorry John, that is not correct, perhaps it’s your PIN code?”
  • Hacker: “No, I would never give my PIN code to you, but could you give me the first digit? Perhaps I’ll recognize it.”
  • Call Center (reassured it could not be his PIN code): “It starts with 5, John.”

I am sure someone much more experienced in social engineering (I have virtually none) can get someone’s PIN code this way.
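
The root flaw is that the call center stores and reads the secret in clear text. A minimal sketch of an obvious alternative: store only a salted hash, so staff can check a caller’s answer without ever seeing the stored value (and hence never see a PIN a customer unwisely reused). This is my own illustration, not anything the provider actually does.

```python
import hashlib
import hmac
import os

def enroll(secret):
    # Store a random salt plus a slow salted hash, never the secret.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return salt, digest

def verify(stored, attempt):
    # Recompute the hash of the caller's answer and compare in
    # constant time; the agent never sees the original secret.
    salt, digest = stored
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(digest, candidate)

record = enroll("1234")
print(verify(record, "1234"))  # True
print(verify(record, "4321"))  # False
```

Of course this also makes “hints” impossible, which, given the conversation above, is exactly the point.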

OpenID to avoid Phriend Phishing on Twitter?

Johannes Ernst suggests that using OpenID might be a good way to avoid phriend phishing on Twitter:

Should have guessed that Phriend Phishing was first going to happen to somebody famous.

Now, how could that have been prevented?

What if:

  • Twitter adopted OpenID as the only way of authenticating.
  • Twitter showed the authenticated OpenID identifier instead of a (possibly made up) user handle on all tweets.
  • Kanye West would have used his official website URL as his OpenID.
  • Ergo, everybody could follow the OpenID to determine whether any phriend phishing is going on or not.

I admit that scenario is not entirely viable yet. For example, users are not familiar and comfortable enough yet with OpenID that a major-volume site like Twitter could switch to OpenID-only. But it’s close, and that’s the kind of adoption barriers that we need to work on over the next 12-18 months in the OpenID community.

I don’t know how OpenID can help solve this issue. Changing someone’s Twitter ID to their authenticated OpenID does not move us forward, for the following reasons.

First, OpenIDs are assigned on a first-come, first-served basis. I can pick any OpenID provider and register “http://BradPitt.<openidprovider>.com”. Even if some OpenID providers validate your request, others won’t, so users have no clue what to assume about an OpenID.

Second, even when you pick your homepage as your OpenID (using some mechanism of OpenID delegation), the user has no way to know which one of these is the right one:

And last, what happens if someone is also named “Brad Pitt”? Is he not allowed to claim the OpenID “”?

I think OpenID has many added values, especially in the world of social media, but for the moment I don’t think owner assurance is one of them. With OpenID I can be fairly sure the tweet came from someone owning that particular OpenID. But OpenID does not guarantee that the name used in the OpenID URL itself points to that owner.

(EIC-2009) Claims is what we have, claims is what it will be

[Blogged from the 2009 edition of the European Identity Conference]

Slowly I am getting the impression that Microsoft is going to use claims for everything even remotely related to identity. During discussions at EIC 2009 I understood that Microsoft is also positioning claims for authorization. This is not entirely new: in their Azure Cloud Services offering they already positioned claims this way, for authorization.

I am not feeling 100% comfortable with this. Without more detailed information on how Microsoft will realize it, I can hardly judge their strategy, but here are some of my worries:

  • Is the claims request/response protocol capable of supporting authorization? XACML obviously has a richer model that allows for fine-grained context descriptions and nuanced responses (e.g. obligations). With claims it’s much simpler.
  • Using claims for authorization makes the solution attribute driven. That opens the door for a heavy dependency on roles: roles are easy to represent in a claim. As far as I know Microsoft doesn’t have a solution to manage roles. Perhaps they have something on the horizon?
  • Microsoft already indicated that roles as we use them today are incomplete. They are looking for a standard way to accompany roles with predicates. For instance: “The user has the role ABC but only between 9AM and 5PM”. I can agree with the usefulness and semantics of this roles-predicates marriage but I smell a Pandora’s box: so many ways to mess this up.
  • Claims are a powerful concept and we can thank Kim Cameron and Microsoft for defining and pushing this. But there is this saying in Flanders “if the only tool you have is a hammer, everything looks like a nail”. This is a real trap for Microsoft: using claims to solve every IAM problem. I see the first signs of claims semantics being stretched too far.
  • Lastly, I informally heard some ideas on how they will align SharePoint with the claims world (specifically in terms of authorization). Policies would not evaluate to authorization responses but might evaluate to query-like structures that SharePoint can use to select the resources you have access rights to. I am not sure I understood this correctly, so I will hold back on commenting.
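
The roles-with-predicates idea from the bullets above can be sketched in a few lines: a claim carries a role plus a validity predicate. The structure and names below are entirely my own invention, not any actual Microsoft format.

```python
# A hypothetical claim carrying a role plus a predicate: "role ABC,
# but only between 9AM and 5PM". Structure invented for illustration.
claim = {
    "role": "ABC",
    "predicate": {"hours": (9, 17)},  # half-open interval, 9AM-5PM
}

def role_active(claim, hour):
    # Evaluate the predicate at request time; the role only "counts"
    # when its predicate holds.
    lo, hi = claim["predicate"]["hours"]
    return lo <= hour < hi

print(role_active(claim, 10))  # True
print(role_active(claim, 20))  # False
```

Even this toy version shows where the Pandora’s box sits: every consumer of the claim must implement predicate evaluation identically, or the same claim means different things in different systems.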

It will be interesting to see how all this evolves in 2009 and 2010. I expect more on this during EIC 2010.

Day two @ EIC 2009

I haven’t blogged about the European Identity Conference since it started, although I have to say I made up for it by using Twitter (@bderidder) during most of the keynotes and presentations. I was present at the very first EIC in 2007, skipped the 2008 edition and joined the 2009 edition again. That gives me a nice opportunity to see how this conference has evolved during its first three editions.

It has evolved … and mostly in a (very) positive way. Kuppinger Cole succeeded in creating a strong conference agenda with all important IAM and GRC topics covered. Even the catering is perfect! That was not really the case in 2007 during the first edition 😉

I do see a difference though. In 2007 there was this “grassroots” atmosphere. We had a lot of people working on emerging standards like Bandit, Higgins, OpenID, VRM … There was this constant buzz during the presentations, breaks and evening visits to Munich. Everyone felt as if they were part of this new thing called “Identity”.

The 2009 edition is different. It’s definitely a lot more mainstream. There is less of a buzz (if at all). I think that can mean two things. One, EIC is scheduling more “serious” presentations and, two, Identity has matured into something … well … mainstream. As always in these cases, it’s a little of both.

Heavily scheduling presentations about GRC (Governance, Risk and Compliance) is bound to create a more professional (dare I say boring) atmosphere. But, and that is a good thing, Identity is also a lot more mature. Most of the bleeding edge topics in 2007 are now being presented as commercial products and consultancy offerings. The best example would be all the offerings you can see around claims and XACML. Topics like OpenID or SAML are not exotic anymore. They have become well accepted in the industry. One topic didn’t seem to make it though. “User centric identity” was lost somewhere in the last 2 years. It’s being recycled in the VRM (vendor relationship management) community but with less fanaticism.

Relating to my remark on GRC, hinting at it being a boring subject, I have to make a correction. It’s definitely not a boring subject. I would also say that Kuppinger Cole is absolutely right to schedule it on the agenda. But you have to admit, it’s a more specialized subject with little to no “sexy” technical aspects.

The conference is not finished, it’s not even halfway, yet I think I can draw a couple of preliminary conclusions on what I will be taking home on Friday evening:

  1. Identity has matured, most of the exotic topics two years ago are now mainstream and being turned into products by Oracle, Sun, Microsoft, IBM … and numerous other larger and smaller players in the market. Clients also notice these offerings and buy them.
  2. It’s not clear if the current level of maturity of Identity is sufficient. There haven’t been any presentations on this and Kuppinger Cole is not making statements on it. Unless it’s about GRC of course, but what about other aspects? There are bound to be gaps in Identity right now, and they are being forgotten in all the happiness surrounding claims, federation …
  3. There is a lot of talk about GRC, both in presentations and during breaks. Nevertheless, I personally still perceive it as something at a conceptual (hype?) level. That is at least the overall impression I got at this conference. Topics like these, high level business concepts, always carry a risk of remaining empty. It’s very easy to talk an entire day about GRC without knowing a thing about it, it’s a lot harder to do that with topics that have a direct technical link.
  4. Authorization is massively misunderstood and apparently has yet to reach the maturity level Identity currently has. Whenever the word “authorization” is dropped, people either go RBAC or think it’s about claims. It will probably take more than one year (and conference) to get this right.

I have probably forgotten some conclusions, but since the conference is not over yet, I will get another chance to write about those.

For what it is worth, some advice for a 2010 conference:

  • Try to recreate some of that 2007 “grassroots” atmosphere; there are plenty of topics that can do this, in Identity, Authorization and hopefully GRC as well.
  • Turn the GRC topics into something with real and tangible content. It’s so easy to talk about GRC without actually saying anything.
  • GRC brings IAM to the world of “Business ICT Alignment”, that means to the world of Enterprise Architecture. So … where are the IAM and Enterprise Architecture topics?
  • Authorization definitely should come back, hopefully with the message that it is not about RBAC and not about claims. Those are merely tools and technologies that will have a much shorter lifespan than authorization itself. We have to dig deeper and unravel more of what authorization is really all about.
  • And last, an Identity Award for the longest blog post about day 2 of EIC 2009. Thank you.

Embarrassing Facebook Moments

Paul Madsen confesses he was able to avert, just in time, an embarrassing Facebook moment:

On what started as an innocuous thread on the relative merits of curling and football, comments were made by a non-work friend that, while completely appropriate to the relationship between myself and the commenter (we having a long history of questioning each other’s masculinity and mental health), were not appropriate for a work context (or 98% of any other contexts it must be said).

Paul also points to what he thinks is the root cause:

The fact that my Facebook friends list is an aggregation of both work and non-work hit home yesterday.

Facebook allows me to create lists but not, AFAICT, use those lists to compartmentalize through differentiated permissions, e.g. allow members of one list to participate in a thread and not another.

Since the first day I started using Facebook, I have felt very uncomfortable with the way various friend lists are managed. On Facebook, you always risk embarrassing “red face” moments when you have different types of friend lists (work and friends, for instance).

Facebook does have various settings related to privacy, who is allowed to see what, etcetera. But honestly, even I sometimes find it difficult to configure those in a way that I am confident no information spills from one group to another. Currently I have even practically closed down my Facebook profile for everyone who is not a close friend. If you are not a close friend, you will only see some very basic information about me and that’s it (if you do see more and don’t consider yourself a close friend, drop me a line ;). But even with all that careful configuration work, I know I will one day face a “Paul Madsen Moment” on Facebook.

Clearly, offering a bunch of configuration settings like Facebook does, does not solve the issue. First, it becomes (too) complicated very fast and second, even when configured properly, it still has holes. Who has a good solution that works in complex environments like Facebook?

Hotel locations for the European Identity Conference 2009

For the upcoming European Identity Conference 2009 (a conference I can recommend) organized by Kuppinger Cole, I was looking for the nearest hotels that have special conference rates. The list on the conference site only gives names and addresses. Since I have no clue what is where in Munich, it’s not easy to see where they are located in relation to the conference center.

To make this task easier, I created a Google Map that shows marks for the conference location and all listed hotels. I hope this is useful to others as well.

If you encounter any errors don’t hesitate to contact me!

Authorization Management and Attestation

A good read on authorization management from Kuppinger Cole (the author is the Kuppinger part, Martin). One paragraph I could relate to:

There is another interesting aspect of Authorization Management: Dynamic Authorization Management. Most of today’s approaches are static, e.g. they use provisioning tools or own interfaces to statically change mappings of users to groups, profiles, or roles in target systems. But there are many business rules which can’t be enforced statically. Someone might be allowed to do things up to a defined limit. Some data – for example some financial reports – have access restrictions during a quiet period. And so on. That requires authorization engines which are used as part of an externalization strategy (externalizing authentication, authorization, auditing and so on from applications) which provide the results of a dynamic authorization decision, according to defined rules, on the fly.

Most organizations I know are kind of stuck in static authorization management. It’s all about groups (and roles) that need to be populated by IAM tools, even when the rules to do so lean towards dynamic authorization management. Sometimes they just have to: platforms like Microsoft SharePoint depend largely on groups to perform authorization.
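
The static/dynamic contrast from the quoted paragraph fits in a few lines of pseudo-real code. Static: provisioning put you in a group up front, and the check is a mere lookup. Dynamic: the rule (“up to a defined limit”, “not during the quiet period”) is evaluated on the fly at request time. All names and numbers are illustrative.

```python
# Static authorization: group membership was provisioned in advance,
# so the runtime check is just a set lookup.
GROUPS = {"finance-readers": {"alice"}}

def static_check(user):
    return user in GROUPS["finance-readers"]

# Dynamic authorization: the business rule is evaluated per request,
# against request-time facts no provisioning tool could precompute.
def dynamic_check(user, amount, quiet_period):
    return static_check(user) and amount <= 10_000 and not quiet_period

print(static_check("alice"))                  # True
print(dynamic_check("alice", 50_000, False))  # False: over the limit
print(dynamic_check("alice", 5_000, True))    # False: quiet period
```

No amount of group provisioning can express the last two denials; that is exactly what an externalized authorization engine is for.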

Also note the “European Identity Conference” organized by Kuppinger Cole. I was lucky to attend the first edition (as a speaker) and can warmly recommend this conference to anyone interested. The atmosphere is great, the content in-depth and there is a high concentration of (identity) brains. Now, do I qualify for free registration as a blogger? 🙂

LDAP Referential Integrity

An old issue with LDAP servers has found the spotlight again: referential integrity. This time it’s a call for attention made by James:

I also asked the question on How come there is no innovation in LDAP and was curious why no one is working towards standards that will allow for integration with XACML and SPML. I would be happy if OpenDS or OpenLDAP communitities figured out more basic things like incorporating referential integrity.

Pat pointed James to what he thinks is proof of support for referential integrity in LDAP (OpenDS, OpenLDAP and any Sun derivative):

For some reason, James has a bee in his bonnet over referential integrity and LDAP. I’m really not sure where he’s coming from here – both OpenDS and OpenLDAP offer referential integrity (OpenDS ref int doc, OpenLDAP ref int doc), and Sun Directory Server has offered it for years (Sun Directory Server ref int doc). Does this answer your question, James, or am I missing something?

I can’t answer for James of course, but if I had been asking that question … no Pat, it does not answer my question. Well, it kind of does, since it confirms that those LDAP incarnations have limited to no support for decent referential integrity. Let’s follow one of Pat’s links and see what it says (Sun Directory Server ref int doc):

When the referential integrity plug-in is enabled it performs integrity updates on specified attributes immediately after a delete, rename, or move operation. By default, the referential integrity plug-in is disabled.

Whenever you delete, rename, or move a user or group entry in the directory, the operation is logged to the referential integrity log file:


After a specified time, known as the update interval, the server performs a search on all attributes for which referential integrity is enabled, and matches the entries resulting from that search with the DNs of deleted or modified entries present in the log file. If the log file shows that the entry was deleted, the corresponding attribute is deleted. If the log file shows that the entry was changed, the corresponding attribute value is modified accordingly.

So it seems that Sun Directory Server lets you delete a user but promises only to do its very best to delete any references to this user within an “update interval”. It does not mention what a read after the deletion, but before the plug-in kicks in, will see. Will it still see the user as a member of a group although the user is deleted? I am pretty sure it will. This is, at least for me, enough proof that this functionality does not offer referential integrity. At best it offers some kind of deferred cascading deletes (or updates), with no semantics for reads done in the interval between the original operation and these cascaded deletes and updates.
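
The stale-read window is easy to simulate. The sketch below models the documented behaviour in a toy form: a delete is logged, and references are only cleaned up when the interval job runs; any read in between still sees the dangling member. It is my own illustration, not Sun’s implementation.

```python
# Toy model of deferred reference cleanup: deletes go to a log, and a
# periodic job cleans up references later.
users = {"alice", "bob"}
groups = {"admins": {"alice", "bob"}}
refint_log = []

def delete_user(name):
    users.discard(name)
    refint_log.append(name)   # cleanup deferred to the interval job

def interval_job():
    # What the referential-integrity plug-in does at the update
    # interval: purge logged entries from all group memberships.
    while refint_log:
        gone = refint_log.pop()
        for members in groups.values():
            members.discard(gone)

delete_user("alice")
print("alice" in groups["admins"])  # True  -- the stale-read window
interval_job()
print("alice" in groups["admins"])  # False -- references cleaned up
```

That first `True` is the whole argument: between the delete and the interval job, the directory happily serves a reference to a user that no longer exists.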

Does this mean an LDAP server is something to avoid in any production environment? Absolutely not! In fact, I am not even sure an LDAP server should offer “real” referential integrity at all. If you need that kind of guarantee, you are not far from full transaction support either, so why not upgrade to a relational database? Just my 2 cents of course.

To Sun (and any other LDAP implementer): what would the impact be on read/write performance in LDAP if you implemented full referential integrity?