And the solution is … SSL!

Today I attended a talk from Microsoft about their new Azure cloud computing platform. They had hired David Chappell to present the opening sessions, which introduced the overall concept and the specific offering Microsoft is making in this area.

It was all interesting; David Chappell is a gifted speaker. At one point, however, I was disappointed. David was explaining how REST is a very good choice for communicating with cloud services. Amazon, Google, Microsoft … they all have cloud data services that can be accessed through a REST API. Someone in the audience asked how, if they don’t use SOAP with WS-*, these services can be secured. David’s answer came quickly: “oh … use SSL … it’s only one endpoint talking to another endpoint, SSL can secure that”.

The days when there was a single network connection between the consumer (client) and the producer (server) are over. At both ends, the message passes through various firewalls, gateways, and messaging infrastructure before being delivered to the real message endpoints. You can’t use SSL to protect it all: SSL only secures a single network connection.

With current solutions and architectures, people need to understand that message-level (or application-level) security can never be achieved by depending on transport-level security alone. There are simply too many hops between the two endpoints. You need appropriate security controls at the message level as well. If people insist on using REST for scenarios that go beyond low assurance needs, they must think about message-level security and trust controls that are independent of the transport layer. If we continue to neglect message-level security in REST while at the same time promoting REST for cloud data services, we are headed for a security nightmare in the near future.
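To make the distinction concrete, here is a minimal sketch in Python (purely illustrative; the shared key, function names and payload are all made up) of message-level integrity: the message itself carries an HMAC signature, so the final endpoint can verify it no matter how many hops it took, and regardless of whether each hop used SSL.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # hypothetical pre-shared key between the two endpoints

def sign_message(payload: dict) -> dict:
    """Attach a signature that travels with the message through every hop."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    signature = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": payload, "signature": signature}

def verify_message(message: dict) -> bool:
    """The final endpoint verifies the message itself, not the connection."""
    body = json.dumps(message["body"], sort_keys=True).encode("utf-8")
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

# The signed message can pass through any number of untrusted intermediaries;
# tampering at any hop invalidates the signature.
msg = sign_message({"order": 42, "amount": 10.0})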

The topic of REST and security is also definitely not new.

While we are on the topic of “build security in, from the start” … may I kindly ask both Microsoft and Adobe to support the WS-* security standards in their RIA technologies (Silverlight and Flex)? If Microsoft really is as serious about security as they so often claim these days, then why does Silverlight 2.0 not support web services security beyond plain SSL? It looks like a missed opportunity for which we will pay dearly in a couple of years.

On the bright side … many of the services Microsoft will offer on their Azure platform will have full, first-class support for claims-based access control. At least standards-based authentication is possible. They do seem to think it also solves the authorization problem … that is wrong. Perhaps more on that later.

From here to there … AS-IS and TO-BE

All people in ICT have come across projects that replace a current situation (the AS-IS) with a desired future situation (the TO-BE). At first sight it looks great: you analyze the current situation, including its shortcomings and issues, and document it in an AS-IS document. Then, based on this AS-IS and various gathered requirements, you design and document the new state: the TO-BE. Looks good, right? Not to me …

The drivers for these projects are often the same:

  1. an increasing perception that the current system is unable to fulfill the needs the system was created for in the first place.
  2. a general feeling that extending the current system, for instance to solve some of the issues, is becoming too expensive or too complex.

Before I start designing a future TO-BE, I would like to know why the system as it is today, the AS-IS, no longer fulfills its goals. Is it because technology has changed significantly during its lifetime? Is it because the people who designed, developed and maintained the system haven’t done a decent enough job? Is it because the system, once a perfect fit for the problem, started to lose alignment with its environment, slowly being rendered obsolete and in need of replacement?

Without proper answers to these questions, and without a proper response in the TO-BE, that TO-BE is surely destined to become your next AS-IS. In a few years we will no doubt witness a presentation explaining how the AS-IS (the TO-BE we are building today) is no longer good enough and needs to be replaced with something new and more modern.

The world is constantly changing, and so are your company and the environment it lives in. Any ICT system that operates as part of your company must be able to change to keep in line with that changing environment. If you only focus on building a static architecture that cannot adapt to change, you are doomed to recreate the system, in the form of a desired TO-BE, every couple of years.

Only during a small part of such a system’s existence is it properly aligned with actual requirements. Most of its lifespan is spent either complaining about the lack of alignment or promising improvement with the upcoming TO-BE.

I therefore don’t really believe in this AS-IS and TO-BE methodology. When you realize you are lagging behind while the world around you is changing, you won’t solve the problem by desperately catching up to the present. Because by the time you have finally caught up (the TO-BE is delivered), you are already lagging behind again. Even if you went to great lengths to make that TO-BE as flexible as possible, you can never predict the future. If you can, give me a call.

What you want is a process that:

  1. periodically measures how well the system is aligned to the environment,
  2. identifies those elements of the system that are in danger of losing alignment,
  3. proposes gradual changes to the system to improve alignment.

Note how nowhere in this process do we propose to redesign and reimplement the system. At a smaller scale this technique is well known in software development: it’s called refactoring. This is exactly what you also want to do at the larger scale of your architecture: refactor mercilessly. Refactoring should not be limited to the development phase but should be an integral part of the entire life cycle of a system.

Given a proper refactoring process and the obvious current (AS-IS) state of a system, I can gradually improve and align that system with an ever-changing environment until the need for the very existence of the system itself disappears. I should avoid a big-bang approach that proposes and develops a brand-new TO-BE system.

Building for change is not a new slogan, yet it is neither well understood nor well implemented. Every day, projects are born that are meant to create a new TO-BE and, sadly enough, at the same time the AS-IS of tomorrow.

Flex … not that flexible it seems …

Over the last few weeks I have come into contact with Adobe Flex, building a separate Flex front-end that talks to a back-end using web services. The advantage would be rapid development of a front-end that can use the tons of fancy UI features Flex offers.

After a few proofs of concept I quickly ran into several issues. The most important: Flex does not support WS-Security. Read that again: Flex does not support WS-Security. Note that Flex is positioned as a technology that prefers web services to talk to the back-end. Also check this bug report. There are some tutorials that explain how you can cheat and add WS-Security headers yourself, but this is obviously limited to simple headers and does not include signing or encryption.
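For reference, the kind of hand-rolled header those tutorials describe amounts to string-building a WS-Security UsernameToken and injecting it into the SOAP envelope yourself. A rough sketch (in Python rather than ActionScript; the function name is mine, the namespace URI is the standard WS-Security 1.0 one):

```python
from xml.sax.saxutils import escape

# Standard namespace URI for the WS-Security 1.0 wsse: prefix.
WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def wsse_username_token(username: str, password: str) -> str:
    """Build a plain-text UsernameToken header block.

    This is the 'cheat' level of WS-Security: a bare username and
    password in the header. No password digest, no XML signature,
    no encryption -- exactly the limitation mentioned above.
    """
    return (
        f'<wsse:Security xmlns:wsse="{WSSE_NS}">'
        f'<wsse:UsernameToken>'
        f'<wsse:Username>{escape(username)}</wsse:Username>'
        f'<wsse:Password>{escape(password)}</wsse:Password>'
        f'</wsse:UsernameToken>'
        f'</wsse:Security>'
    )

header = wsse_username_token("alice", "secret")
```

Anything beyond this, such as signing the body or encrypting parts of the message, requires real XML Signature and XML Encryption support, which is precisely what Flex lacks.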

I wonder how Adobe can keep positioning Flex as a great, enterprise-capable way of creating portable rich front-ends when it doesn’t support WS-Security. In fact, it doesn’t support any of the WS-* standards.

Not supporting WS-Security is one thing, it might be on the road map but not yet implemented. There is however something else in that bug report that caught my attention …

dashes (-) are not allowed while naming things like classes, variables, attribute, etc in AS3. The elements named with dashes, when mapped to AS3 objects will not compile.

Gasp. Disallowing dashes in names is something other languages do as well, but having a standard mapping (XML/SOAP to ActionScript 3) that does not take this into account is worse. They obviously didn’t test the mapping extensively, or they did and ignored the results.
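To illustrate what the mapping layer would need: XML happily allows dashes in element names, but AS3 identifiers (like those of most languages) don’t, so a robust XML-to-object mapping has to rename elements on the way in. Sketched in Python (the camel-casing rule here is my own choice for illustration, not anything Flex does):

```python
import re
import xml.etree.ElementTree as ET

def to_identifier(name: str) -> str:
    """Turn an XML name like 'first-name' into a valid identifier
    like 'firstName'. A production mapping needs some such rule
    (plus a way to round-trip it); the Flex mapping apparently has none,
    so these elements simply fail to compile."""
    return re.sub(r"-(\w)", lambda m: m.group(1).upper(), name)

# '<first-name>' is perfectly legal XML but not a legal AS3 property name.
xml = "<person><first-name>Ada</first-name><last-name>Lovelace</last-name></person>"
obj = {to_identifier(child.tag): child.text for child in ET.fromstring(xml)}
# obj == {"firstName": "Ada", "lastName": "Lovelace"}
```

The renaming itself is trivial; the point is that a standard mapping that simply breaks on legal XML names was never exercised against realistic schemas.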

For me, as a developer, this is also an indication that their underlying code mapping XML to ActionScript objects started as a quick & dirty implementation to support simple demos and somehow grew into code that shipped in the production version. The fact that they don’t support any of the WS-* standards only supports this theory.