Cloud IT as an Architectural Style

Martin Kuppinger from Kuppinger Cole, known from the excellent European Identity Conference, wrote a very interesting article on cloud computing: “It’s not about the cloud – it’s about Cloud IT”.

But the deeper you dive into the topic of cloud computing, the more obvious it becomes that this cloudy thing called “cloud” (usually associated with the Internet and the things provided there) isn’t the key thing. The key to success is that companies understand the value of Cloud IT.

What does this mean? Cloud IT stands for consistently applying cloud principles in IT – in every part of IT, not only for consuming some external services. That includes

  • well-defined services (SLAs!!!)
  • consistent service management across all services, regardless of where they run (and, based on that, a consistent approach to cloud governance)
  • applications which are agnostic of where they run and of which hardware resources are available – there may be parameters that limit an application’s ability to run everywhere, and the application has to accept the hardware resources currently available while understanding that those resources can change dynamically (see the sketch after this list)
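
As a minimal sketch of that last point – assuming nothing beyond the Python standard library, and with an invented sizing rule – an application can derive its worker pool from whatever resources it finds at startup instead of hard-coding them for one deployment environment:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def current_worker_count() -> int:
    """Size the worker pool from the resources actually available
    right now, instead of assuming a fixed deployment environment."""
    cpus = os.cpu_count() or 1   # cpu_count() may return None on some platforms
    return max(1, cpus * 2)      # illustrative sizing rule, not a recommendation

# The pool is built from observed capacity; if the application is moved
# to different hardware, the same code adapts without changes.
with ThreadPoolExecutor(max_workers=current_worker_count()) as pool:
    results = list(pool.map(lambda x: x * x, range(10)))
    print(results)
```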

Defining everything in IT as services in a consistent manner is a fundamental change and the foundation for flexible use of cloud services. Once you have made that move, you can decide (based on the parameters of a service) which service provider – internal or external – you will use. The first step, then, is making your IT “cloud-ready”, i.e. moving towards a Cloud IT. Without that, using cloud services will always be tactical rather than strategic.
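To make that decision point concrete, here is a minimal sketch with invented names (a Provider record and a tiny hard-coded catalogue standing in for a real service registry): once each service carries explicit parameters such as promised availability and data location, choosing an internal or external provider becomes a lookup against those parameters rather than an architectural rewrite.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    internal: bool
    availability: float   # fraction of uptime promised in the SLA
    data_in_eu: bool      # where the provider keeps your data

# Illustrative catalogue; in a real Cloud IT this would be a service registry.
providers = [
    Provider("own-datacenter", internal=True,  availability=0.995, data_in_eu=True),
    Provider("cloud-vendor-a", internal=False, availability=0.999, data_in_eu=False),
    Provider("cloud-vendor-b", internal=False, availability=0.999, data_in_eu=True),
]

def select_provider(min_availability: float, require_eu_data: bool) -> Provider:
    """Pick a provider purely from service parameters; whether it is
    internal or external is an outcome, not an upfront choice."""
    for p in providers:
        if p.availability >= min_availability and (p.data_in_eu or not require_eu_data):
            return p
    raise LookupError("no provider satisfies the service parameters")

print(select_provider(min_availability=0.999, require_eu_data=True).name)
# -> cloud-vendor-b
```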

On the last day of the 2009 edition of the European Identity Conference I participated in a workshop on cloud computing and identity with Martin. In the workshop I told Martin that for me, as an architect, the most interesting aspect of cloud computing is not the ability to host your application logic externally but the renewed, global attention to various architectural patterns.

The undercurrent of most of these patterns is a high degree of abstraction and transparency combined with simplicity (not the bad kind, the good kind). In other words: keep it simple, abstract away everything that is not part of your application, and don’t depend on the environment you are running in (network transparency, for instance). The advantages of following these principles are becoming more obvious thanks to cloud computing: scalability, continuity, flexibility, reusability …

Those patterns can equally be applied to classical internal IT. Yet you rarely see this except at the application level. Cloud computing forces you into this way of thinking; traditional IT gives you enough escape hatches, not least because vendors keep selling solutions that stifle innovation. Take the infamous network transparency as a simple example: it has been demonstrated over and over again in the last three decades to be achievable (see for example the Inferno operating system), yet most commercial solutions still expose the network to you. So many good “inventions”, but so little uptake from vendors.

In conclusion, I can only join Martin in his advice: get your IT cloud-ready, move to a Cloud IT. Even if you will never ever actually move to the cloud. And more importantly, put pressure on your vendors to force them to innovate!

Fake SOA

I came across an article by Anne Thomas Manes, who is probably best known for her piece on the death of SOA. The article contains an interesting quote (emphasis mine):

Most organizations that I’ve spoken with are using service-oriented middleware to do integration (SOI rather than SOA). Very few companies are actually rearchitecting their systems, i.e., simplifying their applications and data architectures in order to increase agility.

Most if not all “SOA efforts” I have come across in the last couple of years suffer from exactly this. The prime focus is on integration technologies: use a service bus as integration middleware. It is no surprise that most ESB products have an EAI background and simply reinvented themselves as ESBs.

The second interesting item in the article (again, emphasis mine):

Instead they are using WS-* or something similar to implement open interfaces to their existing applications (i.e., JABOWS). Over time, JABOWS typically results in increased architectural complexity and systems that are more fragile and more expensive than ever before. Although initially the initiative appears to be successful, the long term effect is actually a failure.

In a previous job I regularly questioned the abundant use of SOAP and WS-* to create a service-oriented architecture. JABOWS (Just A Bunch Of Web Services) is definitely not the same as SOA and indeed often results in a far worse architecture. SOA is not so much about the technology realizing the interfaces; it is about the services you define as part of an overall architecture.

Disturbances in the cloud

Cloud computing is cool, no doubt about that. Never before have so many good-looking, futuristic schematics been drawn in Visio, and thousands of presentations, workshops and even conferences have been held on the subject.

One question, however, has not been clearly answered yet … what about data ownership? What about the privacy of that data? When your applications run in the cloud, you are also handing over your data to whoever runs the data center. How sure are you that they protect this data as they should? What about these situations:

  1. Your cloud partner goes out of business and your data becomes a valuable asset that can be sold to pay off debt. How well are you protected from this scenario? Or … what are the guarantees about confidentiality? Think SalesForce …
  2. Your cloud partner goes out of business without any warning; your applications are offline and your data is not accessible. Worst case you get a couple of days’ notice, best case a couple of weeks. Does your disaster recovery plan take this into account? How fast can you move to a new cloud partner or to your own data center? How much data will you lose? How recent is the data you go online with after recovery?
  3. Your cloud partner decides to disable a feature in their application, a feature you depend on. Does your disaster recovery plan take this into account? This is not far-fetched: in a small way this is what happened when Microsoft decided to disable anonymous comments on their Live Blog. They even did this retroactively, revealing identity information of authors who had previously been anonymous.

None of these scenarios is purely technical in nature, and none of them is far-fetched. You can probably think of many more realistic, bound-to-happen situations.

In relation to the third scenario … how many companies run application versions that are far behind the latest public version, purely because of functionality or compatibility they depend on? At least all of the companies I have come into contact with are in this situation. If you run everything on your own servers, you have more control than you might imagine at first. Companies should do their homework when moving some of this into the cloud: they often give up far more control than they think they do, and more than they want to. Contracts alone won’t solve it either.

And the solution is … SSL!

Today I attended a talk from Microsoft about their new Azure cloud computing platform. They had hired David Chappell to present the opening sessions, which introduced the overall concept and the specific offering Microsoft is making in this area.

It was all interesting; David Chappell is a gifted speaker. At one point, however, I was disappointed. David was explaining how REST is a very good choice for communicating with cloud services: Amazon, Google, Microsoft … they all have cloud data services that can be accessed through a REST API. Someone in the audience asked how, if they don’t use SOAP with WS-*, these services can be secured. David’s answer came quickly: “oh … use SSL … it’s only one endpoint talking to another endpoint, SSL can secure that”.

The days when there was always a single network connection between the consumer (client) and the producer (server) are over. On both sides, the message passes through various firewalls, gateways and messaging infrastructure before being delivered to the real message endpoints. You can’t use SSL to protect all of that; SSL only protects a single network connection.

With the current solutions and architectures, people need to understand that message-level (or application-level) security can never be achieved by depending on transport-level security alone: there are simply too many intermediaries between the two endpoints. You need appropriate security controls at the message level as well. If people insist on using REST for scenarios that go beyond low-assurance needs, they must think about message-level security and trust controls that are independent of the transport layer. If we continue to neglect message-level security in REST while at the same time promoting the use of REST in cloud data services, we are headed for a security nightmare in the near future.
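
To make the distinction concrete, here is a minimal sketch of message-level integrity protection, assuming a pre-shared secret between the two real endpoints (the message layout and secret handling are illustrative assumptions, not any WS-* or REST standard): the signature travels with the message body itself, so intermediaries that terminate SSL can forward the message without having to be trusted.

```python
import hmac
import hashlib
import json

# Hypothetical shared secret between producer and consumer; in practice
# this would come from a key-management system, not a literal in code.
SHARED_SECRET = b"example-secret"

def sign_message(payload: dict) -> dict:
    """Attach an integrity signature to the message body itself, so it
    stays verifiable across any number of intermediate hops."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return {"body": payload, "signature": signature}

def verify_message(message: dict) -> bool:
    """Verify the signature at the final endpoint; an SSL-terminating
    gateway in between cannot tamper with the body undetected."""
    body = json.dumps(message["body"], sort_keys=True).encode("utf-8")
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

if __name__ == "__main__":
    msg = sign_message({"order": 42, "amount": 99.95})
    print(verify_message(msg))       # True: body intact end to end
    msg["body"]["amount"] = 0.01     # tampering at an intermediary
    print(verify_message(msg))       # False: detected at the endpoint
```

Note what SSL cannot give you here: once a gateway decrypts the transport stream, nothing downstream can tell whether the body was modified. The signature on the message itself can.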

The topic of REST and security is also definitely not new.

While we are on the topic of “building security in, from the start” … may I kindly ask both Microsoft and Adobe to support the WS-* security standards in their RIA technologies (Silverlight and Flex)? If Microsoft really is as serious about security as they so often claim these days, then why does Silverlight 2.0 not support web services security beyond plain SSL? It looks like a missed opportunity for which we will pay dearly in a couple of years.

On the bright side … a lot of the services Microsoft will offer on their Azure platform will have full, first-class support for claims-based access control. At least standards-based authentication is possible. They do seem to think it also solves the authorization problem … that is wrong. Perhaps more on that later.
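
As a small illustration of why claims-based authentication alone does not solve authorization (all names and claim types below are invented for the example): the token vouches for who the caller is, but the application still needs its own policy mapping those claims to permitted actions.

```python
# Claims as they might arrive in an already-validated security token:
# the identity provider has authenticated the user and vouches for these.
claims = {
    "name": "alice",
    "role": "employee",
    "department": "engineering",
}

# Authorization is a separate, application-specific decision that the
# token alone cannot make: a policy maps claims to permitted actions.
policy = {
    "approve_invoice": lambda c: c.get("role") == "manager",
    "read_specs": lambda c: c.get("department") == "engineering",
}

def is_authorized(action: str, claims: dict) -> bool:
    """Decide an action from the policy; authentication got us the
    claims, but this step is where authorization actually happens."""
    rule = policy.get(action)
    return rule is not None and rule(claims)

print(is_authorized("read_specs", claims))       # True
print(is_authorized("approve_invoice", claims))  # False: authenticated,
                                                 # but not authorized
```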