Microsoft on ‘Building Security In Maturity Model’

Last week I went to a presentation on the Building Security In Maturity Model by Gary McGraw. They interviewed about ten organizations that have a software security team and asked them what they actually do today to make software more secure. They specifically went for a data-driven approach: no “what could we do” but a 100% focus on “what do they do”. Pile all that information together, add some magic processing (read: spreadsheet magic), and you have the Building Security In Maturity Model.

Microsoft was one of the organizations participating in this effort. Steve Lipner himself wrote about how he experienced this effort and what he thinks of the outcome. One part of his article I would like to emphasise:

I’ve historically not been a fan of “maturity models” because many of them are so abstract and paper-oriented that you can rate “high” on the maturity model and still fail at whatever attribute of your products and processes (quality, timeliness, security) the model purports to measure.  In contrast, I like the BSIMM because

  • It’s specific. The measures in the BSIMM are things that a development organization actually does to produce secure software.

  • It’s real-world. Gary, Sammy, and Brian made a rule that no activity would be included in the BSIMM unless at least one of the organizations they interviewed actually performed that activity.

I have about the same opinion of maturity models as Steve. But then, I am sceptical about any model or framework that abstracts away reality so vigorously that I wonder how any organization can successfully use it to achieve improvement.

TOGAF, Zachman … obsolete?

I stumbled across this tweet from Danny Lagrouw:

Dion Hinchcliffe #qcon: “Architecture frameworks like Zachman, TOGAF, DODAF etc. aren’t keeping up with today’s web technologies.”

I found it interesting since it aligns with my own mixed feelings about these frameworks. On one hand they seem to do a good job of formalizing (enterprise) architecture, but on the other hand I wonder if any organization can successfully realize its architecture using those methods in today’s rapid and agile environments.

Design Secure Software

Repeating an article by Bruce Schneier is kind of useless. I assume everyone even remotely related to information security has his blog as the first thing to read every morning. Nevertheless, I’ll give it a go for those not yet aware of this excellent source of insight.

From his article titled IT Security: Blaming the Victim, I was happy to see these two quotes:

The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.

and (emphasis added by me)

The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.

There is absolutely not enough effort going into designing secure software and solutions. As I said before, too much energy is spent on dissecting vulnerabilities without even hinting at improvement. At the other end of the spectrum, organizations spend large amounts of money on information security plans, risk analysis and governance, but neglect to secure the software that manipulates the very assets they want to protect in the first place.
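Schneier’s point about taking security out of the user’s hands can be sketched in code. A minimal, hypothetical illustration (all names are mine, not from any real API): a settings object that enforces a security floor, so the user-facing API simply cannot lower protection below what the designer decided is safe.

```python
# Hypothetical sketch: security settings that users cannot weaken below a policy floor.

class SecuritySettings:
    # Ordered protection levels; higher index = stronger.
    LEVELS = ["off", "basic", "strict"]

    def __init__(self, minimum="basic"):
        self._min = self.LEVELS.index(minimum)   # policy floor, set by the designer
        self._current = self._min

    @property
    def level(self):
        return self.LEVELS[self._current]

    def request_level(self, level):
        """Honor the user's choice only if it does not drop below the floor."""
        requested = self.LEVELS.index(level)
        self._current = max(requested, self._min)  # clamp instead of trusting the user

s = SecuritySettings(minimum="basic")
s.request_level("off")      # the user tries to disable security...
print(s.level)              # ...but the floor holds: "basic"
s.request_level("strict")
print(s.level)              # "strict"
```

The design choice is that weakening requests are clamped rather than rejected with an error: the user is never put in a position where doing the wrong thing is even possible.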

Always exciting in infosec?

I have a couple of Tweet searches I follow. One of them tracks tweets with the keyword “infosec”. This morning I woke up with this tweet in the list:

now an excel 0day. woohoo. it’s always exciting in infosec.

This tweet is very typical: it’s always about hacks, attacks in the wild … I personally find that very disappointing; the above tweet even has something morbid about it.

Although talking about specific vulnerabilities is important, it is far more important to talk about avoiding those vulnerabilities in the first place. I see extensive articles explaining in great detail how the authors hacked Adobe PDF documents, web applications or something else commonly used. They do this with such pride and amusement that I get the feeling they are sorry they can’t use the exploits in the wild. It almost looks as if the only thing that differentiates the real authors of malware from these infosec people is the sense of ethics the second group has. Ethics that stand in the way of making money with the vulnerability found.

As I said, detailed knowledge of vulnerabilities is very important. But talking about how to do better and avoid them in the first place gives a lot more return on investment in the long term. What could the authors of (faulty) software have done to make a better product? What specific design patterns, code patterns … would have avoided the vulnerability? Which steps missing from their quality control methods could have prevented it? An article on a vulnerability is useless to me if it doesn’t offer advice on avoiding that vulnerability tomorrow. Luckily there are many authors that do, but sadly also many that don’t.

I don’t think we got this far in constructing buildings by detailing every single collapse without drawing any lessons from it. We also try to find out how we can avoid disasters in future buildings: tools, methods, procedures and guidelines are updated as a consequence. That is what makes us move forward. That is what allows us to build bigger while at the same time becoming better.

Acting on today’s vulnerabilities will not protect us tomorrow. We need to work today to prevent tomorrow’s vulnerabilities and to control the overall risk.

10 Obstacles and Opportunities for Cloud Computing

My friends at Slashdot pointed me towards this reference to a good paper on cloud computing. It is probably one of the first decent articles I have read about cloud computing. It covers real topics and real questions … instead of the usual marketing gibberish. I am especially pleased they mention obstacles like “data lock-in” and “data confidentiality and auditability”. I wrote about some of these topics before: here and here.

Direct link to the PDF “Above the Clouds: A Berkeley View of Cloud Computing”.

UAC seems almost useless in Windows 7

The recent turmoil over UAC seemed to be settled by Microsoft last week (my take on the issue). But now it’s time to question UAC again. This article explains that Microsoft is going in the wrong direction with UAC. From an annoying dialog that provided some security, it has degraded to just an annoying dialog.

Microsoft is now betting on what they call “trusted processes”: processes that are considered trusted and thus don’t trigger a UAC dialog. A lot of those processes (like rundll32) are specifically designed to run external (untrusted) code:

In short, trusting executables is a poor policy, because so many executables can be encouraged to run arbitrary code. There is some irony in Microsoft’s behavior to use a trusted executable model; the company knows damn well that trusted executables aren’t safe, and uses this very argument to justify the UAC behavior in Vista. A system using trusted executables will only be secure if all of those executables are unable to run arbitrary code (either deliberately or through exploitation).

In other words (from the article mentioned above):

So, in spite of the most recent blog post, this remains a poorly-designed feature. UAC is now only as strong as the weakest auto-elevating program.
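The flaw can be sketched abstractly (a hypothetical toy model in plain Python, not actual Windows behavior): if a “trusted” program will run whatever code it is pointed at, then basing the elevation decision on the program’s identity buys nothing.

```python
# Hypothetical sketch of the "trusted executable" flaw: the launcher itself is
# trusted (it auto-elevates, no UAC prompt), but it runs whatever it is handed,
# so the trust decision says nothing about the code that actually executes.

TRUSTED = {"rundll32"}   # executables assumed to auto-elevate without a prompt

def launch(executable, payload):
    """Elevation is decided by the executable's name, not by the payload."""
    elevated = executable in TRUSTED
    result = payload()                    # arbitrary code supplied by the caller
    return elevated, result

# An attacker does not need a trusted payload, only a trusted host process:
elevated, result = launch("rundll32", lambda: "attacker code ran")
print(elevated, result)   # True attacker code ran
```

The sketch makes the weakest-link argument concrete: as long as one auto-elevating host accepts arbitrary payloads, the whole trust model reduces to that host.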

I wonder what happened to Microsoft’s security drive, given these developments in Windows 7.

Authorization Management and Attestation

A good read on authorization management from Kuppinger Cole (the author is the Kuppinger half, Martin). One paragraph I could relate to:

There is another interesting aspect of Authorization Management: Dynamic Authorization Management. Most of today’s approaches are static, e.g. they use provisioning tools or own interfaces to statically change mappings of users to groups, profiles, or roles in target systems. But there are many business rules which can’t be enforced statically. Someone might be allowed to do things up to a defined limit. Some data – for example some financial reports – have access restrictions during a quiet period. And so on. That requires authorization engines which are used as part of an externalization strategy (externalizing authentication, authorization, auditing and so on from applications) which provide the results of a dynamic authorization decision, according to defined rules, on the fly.

Most organizations I know are kind of stuck in static authorization management. It’s all about groups (and roles) that need to be populated by IAM tools, even when the rules to do so lean towards dynamic authorization management. Sometimes they simply have to: platforms like Microsoft SharePoint depend largely on groups to perform authorization.
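The dynamic rules Martin describes (approval limits, quiet periods) can be sketched as a small externalized decision engine. This is a hypothetical illustration of the idea, not any real product’s API: the application asks the engine for a decision at request time instead of relying on static group membership.

```python
# Hypothetical sketch of an externalized, dynamic authorization engine:
# decisions are computed on the fly from business rules, not read from groups.

from datetime import date

class AuthorizationEngine:
    def __init__(self, approval_limit, quiet_periods):
        self.approval_limit = approval_limit   # max amount a user may approve
        self.quiet_periods = quiet_periods     # list of (start, end) date ranges

    def may_approve(self, user_role, amount):
        # Rule 1: someone might be allowed to do things up to a defined limit.
        return user_role == "manager" and amount <= self.approval_limit

    def may_read_report(self, today):
        # Rule 2: financial reports have access restrictions during a quiet period.
        return not any(start <= today <= end for start, end in self.quiet_periods)

engine = AuthorizationEngine(
    approval_limit=10_000,
    quiet_periods=[(date(2009, 4, 1), date(2009, 4, 21))],
)

print(engine.may_approve("manager", 5_000))       # True: within the limit
print(engine.may_approve("manager", 50_000))      # False: exceeds the limit
print(engine.may_read_report(date(2009, 4, 10)))  # False: inside the quiet period
```

Note that no static group could express either rule: the answer depends on the amount and the date, which is exactly why provisioning groups up front falls short.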

Also note the “European Identity Conference” organized by Kuppinger Cole. I was lucky to attend the first edition (as speaker) and can warmly recommend this conference to anyone interested. Atmosphere is great, content in-depth and a high concentration of (identity) brains. Now, do I qualify for free registration as a blogger ? 🙂

LDAP Referential Integrity

An old issue with LDAP servers has found the spotlight again: referential integrity. This time it’s a call for attention made by James:

I also asked the question on How come there is no innovation in LDAP and was curious why no one is working towards standards that will allow for integration with XACML and SPML. I would be happy if OpenDS or OpenLDAP communitities figured out more basic things like incorporating referential integrity.

Pat pointed James to what he thinks is proof of support for referential integrity in LDAP (OpenDS, OpenLDAP and any Sun derivative):

For some reason, James has a bee in his bonnet over referential integrity and LDAP. I’m really not sure where he’s coming from here – both OpenDS and OpenLDAP offer referential integrity (OpenDS ref int doc, OpenLDAP ref int doc), and Sun Directory Server has offered it for years (Sun Directory Server ref int doc). Does this answer your question, James, or am I missing something?

I can’t answer for James of course, but if I had been the one asking that question … no Pat, it does not answer my question. Well, it kind of does, since it confirms that those LDAP incarnations have limited to no support for decent referential integrity. Let’s follow one of Pat’s links and see what it says (Sun Directory Server ref int doc):

When the referential integrity plug-in is enabled it performs integrity updates on specified attributes immediately after a delete, rename, or move operation. By default, the referential integrity plug-in is disabled.

Whenever you delete, rename, or move a user or group entry in the directory, the operation is logged to the referential integrity log file:

instance-path/logs/referint

After a specified time, known as the update interval, the server performs a search on all attributes for which referential integrity is enabled, and matches the entries resulting from that search with the DNs of deleted or modified entries present in the log file. If the log file shows that the entry was deleted, the corresponding attribute is deleted. If the log file shows that the entry was changed, the corresponding attribute value is modified accordingly.

So it seems that Sun Directory Server lets you delete a user, but only promises to do its very best to delete any references to this user within an “update interval”. It does not mention what a read performed after the deletion, but before the plug-in kicks in, will see. Will it still see the user as a member of a group although the user is deleted? I am pretty sure it will. This is, at least for me, enough proof that this functionality does not offer referential integrity. At best it offers some kind of deferred cascading deletes (or updates), with no semantics for reads done in the interval between the original operation and these cascaded deletes and updates.
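The stale-read window is easy to demonstrate with a toy model. A sketch of the deferred behavior the documentation describes, simplified in plain Python with no real LDAP involved: deletes are only logged, the cascade happens later, and a read in between still sees the dangling membership.

```python
# Toy model of deferred "referential integrity": the delete is logged and
# cascaded after an update interval, leaving a window with dangling references.

class DeferredDirectory:
    def __init__(self):
        self.users = set()
        self.groups = {}          # group name -> set of member DNs
        self.referint_log = []    # stand-in for the referential integrity log file

    def delete_user(self, dn):
        self.users.discard(dn)
        self.referint_log.append(dn)   # cleanup is only *logged*, not performed

    def run_referint_plugin(self):
        """What happens after the update interval: cascade the logged deletes."""
        for dn in self.referint_log:
            for members in self.groups.values():
                members.discard(dn)
        self.referint_log.clear()

d = DeferredDirectory()
d.users.add("uid=alice")
d.groups["admins"] = {"uid=alice"}

d.delete_user("uid=alice")
print("uid=alice" in d.groups["admins"])   # True: the stale read window
d.run_referint_plugin()
print("uid=alice" in d.groups["admins"])   # False: only now is the reference gone
```

Any client reading the group between the two calls sees a member that no longer exists, which is exactly the gap that true referential integrity would close.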

Does this mean an LDAP server is something to avoid in any production environment? Absolutely not! In fact, I am not even sure an LDAP server should offer “real” referential integrity at all. If you need those kinds of guarantees, you are not far from needing full transaction support either, so why not upgrade to a relational database? Just my 2 cents of course.

To Sun (and any other LDAP implementer): what would the impact on read/write performance be if you implemented full referential integrity in LDAP?

How to screw up your services architecture

I almost don’t dare to use the acronym SOA anymore. According to the latest news it is dead, and probably buried by now. But … long live services! And indeed, SOA has become a synonym for Grand Projects (with governance, business process improvement, methodology, enterprise integration … as small side projects). Needless to say, these projects either fail (for those who are lucky) or continue, without delivering much, until the end of time (for those who are unlucky). People forgot it’s all about a simple yet powerful architecture that uses services as a mental model.

Even without SOA on the whiteboard, focusing on “just services”, most organizations manage to screw that part up too. For them, services = web services = SOAP. So they start to spit out “service” after “service”. Of course, in reality they are just spitting out SOAP binding after SOAP binding for .NET or Java methods. It should be considered a crime to call these “services”. Language and tool vendors have made it very easy for these organizations to create services in this way: it takes a couple of mouse clicks and a wizard or two to turn any class into a full-fledged “service” (read: expose it with SOAP).

This library, bravely called “ServiceLayer”, even takes this a step further … Forgot to create a “service” from your classes? No worries, you can now create a service from them without recompiling. You just browse the compiled code and select the methods you want to expose as a service. If this continues, the very word “service” will become so tainted we can’t even use it anymore.
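What such a wizard effectively does can be sketched in a few lines (a hypothetical illustration, no SOAP stack involved, all names mine): take any class and expose each public method as a remotely callable operation. The result is method-call plumbing, not a designed service contract.

```python
# Hypothetical sketch of "class -> service" wizardry via reflection:
# every public method becomes an 'operation', with no contract design at all.

class CustomerLogic:
    def get_balance(self, customer_id):
        return {"customer_id": customer_id, "balance": 42.0}

def expose_as_service(obj):
    """One 'wizard click': collect every public callable as an operation."""
    return {
        name: getattr(obj, name)
        for name in dir(obj)
        if not name.startswith("_") and callable(getattr(obj, name))
    }

service = expose_as_service(CustomerLogic())
print(sorted(service))                       # ['get_balance'] -- the class IS the contract
print(service["get_balance"](7)["balance"])  # plain RPC dressed up as a "service"
```

Note what is missing: no interface designed for consumers, no message contract, no versioning, no thought about granularity. The internal class signature leaks straight out as the “service”.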

Newsflash … top 10 lists don’t work!

We all know at least a couple of top 10 lists of security bugs. Probably the most famous is the OWASP Top 10. This excellent article argues that these lists have the potential to do more harm than good. Some typical harmful effects (and I have witnessed these first-hand):

  • People judge the quality of their code based on these lists, ignoring that a list is just that: a subset of things to avoid.
  • Instead of learning how to write good code, developers learn how to write code that avoids some common errors.
  • The lists do not educate people: new types of security bugs are only avoided once they appear on an updated version of the list.

I do agree with the article linked above: top 10 lists are not as innocent and helpful as they look. But I also have to admit that waving a top 10 list has helped me in the past to sell the idea of secure coding.

There is one argument of the article I would like to highlight: focusing on lists of security bugs diverts attention from good design. It is perfectly possible, in fact very likely, to have a very sound design that delivers on all requirements, functional and non-functional, but fails miserably when security and risk are taken into consideration. Each time I had the opportunity to analyse a design in terms of its ability to be secure and to behave predictably in the face of (partial) failure, I found that it suffered from serious risks. Risks with a high impact and a much higher likelihood of happening than hackers invading your network. Statistically, it is much more likely that a software component starts to spit out erroneous data to its environment than it is that hackers invade your network and delete all your business data. Both have the same impact though. I bet you spend more money on keeping the hackers out than on designing for failure.

How many architects or developers take into account failure of components? What about partial failure? Does your solution support you in detecting and recovering from those failures? Design for failure!
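Designing for failure can start very small. A hypothetical sketch (all names mine): instead of trusting a component’s output, wrap it with a plausibility check and an explicit, safe fallback, so a failing component cannot silently spit erroneous data into its environment.

```python
# Hypothetical sketch of designing for (partial) failure: detect bad output
# and recover with a known-safe value instead of propagating garbage.

class ComponentFailure(Exception):
    """Raised when the wrapped component crashes outright."""

def guarded(component, validate, fallback):
    """Return a wrapper that detects bad output and recovers explicitly."""
    def call(*args):
        try:
            result = component(*args)
        except Exception as exc:
            raise ComponentFailure("component crashed") from exc
        if not validate(result):
            return fallback          # recover with a known-safe value...
        return result                # ...instead of propagating erroneous data
    return call

def flaky_sensor(reading):
    return reading                   # imagine this sometimes returns nonsense

safe_sensor = guarded(
    flaky_sensor,
    validate=lambda r: 0 <= r <= 100,   # plausibility check on the output
    fallback=None,                      # callers can tell "no data" from bad data
)

print(safe_sensor(55))    # 55: a valid reading passes through
print(safe_sensor(-999))  # None: erroneous data is caught, not propagated
```

The point is not the wrapper itself but the mindset: every component boundary is a place where failure, partial or total, must be detected and handled by design.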