Design Secure Software

Repeating an article by Bruce Schneier is kind of useless. I assume everyone even remotely related to information security has his blog as the first thing to read every morning. Nevertheless I’ll give it a go for those not yet aware of this excellent source of insight.

From his article titled IT Security: Blaming the Victim, I was happy to see these two quotes:

The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.

and (emphasis added by me)

The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.

There is absolutely not enough effort going into designing secure software and solutions. As I said before, too much energy is spent on dissecting vulnerabilities without even hinting at improvements. At the other end of the spectrum, organisations spend large amounts of money on information security plans, risk analysis and governance, but neglect to secure the software that manipulates the very assets they want to protect in the first place.

Always exciting in infosec?

I have a couple of Twitter searches I follow. One of them tracks tweets with the keyword “infosec”. This morning I woke up to this tweet in the list:

now an excel 0day. woohoo. it’s always exciting in infosec.

This tweet is very typical: it’s always about hacks, attacks in the wild … I personally find that very disappointing; the above tweet even has a morbid ring to it.

Although talking about specific vulnerabilities is important, it is a lot more important to talk about avoiding those vulnerabilities in the first place. I see extended articles explaining in great detail how they hacked Adobe PDF documents, web applications or something else in common use. They do this with such pride and amusement that I get the feeling they are sorry they can’t use the exploits in the wild. It almost looks as if the only thing that differentiates the real authors of malware from these infosec people is the sense of ethics the second group has. Ethics that stand in the way of making money with the vulnerability found.

As I said, a detailed knowledge of vulnerabilities is very important. But talking about how to do better and avoid them in the first place gives a lot more return on investment in the long term. What could the authors of (faulty) software have done to make a better product? What specific design patterns, code patterns … would have avoided the vulnerability? Which steps in their quality control methods are missing that could have prevented the vulnerability? An article on a vulnerability is useless to me if it doesn’t mention advice to avoid that vulnerability tomorrow. Luckily there are many authors that do, but sadly also many that don’t.
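To make that concrete, here is a minimal sketch of the kind of code pattern I mean, using the classic SQL injection example (the table and queries are hypothetical; any parameterized database API would do):

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is concatenated straight into the SQL,
    # so a username like "' OR '1'='1" rewrites the query itself.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer pattern: a parameterized query. The driver treats the input
    # strictly as data, never as SQL, so the injection attempt is inert.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

An article that dissects the bug and also shows the second pattern teaches its readers something that outlives the specific vulnerability.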

I don’t think we got this far in constructing buildings by detailing every single collapse without drawing any lessons from it. We also try to find out how we can avoid disasters in any future building: tools, methods, procedures and guidelines are updated as a consequence. That is what makes us move forward. That is what allows us to build bigger while at the same time becoming better.

Acting on today’s vulnerabilities will not protect us tomorrow. We need to work today on preventing tomorrow’s vulnerabilities and on controlling the overall risks.

UAC seems almost useless in Windows 7

The recent turmoil on UAC seemed to be settled by Microsoft last week (my take on the issue). But now it’s time to question UAC again. This article explains that Microsoft is going in the wrong direction with UAC. From an annoying dialog that gives some security, it has degraded to just an annoying dialog.

Microsoft is now betting on what they call “trusted processes”: processes that are considered trusted so they don’t trigger a UAC dialog. A lot of those processes (like rundll32) are specifically designed to run external (untrusted) code:

In short, trusting executables is a poor policy, because so many executables can be encouraged to run arbitrary code. There is some irony in Microsoft’s behavior to use a trusted executable model; the company knows damn well that trusted executables aren’t safe, and uses this very argument to justify the UAC behavior in Vista. A system using trusted executables will only be secure if all of those executables are unable to run arbitrary code (either deliberately or through exploitation).

In other words (from the article mentioned above):

So, in spite of the most recent blog post, this remains a poorly-designed feature. UAC is now only as strong as the weakest auto-elevating program.
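To see why this is a poor policy, consider a toy analogue in Python (emphatically not how rundll32 works internally, just the same shape): a launcher that is itself “trusted” but whose entire job is to run whatever code it is pointed at. Auto-elevating such a program effectively elevates anything it is told to load.

```python
import importlib
import sys

# A "trusted" launcher in the spirit of rundll32: the launcher itself may be
# signed and whitelisted, but its entire job is to load and run whatever
# module and function the caller names on the command line, e.g.:
#
#   python launcher.py evil_module do_harm
#
module_name, func_name = sys.argv[1], sys.argv[2]
module = importlib.import_module(module_name)  # loads arbitrary external code
getattr(module, func_name)()                   # ... and runs it
```

Trusting the launcher tells you nothing about what it will run, which is exactly the article’s point.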

I wonder what happened to Microsoft’s security drive, given these developments in Windows 7.

Authorization Management and Attestation

A good read on authorization management from Kuppinger Cole (the author being the Kuppinger half, Martin). One paragraph that I could relate to:

There is another interesting aspect of Authorization Management: Dynamic Authorization Management. Most of today’s approaches are static, e.g. they use provisioning tools or own interfaces to statically change mappings of users to groups, profiles, or roles in target systems. But there are many business rules which can’t be enforced statically. Someone might be allowed to do things up to a defined limit. Some data – for example some financial reports – have access restrictions during a quiet period. And so on. That requires authorization engines which are used as part of an externalization strategy (externalizing authentication, authorization, auditing and so on from applications) which provide the results of a dynamic authorization decision, according to defined rules, on the fly.

Most organizations I know are kind of stuck in static authorization management. It’s all about groups (and roles) that need to be populated by IAM tools, even when the rules for populating them lean towards dynamic authorization management. Sometimes they simply have no choice: platforms like Microsoft SharePoint depend largely on groups to perform authorization.
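To make the distinction concrete, here is a minimal sketch of an externalized, dynamic authorization decision in the spirit of the quote; the rules, names and limits are all hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Request:
    user: str
    action: str
    amount: float  # e.g. the value of the transaction being attempted
    today: date

# Hypothetical rule data; in a real deployment this lives in a policy store,
# not in the application.
SPENDING_LIMITS = {"alice": 10_000.0, "bob": 1_000.0}
QUIET_PERIOD = (date(2009, 3, 20), date(2009, 4, 15))  # before quarterly results

def is_authorized(req: Request) -> bool:
    """Externalized decision point: applications call this at request time
    instead of hard-coding checks on static group membership."""
    # Rule 1: someone might be allowed to do things up to a defined limit.
    if req.amount > SPENDING_LIMITS.get(req.user, 0.0):
        return False
    # Rule 2: some data has access restrictions during a quiet period.
    if (req.action == "read_financial_report"
            and QUIET_PERIOD[0] <= req.today <= QUIET_PERIOD[1]):
        return False
    return True
```

The point is that the application asks a decision point on the fly instead of relying on group memberships provisioned in advance.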

Also note the “European Identity Conference” organized by Kuppinger Cole. I was lucky to attend the first edition (as a speaker) and can warmly recommend this conference to anyone interested. The atmosphere is great, the content in-depth, and the concentration of (identity) brains high. Now, do I qualify for free registration as a blogger? 🙂

LDAP Referential Integrity

An old issue with LDAP servers has found the spotlight again: referential integrity. This time it’s a call for attention made by James:

I also asked the question on How come there is no innovation in LDAP and was curious why no one is working towards standards that will allow for integration with XACML and SPML. I would be happy if OpenDS or OpenLDAP communitities figured out more basic things like incorporating referential integrity.

Pat pointed James to what he thinks is proof of support for referential integrity in LDAP (OpenDS, OpenLDAP and any Sun derivative):

For some reason, James has a bee in his bonnet over referential integrity and LDAP. I’m really not sure where he’s coming from here – both OpenDS and OpenLDAP offer referential integrity (OpenDS ref int doc, OpenLDAP ref int doc), and Sun Directory Server has offered it for years (Sun Directory Server ref int doc). Does this answer your question, James, or am I missing something?

I can’t answer for James of course, but if I had been asking that question … no Pat, it does not answer my question. Well, it kind of does, since it confirms that those LDAP incarnations have limited to no support for decent referential integrity. Let’s follow one of Pat’s links and see what it says (Sun Directory Server ref int doc):

When the referential integrity plug-in is enabled it performs integrity updates on specified attributes immediately after a delete, rename, or move operation. By default, the referential integrity plug-in is disabled.

Whenever you delete, rename, or move a user or group entry in the directory, the operation is logged to the referential integrity log file:

instance-path/logs/referint

After a specified time, known as the update interval, the server performs a search on all attributes for which referential integrity is enabled, and matches the entries resulting from that search with the DNs of deleted or modified entries present in the log file. If the log file shows that the entry was deleted, the corresponding attribute is deleted. If the log file shows that the entry was changed, the corresponding attribute value is modified accordingly.

So it seems that Sun Directory Server lets you delete a user, but only promises to do its very best to delete any references to this user within an “update interval”. It does not mention what a read performed after the deletion, but before the plug-in kicks in, will see. Will it still see the user as a member of a group although the user is deleted? I am pretty sure it will. That is, at least for me, enough proof that this functionality does not offer referential integrity. At best it offers some kind of deferred cascading deletes (or updates), with no semantics for reads done in the interval between the original operation and these cascaded deletes and updates.
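To illustrate the window this opens, here is a hypothetical sketch using the ldap3 Python library (the server, credentials and DNs are made up): delete a user, then immediately read the group it belonged to, before the update interval has elapsed.

```python
from ldap3 import Server, Connection

# Hypothetical server and DNs, purely for illustration.
server = Server("ldap://directory.example.com")
conn = Connection(server, user="cn=admin,dc=example,dc=com",
                  password="secret", auto_bind=True)

user_dn = "uid=jdoe,ou=people,dc=example,dc=com"
group_dn = "cn=finance,ou=groups,dc=example,dc=com"

conn.delete(user_dn)  # the user entry itself is gone immediately

# A read issued before the plug-in's update interval has elapsed:
conn.search(group_dn, "(objectClass=groupOfNames)", attributes=["member"])
members = conn.entries[0].member.values

# With deferred cascading deletes, the stale DN can still be in the group,
# even though the entry it points to no longer exists.
print(user_dn in members)

conn.unbind()
```

If that final print can ever show True, the directory has, for that window, a member attribute pointing at an entry that no longer exists; exactly the deferred behaviour described above.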

Does this mean an LDAP server is something to avoid in any production environment? Absolutely not! In fact, I am not even sure an LDAP server should offer “real” referential integrity at all. If you need those kinds of guarantees, you are not far from needing full transaction support either, so why not upgrade to a relational database? Just my 2 cents of course.

To Sun (and any other LDAP implementer): what would the impact on LDAP read/write performance be if you implemented full referential integrity?

Newsflash … top 10 lists don’t work!

We all know at least a couple of Top 10 lists of security bugs. Probably one of the most famous is the OWASP Top 10. This excellent article argues that these lists have the potential to do more harm than good. Some typical harmful effects could be (and I have witnessed these first hand):

  • People judge the quality of their code based on these lists, thereby ignoring that the list is just that: a subset of things to avoid.
  • Instead of learning how to write good code, developers learn how to write code that avoids some common errors.
  • It does not educate people: new types of security bugs are only avoided once they appear on an updated version of the list.

I do agree with the article linked above: top 10 lists are not as innocent and helpful as they look. But I also have to admit that waving a top 10 list has helped me in the past to sell the idea of secure coding.

There is one argument of the article I would like to highlight: focusing on lists of security bugs diverts attention from good design. It is perfectly possible, in fact very likely, to have a very sound design that delivers on all requirements, functional and non-functional, but fails miserably when security and risk are taken into consideration. Each time I had the opportunity to analyse a design in terms of its ability to be secure and behave predictably in the face of (partial) failure, I found that it suffered from serious risks. Risks with a high impact and a much higher likelihood of happening than hackers invading your network. Statistically it is much more likely that a software component starts to spit out erroneous data to its environment than it is that hackers invade your network and delete all your business data. Both have the same impact though. I bet you spend more money on keeping the hackers out than on designing for failure.

How many architects or developers take into account failure of components? What about partial failure? Does your solution support you in detecting and recovering from those failures? Design for failure!
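As a sketch of what this can mean at the code level, consider wrapping a call to an upstream component so that both total failure and partial failure (an answer that arrives but is wrong) are detected and recovered from; the component and its plausibility bounds here are hypothetical:

```python
import logging

def read_sensor() -> float:
    """Hypothetical upstream component that may fail or misbehave."""
    raise NotImplementedError

def safe_read_sensor(fallback: float, lo: float = -50.0, hi: float = 100.0) -> float:
    try:
        value = read_sensor()
    except Exception:
        # Total failure: the component is down. Recover with a known-safe value.
        logging.exception("sensor read failed, using fallback")
        return fallback
    if not (lo <= value <= hi):
        # Partial failure: the component answered, but the answer is not
        # plausible. Detect it here instead of propagating bad data.
        logging.error("sensor value %s outside plausible range, using fallback", value)
        return fallback
    return value
```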

Disturbances in the cloud

Cloud computing is cool, no doubt about that. Never before have so many good-looking, futuristic schematics been made in Visio. Thousands of presentations, workshops and even conferences have been held on the subject.

One question, however, has not been clearly answered yet … what about data ownership? What about the privacy of that data? When your applications are running in the cloud, you are also handing over your data to whoever is running the data center. How sure are you that they protect this data as they should? What about these situations:

  1. Your cloud partner goes out of business and your data becomes a valuable asset that can be sold to pay off debt. How well are you protected from this scenario? Or … what are the guarantees about confidentiality? Think SalesForce …
  2. Your cloud partner goes out of business without any warning; your applications are offline, your data is not accessible. Worst case you get a couple of days’ notice, best case a couple of weeks. Does your disaster recovery plan take this into account? How fast can you move to a new cloud partner or your own data center? How much data will you lose? How recent is the data you go online with after recovery?
  3. Your cloud partner decides to disable a feature in their application, a feature you depend on. Does your disaster recovery plan take this into account? This is not far-fetched: in a small way this is what happened when Microsoft decided to disable anonymous comments on their Live Blog. They even did this retroactively, and so revealed identity information of authors who previously had been anonymous.

None of these scenarios is purely technical in nature, and none of them is far-fetched. You can probably think of many more realistic and sure-to-happen situations.

In relation to the third scenario … how many companies run application versions that are far behind the latest public version, purely because of functionality or compatibility they depend on? At least all of the companies I have come into contact with are in this situation. If you run everything on your own servers you have a greater deal of control than you might imagine at first. Companies should do their homework when moving some of this into the cloud; they are often giving up far more control than they think, and far more than they want to. Contracts alone won’t solve it either.

Prank calls and countermeasures

Gunnar Peterson linked to this fun but interesting story:

“He sounded just like Obama,” she said on Thursday, referring to President-elect Barack Obama.
Sensing she was the victim of a spoof by a South Florida radio station, she promptly disconnected the call.

Trouble was, it was Obama.

A chagrined Ros-Lehtinen told the Fox News Channel that she also hung up on Obama’s chief of staff, Rahm Emanuel, when he called her back to explain it really was the next president on the line.

Both Emanuel and Obama tried to convince her the call was for real.

“Guys, it’s a great prank, really,” she said she told them.

It took a subsequent call from California Democratic Rep. Howard Berman, chairman of the House Foreign Affairs Committee, to finally convince Ros-Lehtinen to talk to Obama.

To convince her that it really was Berman, she said she told him, “Give me the private joke that we share.”

This type of prank call, where someone calls you and pretends to be a high-ranking official in some country, used to be a low-probability, medium-impact risk. Low probability, since there was a line most people did not cross. We all laughed at prank calls where a radio presenter pretended to be some unknown person wanting to order something so extraordinary that it was hard to believe it was true … but funny. But pretending to be the President of France or the President-elect of the United States, that was a whole different story. That was simply not done; who knows what the repercussions would be! It was also a medium-impact risk because in most cases the victim of the prank call was not a VIP or anything, often just a receptionist at a company. The impact was all personal and completely forgotten after a couple of weeks.

But recently some people changed all this. Today this is a medium-to-high-probability, high-impact risk. Certainly a higher probability, since others have done it and got away with it. Surely a higher impact as well; just look at what happened to Sarah Palin. It’s not something that is forgotten after a couple of weeks; this is something that sticks to your career now.

But my congratulations to Ros-Lehtinen, who not only recognized the change in the risk profile but also employed a simple yet effective countermeasure: the use of a shared secret.
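Her “give me the private joke that we share” is, in effect, a challenge-response check on a shared secret. Here is a minimal sketch of the same idea using an HMAC (the secret and the transport around it are illustrative only):

```python
import hashlib
import hmac
import secrets

# Agreed out of band, never sent over the channel (the "private joke").
SHARED_SECRET = b"the-private-joke-we-share"

def challenge() -> bytes:
    # The verifier sends a fresh random nonce so old answers cannot be replayed.
    return secrets.token_bytes(16)

def respond(secret: bytes, nonce: bytes) -> bytes:
    # The caller proves knowledge of the secret without revealing it.
    return hmac.new(secret, nonce, hashlib.sha256).digest()

def verify(secret: bytes, nonce: bytes, answer: bytes) -> bool:
    expected = hmac.new(secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, answer)

# The callee challenges, the caller responds, the callee verifies.
nonce = challenge()
assert verify(SHARED_SECRET, nonce, respond(SHARED_SECRET, nonce))
```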

Fly secure, don’t drink

We all know how these days you are not allowed to bring any significant amount of liquid onto an airplane. Every liquid you do bring with you is swiftly taken away. Bruce Schneier has an excellent blog entry on the usefulness of this rule.

In Belgium we have a television series, “Airport Security”, about the day-to-day aspects of security at our national airport (“Brussels Airport”). It is actually a spin-off from similar US and UK shows. In one of the episodes they showed how they confiscated liquids. After a couple of days all the bottles amounted to a fairly large pile, all nicely tucked away in plastic storage boxes. Their contents are, however, not safely disposed of (after all, they could contain explosives); everything is given away to a charity organization, which then distributes it to people in need.

Although I support the fact that they want to help charity organizations, it seems a bit illogical to me. One minute they treat these bottles as potentially dangerous, confiscating them without exception; the next minute their risk level seems to drop to zero and they are handed out to charity.

As Bruce states in the above-mentioned article:

If something is dangerous, treat it as dangerous and treat anyone who tries to bring it on as potentially dangerous. If it’s not dangerous, then stop trying to keep it off airplanes.

So either we stop confiscating those liquids, or we start handling them as if they really carry a risk: treat anyone who tries to bring them on board as potentially dangerous and safely dispose of the liquids. Our current procedures are just stupid, annoying, incomplete and don’t add value to protecting those who travel by air.