The recent defacement of the United Nations web site is a prime example of why we programmers shouldn’t trust ourselves to write secure code – at least not without our computers’ help. The U.N. web site, according to Slashdot’s coverage of the incident, was defaced by way of a common, well-known attack: SQL injection. What’s interesting is that programmers can render this attack harmless by employing simple, readily available programming tools such as placeholders and prepared statements. Why, then, are so many web sites – including, apparently, the U.N. site – still vulnerable?
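For readers who haven’t seen the fix before, here is a minimal sketch of the difference, written in Haskell against the HDBC database library; the table and column names are invented for illustration.

    import Database.HDBC (IConnection, SqlValue, quickQuery', toSql)

    -- UNSAFE: splicing user input into the query text. A name such as
    -- "x' OR '1'='1" rewrites the meaning of the query itself.
    findUserUnsafe :: IConnection conn => conn -> String -> IO [[SqlValue]]
    findUserUnsafe conn name =
      quickQuery' conn
        ("SELECT id FROM users WHERE name = '" ++ name ++ "'") []

    -- SAFE: a placeholder keeps the query text fixed; the driver sends
    -- the user input to the database as data, never as SQL.
    findUserSafe :: IConnection conn => conn -> String -> IO [[SqlValue]]
    findUserSafe conn name =
      quickQuery' conn "SELECT id FROM users WHERE name = ?" [toSql name]

The safe version is barely more work than the unsafe one, which makes the question all the more pointed.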
Some say it’s because the programmers of these sites are incompetent, but that argument ignores that programmers are human, while the security tools we give them offer meaningful protection only if wielded with inhuman perfection. Having the tools to plug security holes, even if the tools are simple to use and readily available, is not enough to ensure that every single security hole will be identified, let alone plugged. Even the most experienced programmer can be expected to overlook a hole now and then. Unfortunately, one hole is all it takes.
That’s because security is not like other software-quality challenges: its costs are fundamentally asymmetric. For the attacker, the bad guy, the challenge is to find just a single exploitable hole. For us, the good guys, the challenge is to achieve perfection: to plug all of the holes in our code, every single one. And attackers, unlike regular users, can be expected to probe our code until they find a hole to exploit.
How then do we ensure that we have plugged every single hole in our code? Testing isn’t sufficient: we can easily overlook holes when writing tests – a perfectly human error. We could supplement testing with code reviews, painstakingly searching for remaining holes while enforcing the use of hole-preventing best practices, but reviews are expensive and, again, subject to human error. A better approach, both less costly and more reliable, is to delegate this burden to our computers, which can do the job correctly, every single time.
This kind of delegation is possible today with modern static type systems. For example, in A Type-Based Solution to the Strings Problem, I offered a tiny “safe strings” library for the Haskell programming language. The library takes advantage of Haskell’s powerful type system to detect unsafe string interactions at compile time. If we faithfully build our code on top of the library, and our code compiles without error, we can be assured that our code is free – completely free – of SQL-injection (and XSS) holes.
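To give a flavor of the technique – what follows is only a sketch of the general idea, not the library from that article – the trick is to give strings from different trust domains different types, keep the constructors hidden behind a module boundary, and make escaping the only bridge between them.

    -- Sketch only: in a real library these constructors would not be
    -- exported, so client code could not forge either kind of string.
    newtype SQLFragment = SQLFragment String
    newtype UserText    = UserText String

    -- The only way to turn untrusted text into SQL is to escape it,
    -- so "forgot to escape" is no longer a mistake we can make.
    escapeForSQL :: UserText -> SQLFragment
    escapeForSQL (UserText s) = SQLFragment ("'" ++ concatMap esc s ++ "'")
      where esc '\'' = "''"
            esc c    = [c]

    -- SQL fragments combine only with other SQL fragments; appending
    -- a UserText directly is a compile-time type error.
    appendSQL :: SQLFragment -> SQLFragment -> SQLFragment
    appendSQL (SQLFragment a) (SQLFragment b) = SQLFragment (a ++ b)

If the only functions that ultimately send a SQLFragment to the database live inside the trusted module, the compiler’s type checker, not the programmer’s vigilance, enforces the escaping discipline.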
While this result is indeed quite beautiful, it certainly isn’t novel. Researchers have been proving interesting properties via type systems for a long time. As Oleg Kiselyov and Chung-chieh Shan pointed out in a comment on my earlier article, the foundational idea is over three decades old.
More recently, Kiselyov and Shan have extended the idea to guarantee more-interesting properties using a trusted kernel and types that represent lightweight static capabilities. The kernel, which is small enough to be reasoned about and formally verified, carefully hands out capabilities to untrusted application code. The untrusted code, in turn, presents the capabilities back to the kernel to invoke operations, which, thanks to the kernel’s trustworthiness, are guaranteed to be safe. (My safe-string library can be seen as a trivial implementation of this programming style.)
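To make that description concrete, here is a toy rendering of the style in Haskell – my own illustration, not Kiselyov and Shan’s actual encoding – in which the capability is an array index that the kernel has certified to be in bounds.

    {-# LANGUAGE RankNTypes #-}

    import Data.Array (Array, listArray, bounds, (!))

    -- The "kernel": in real code, only the abstract types and the
    -- functions below would be exported; the constructors stay hidden.
    newtype CArray s a = CArray (Array Int a)
    newtype CIndex s   = CIndex Int

    -- The kernel brands each array with a fresh type variable s, so a
    -- certified index cannot be replayed against some other array.
    withCArray :: [a] -> (forall s. CArray s a -> r) -> r
    withCArray xs k = k (CArray (listArray (0, length xs - 1) xs))

    -- The only way to obtain the capability: ask the kernel to check.
    checkIndex :: CArray s a -> Int -> Maybe (CIndex s)
    checkIndex (CArray a) i
      | lo <= i && i <= hi = Just (CIndex i)
      | otherwise          = Nothing
      where (lo, hi) = bounds a

    -- Untrusted code presents the capability back to the kernel. The
    -- type already guarantees the index is valid, so the kernel could
    -- safely skip the bounds check that (!) still performs here.
    fetch :: CArray s a -> CIndex s -> a
    fetch (CArray a) (CIndex i) = a ! i

Untrusted code can call checkIndex and fetch all it likes, but it has no way to manufacture a CIndex on its own, so every access the kernel performs on its behalf is, by construction, in bounds.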
When static type systems are used in this way, they don’t merely catch typos and bugs that good testing would have caught as a matter of course, but offer programmers guarantees that would have been impractical to obtain any other way.¹ If you consider security important, you might bear this fact in mind when choosing languages for your next project.
Going further, the security benefits of rich static type systems are only now starting to trickle into mainstream industry. As libraries like “safe strings” and idioms like static capabilities become more familiar and get woven into future generations of development frameworks, we can expect marked improvements in the security and robustness of our applications.
In the not-too-distant future, perhaps, we might look back in amazement at the days when important security properties were neither free nor guaranteed but expensive and uncertain, underwritten only by the heroic efforts of individual programmers, struggling against impossible odds to achieve inhuman perfection.
Then again, it sure took garbage collection a long time to catch on.
1. How, for example, could you eliminate the possibility of SQL-injection and XSS holes via testing?
I suppose you could do it if you worked at it hard enough. You could augment your string data structures with run-time information about what they represent: this string represents SQL, this string represents plain-old text, and so on. Then you could redefine your string operations and template interpolation systems to assert that their string inputs were compatible. Of course, if these assertions ever failed, they would do so only at run time, when it would be too late to do anything but die rather ungracefully. So you would be forced to augment your code-coverage tools to ensure that every string-handling path was covered during testing. That way you could catch all potential run-time string failures – indicating holes – during testing and eliminate the holes (and the subsequent need to fail at run time) before you deployed your application for real.
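To make the comparison fair, here is roughly what such a run-time scheme might look like in Haskell; the names are hypothetical, and this is a sketch, not a recommendation.

    -- Every string carries a run-time tag saying what it represents.
    data Kind = SqlKind | TextKind
      deriving (Eq, Show)

    data TaggedString = TaggedString { kind :: Kind, contents :: String }

    -- Combining strings asserts that their tags are compatible. When
    -- the assertion fails, it fails at run time, which is why every
    -- string-handling path must be exercised during testing.
    appendTagged :: TaggedString -> TaggedString -> TaggedString
    appendTagged a b
      | kind a == kind b = TaggedString (kind a) (contents a ++ contents b)
      | otherwise        = error ("incompatible string kinds: "
                                  ++ show (kind a) ++ " and " ++ show (kind b))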
So, yes, you could do it. But to do so would require you, in effect, to write a crude, single-purpose type system that checks types at test time. That says something, doesn’t it?