Friday, September 09, 2011

Short note on static analysis and lobbying

After getting annoyed with Twitter's 140-character limit while trying to make a sane point, I am switching back to an old-school medium that allows actual arguments ;)

I was commenting on the dishonesty of lobbying for legally mandated static analysis with "cyber security" as the underlying argument.

I think this is a total bullshit argument.

Static analysis is good at removing a large amount of low-hanging fruit, both reliability bugs and security bugs. But static analysis alone fails to significantly increase the resilience of real-world systems against determined attacks (although it may increase their resilience against really casual attacks).
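
To make "low-hanging fruit" concrete, here is a toy example of mine (not taken from any particular tool's test suite): the kind of bug that essentially every off-the-shelf analyzer flags.

    #include <string.h>

    /* Toy example for illustration: a classic unbounded copy into a
     * fixed-size stack buffer. Any reasonable analyzer reports this. */
    void greet(const char *name) {
        char buf[16];
        strcpy(buf, name); /* no bounds check: writes past buf if name is >= 16 bytes */
    }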

So I think it is good & sane to mandate static analysis for some pieces of code for reliability reasons. I'd rather fly in an airplane with formally verified avionics code.

The reality, though, is that even the most sophisticated static analysis systems available today are not terribly good at dealing with dynamically allocated memory, suffer from grotesque overapproximation when they summarize dynamically allocated memory, and in general do not handle many common programming patterns well.
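
A toy illustration (my own example, not any specific tool's behavior) of why summarization hurts: once an analyzer merges all heap objects allocated at one site into a single abstract cell, it loses the relation between the fields of any individual object.

    #include <stdlib.h>

    struct node { size_t len; char *data; };

    /* Allocate a node whose intended invariant is
     * "data points to a buffer of len bytes". */
    struct node *node_make(size_t len) {
        struct node *n = malloc(sizeof *n);
        if (!n) return NULL;
        n->data = malloc(len);
        if (!n->data) { free(n); return NULL; }
        n->len = len;
        return n;
    }

    /* Safe by the invariant above. But once the analyzer has merged
     * all nodes allocated in a loop into one abstract heap cell, it
     * can no longer relate this node's len to the size of this
     * node's data, and reports a potential out-of-bounds write
     * that cannot actually occur. */
    void node_clear(struct node *n) {
        for (size_t i = 0; i < n->len; i++)
            n->data[i] = 0;
    }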

Avionics code that avoids dynamic memory allocation, backed by a team of people dedicated to getting a static analyzer going, profits a lot from static analysis. The browser I am typing this in profits much less: it uses a gazillion programming idioms that are notoriously hard to analyze. No existing, generally available static analysis will significantly increase the difficulty of finding a remotely exploitable bug in this browser.
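
One such idiom, sketched by me rather than lifted from any actual browser: a custom arena allocator, which takes memory management out of the malloc/free vocabulary that analyzers model.

    #include <stddef.h>

    /* Toy sketch of a bump allocator carving objects out of one big
     * buffer. The analyzer's built-in model of malloc/free does not
     * apply here: every pointer handed out is just an offset into one
     * opaque region, so lifetime bugs inside the arena (use-after-
     * "free", type confusion between recycled objects) are largely
     * invisible to the tool. */
    struct arena { char *base; size_t used, size; };

    void *arena_alloc(struct arena *a, size_t n) {
        if (a->size - a->used < n) /* out of space */
            return NULL;
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }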

So, in short: using SA can have great benefits in particular scenarios. Verifying an avionics system is a great example. Verifying a microkernel that lets me safely sandbox my terribly buggy browser is another. Mandating SA for general software development is insanity, though: the current state of research (let's not even speak of available products) is not capable of significantly improving the resilience of a browser or document viewer against determined attack. Let's not kid ourselves: the technology just isn't there, and won't be for another few years.

3 comments:

Unknown said...

Summary/tangent: any reasonably complex program (say 10 KLOC, including base libraries) can't be verified unless it was written with that in mind from the get-go, and even then only with extreme effort. With anything longer, SA will, after a point, just produce noise.

pascal_cuoq said...

We use the word "safety" to describe a plane that does not endanger its passengers. It is often, but not always, the same as "reliability": a plane that is always grounded by technical failures is unreliable, but it is safe.

Actually, formal methods are currently used very little for verifying the safety of life-critical software. Part of the reason is that verification is done according to principles laid out in standards (DO-178B in the case of avionics), which list as acceptable means the techniques that were the best available at the time the standard was drafted (tests; lots of tests; and more testing). The next iteration, DO-178C, will allow (not mandate) static analysis techniques where they are appropriate.

To avoid entrenching bad habits and enormously delaying the adoption of useful new tools, standards should be flexible and give implementors some leeway in choosing their tools. Dynamic allocation is an irrelevant technical detail: there already exist techniques that cope well with verifying programs that contain dynamic allocation. They have other limitations, of course. But the reason to mandate or not mandate a specific technique shouldn't be whether it works right now, or whether it is the best right now. Standards should simply allow implementors to choose their tools and give them a chance to justify their choices.

Thierry Zoller said...

+1