Ever heard of full disclosure?

Wikipedia defines it as:

Full disclosure requires that full details of a security vulnerability are disclosed to the public, including details of the vulnerability and how to detect and exploit it. The theory behind full disclosure is that releasing vulnerability information immediately results in quicker fixes and better security. Fixes are produced faster because vendors and authors are forced to respond in order to save face. Security is improved because the window of exposure, the amount of time the vulnerability is open to attack, is reduced.

Of course, full disclosure is not without its own set of problems. If, on the one hand, it puts pressure on the vendor to fix the vulnerable system ASAP, on the other hand, if enough information is provided, it makes it easier for exploits to be developed and used (sometimes exploits are even provided when the bug is reported…). And this in turn causes yet another problem: even if the vendor patches the flaw in a timely fashion, there’s no guarantee that users will do the same. Thus, if exploits are available (or can easily be devised): script kiddies galore!

But that’s exactly why we need full disclosure. It puts pressure not only on the vendor to patch the affected system, but also on the users to update their systems:

Getting rid of full disclosure would only make these problems worse. Sure, as Ranum argues, there would be less script kiddies spewing Web graffiti and shutting down sites with denial of service. But that would be replaced with something far worse: attackers who can uncover their own vulnerabilities, or have the connections to pay for them. With an environment of silence these attackers could cruise through networks with impunity knowing that their vulnerability knowledge will be useful for many months.

I can almost hear a counter-argument, using the car analogy: back in the day, when cars were a novelty, you (almost) had to be a mechanic to drive one. Nowadays, cars have become more reliable, and you can safely drive one with little to no knowledge of its internals. This is (one of) the goals software development should strive for. Moreover, users should not have to know (or care) about updating their software. Wrong! If you can drive a car without being a mechanic, you do have to know (or learn) how to drive! Similarly, if you want to use the internet, you should know (or learn) how to keep, at the very least, the software on your machine clean.

Counter-argument number 2: «responsible disclosure». Here the idea is to announce to the users at large that the bug exists, but to disclose the details only to the vendors/developers. This approach has two problems: first, the vendor might downplay the bug, which is a bad thing by itself, but may have an even worse effect: that of making users swallow that downplay, and not pay enough attention to updates. The other problem is that if the details of the security bug are never disclosed, one can never be sure the problem is really fixed (wasn’t it Eric Raymond who said something along the lines of: “never trust closed source”?).

Finally, I quote the last paragraph from the above link:

Common sense has a tendency to trump all rules and regulations. While the decision whether or not to disclose or report a vulnerability is a difficult one, common sense should prevail. All the law requires is that we act reasonably. If only we could agree on what that was.

One response to “Ever heard of full disclosure?”

  1. This essay by Bruce Schneier is recommended reading. It goes into the subject more deeply than the survey I do here, reaching the same conclusion (if with some differences in detail): full disclosure’s benefits far outweigh its risks.