Ethics and software bugs

4/28/2015 09:45:00 AM
Google has a new team--Project Zero--that hunts for security vulnerabilities in software, not just Google's own but software from any company, and publishes the details of each vulnerability, which spell out everything hackers would need to exploit it. They don't publish right away--everyone agrees that would be unethical. Rather, they contact the software maker privately first and give them 90 days to come up with a fix. My question is, does that make it ethical?

One could quibble with the arbitrary 90-day limit. Several software makers, most prominently Microsoft, have been caught right at the edge: a fix prepared within the 90 days but not pushed to users in time to beat the deadline. The Microsoft case forced Google to revise its policy so that companies can get a short extension (a 14-day grace period) if a fix is almost ready, but in other situations 90 days might simply not be long enough to devise a fix at all.
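
For the curious, the deadline arithmetic is simple enough to sketch in a few lines of Python. This is just my reading of the public policy--toy code and made-up dates, not anything from Google:

    from datetime import date, timedelta

    WINDOW = timedelta(days=90)   # the standard disclosure deadline
    GRACE = timedelta(days=14)    # the grace period added after the Microsoft episode

    def disclosure_date(reported, fix_scheduled=None):
        """When the bug details would go public, under my reading of the policy."""
        deadline = reported + WINDOW
        # If a fix is scheduled shortly after the deadline, the deadline slides.
        if fix_scheduled is not None and deadline < fix_scheduled <= deadline + GRACE:
            return fix_scheduled
        return deadline

    # A bug reported on New Year's Day, with a patch due two days past the deadline:
    print(disclosure_date(date(2015, 1, 1), fix_scheduled=date(2015, 4, 3)))  # 2015-04-03

Run it and the deadline slides just far enough to cover the scheduled patch; report the fix a day later and the details go public on day 90, patch or no patch.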

But even ignoring those issues, I'm still not convinced that publishing the details is the right approach. It's easy to blame software makers for not taking care of a security risk, but they aren't the only party that needs to act. The fact is that a pretty large share of software users regularly delay or even refuse security updates. Some simply aren't able to update right away--a spotty internet connection or limited battery life, for example, or even just a busy in-real-life schedule that can't be bothered with computer stuff. Others actively distrust updates--it's often hard to tell the difference between a critical security update and spam trying to install crapware. And a third group--possibly a majority of computer users--is simply apathetic and doesn't understand the security risks.

Thus, most software makers seem able to satisfy the demands of Google's doomsday device, but we really don't know what share of software users actually install the fix before the details go public. I'd bet the numbers are not good.

I guess the moral of all this is that you should make sure you always have all your updates for all your software and operating systems on all your devices. Because Google is telling everyone how to hack them.
Charles Guo 4/28/2015 12:13:00 PM
Isn't it more complex than this? Suppose someone finds a critical security flaw in, say, Chrome--software for which there are many competing products. Security researchers ultimately have a responsibility to the public to try to keep people safe, and I think the assumption in the community is that black-hats (private and government alike) are pretty good at finding these bugs. So you have to weigh giving the software vendor sufficient time to patch the bug before everybody and their mother figures out how to exploit it against the possibility (probability) that black-hats are already exploiting it in the wild, and against the many people who would probably switch to safer software if they were aware of the bug.

A second (and IMO weaker) argument is that the arbitrary but standardized time limit incentivizes software vendors to react promptly to security flaw disclosures. It's an argument similar to that made for the existence of criminal punishment.
Matthew Martin 4/28/2015 01:00:00 PM
I see what you're saying about the need to inform customers about their risks. The ethical concerns cut both ways. I suspect the answer varies on a case-by-case basis, though. In cases where users are responsive to the advice and can reasonably limit their risks, the bug should be reported. But caution is needed where users either could not limit the risk quickly enough (for example, a security flaw on the server side, beyond their control) or are simply unlikely to heed the advice.