When Chris Roberts took the controversial action of accessing sensitive flight systems aboard a live United Airlines flight in May 2015, he claimed that it was after years of inaction from the airline.
Roberts reported that he had been in contact with them as early as 2010, with little movement on their end. While the risk of his actions was much too high to justify, it is easy to imagine the frustration a dedicated security researcher would feel after years of being rebuffed. Indeed, I cannot imagine a more important area of vulnerability research than airlines, which carry thousands of passengers each day.
An uncomfortable grey area for vendors
This case is a prime example of the uncomfortable grey area that often arises with responsible vulnerability disclosure. Supporters of responsible disclosure suggest time frames ranging from weeks to months before a public disclosure in response to vendor inaction. Vendor inaction is most definitely an ethical pressure and central to this ongoing discussion: it is one of the reasons the concept of responsible disclosure exists. A security researcher has a moral responsibility to inform the public of a vulnerability if the vendor ignores a disclosure, whether through incompetence or sheer will. In theory, this makes logical and moral sense. In practice, these guidelines still have rough edges even after two decades of disclosing bugs.
In practice, public disclosures resulting from vendor inaction get results. Again, United Airlines provides us with some great examples here. Not long after the Chris Roberts story broke, the airline launched a bug bounty program. While the program covered only the mobile app and public website, the timing of its launch was telling.
Not long after the program was launched, the airline awarded one million airline miles to a researcher who discovered a serious vulnerability that might have resulted in remote code execution. Strangely, after that, the airline’s bug bounty program seems to have gone off the rails. A researcher detailed an interaction with United this past November, when he discovered an information disclosure vulnerability that might have allowed leakage of sensitive information for any rewards member. It went unpatched by the airline for six months, until the threat of public disclosure seemed to push it to the top of the list. The bug was then quickly addressed.
Does a vendor need to define its own boundaries?
The aforementioned rough edges of responsible disclosure continue to show themselves. The December incident between Facebook and SynAck researcher Wesley Wineberg serves as a great example. There were good intentions on both sides. Facebook is highly respected in the security community and has an established bug bounty program. Wineberg is a respected security researcher with some well-known vulnerability discoveries under his belt. The two clashed when Facebook felt Wineberg overstepped with some of the vulnerability research he was doing against the Instagram service, while Wineberg felt that Facebook had not adequately defined the boundaries of its program. The security community was largely split on the case.
These interactions show that there is still a large trust chasm between vendors and security researchers. To move forward in this discussion, our time will be best spent trying to build that trust relationship. One model that I think has great promise is that of the bug bounty broker. The broker sits in between the vendor and the security researcher, facilitating the inception and management of the bug bounty program. Vendors can engage with the broker if they have a product that they would like analysed from a security point of view. The broker often helps the vendor develop a bounty program that best fits its needs, offering guidance around disclosure guidelines and program scope. Security researchers can engage with the broker to offer their services and be rewarded for their work. The broker acts as a trusted third party and mediator.
The bug bounty broker
Bug bounty brokers have a role that isn’t often heralded: encouraging ethical behaviour in the vulnerability disclosure process. They are in an ideal position to drive the responsible disclosure discussion forward. Keeping in mind that ethical behaviour should be adhered to by both sides, the vendor and the security researcher, I took a cursory look at two leaders in this field, HackerOne and BugCrowd. HackerOne clearly addresses the response teams of vendors in their vulnerability disclosure guidelines. BugCrowd only addresses security researchers in its code of conduct.
Vendors can and do act unethically, even if that act is simple inaction. The responsible disclosure process is best served through ongoing collaboration between the vendor and researcher. Anything we can do to facilitate that collaboration, to build that bridge of trust, strengthens our industry. Step one is finding ways to encourage both sides to engage with each other.
Stephen Cox, Chief Security Architect, SecureAuth
Image Credit: frank_peters/Shutterstock