Friends don’t let friends accidentally drop zero-days on the projects they love

(Image credit: Deepadesigns / Shutterstock)

Open source vulnerabilities have been on the rise for years, and that’s a good thing.

While at first this may sound counterintuitive, since it means the red team has more intel for attacking vulnerable components in our applications, we know that the constant discovery and reporting of vulnerabilities play an essential role in making open source components more secure.

This is because it alerts project maintainers to the fact that there is a vulnerability in their code, and when the disclosure process is done responsibly, it gives them the opportunity to create a fix without endangering their users. After all, knowing is half the battle. However, like most other systems meant to keep the lights on, the vulnerability reporting process is not without its challenges, and the consequences can be severe when disclosure is not done responsibly.

To better understand how this process is supposed to work when done right, where it can trip up, and how the community of GitHub users is making strides to improve it, we need to start at the very beginning.

The first step is knowing when you have a problem

There is a long-running debate over whether open source or proprietary code is more secure.

While there are good arguments on both sides, Justin Hutchings, a senior product manager who deals with security at GitHub, makes a strong point when he says that, "Open source code isn't intrinsically more secure, [but] it is more securable," referring to the process of vulnerability disclosures.

Open source libraries are constantly being reviewed by the “thousand eyes” of the community, meaning that unlike proprietary code, which is shielded from public view, security researchers are free to comb through the code in search of vulnerabilities. Modern tooling can even make that review possible in an automated and scalable way.

When a vulnerability is found, usually by a security researcher, they notify the project maintainer and ideally start the process of getting a CVE assigned. Then the clock starts ticking for a fix to be created. In general, the project maintainer is given between 30 and 90 days to create a patch that users of the open source component can apply to secure their software before the researcher goes public with their findings.

The security fix is often released as a specific patch, but sometimes it is instead bundled into an upcoming release that includes other non-security fixes and perhaps even new features. If developers are following best practices by enabling a vulnerability notification service, they should be alerted soon after and can quickly secure their application with the fix, perhaps even via an automated pull request.
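
What does such a notification service actually do? As a minimal sketch, the Python snippet below checks a single pinned dependency against the public OSV.dev advisory database. This is one illustrative approach, not any particular vendor's implementation, and the package name and version are placeholders; real services automate this kind of lookup across an entire dependency tree and open the fix pull requests for you.

```python
# Minimal sketch of a vulnerability notification check: look up one
# pinned dependency in the public OSV.dev advisory database.
# Assumes the third-party 'requests' package is installed; the package
# name and version below are placeholders for illustration.
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return any known advisories for a single package version."""
    resp = requests.post(
        OSV_QUERY_URL,
        json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])

if __name__ == "__main__":
    # Example: an old version of a popular templating library.
    for vuln in known_vulns("jinja2", "2.4.1"):
        print(vuln["id"], "-", vuln.get("summary", "(no summary)"))
```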

Unfortunately, certain challenges can arise when it comes to properly disclosing the vulnerability to the right parties involved, leading to some less than optimal outcomes.

Challenges to responsible disclosure of open source vulnerabilities

Often the first issue a security researcher encounters is simply finding the right person to contact about the vulnerable open source component. While some open source components are maintained by large organisations like the Apache Software Foundation and have easy-to-find contact details, many projects are much less well-documented.

Perhaps the information on how to reach the relevant contact person is in a README file or embedded somewhere else in the repo, but in plenty of other cases the project owner may not have left their details, thus leaving our researcher high and dry.

It is not uncommon for smaller projects, created by a developer years earlier, to have been abandoned, or for their maintainers simply never to have taken the responsible step of making their contact info available. This can cause significant problems later, especially if the orphaned project ends up being used as a dependency of a bigger project down the line.

Maintaining a level of secrecy is crucial for the safe operation of vulnerability disclosure when it comes to open source components. Because open source components are reusable, with popular projects being used by thousands of applications, a single vulnerability can be used to target a wide range of projects if their developers are not given the opportunity to defend themselves.

This means that a responsible security researcher needs to take precautions and avoid shortcuts in finding the project maintainer or risk dumping a free zero-day exploit on the community. Tweets or comments in an issue tracker talking about the existence of the vulnerability, let alone how to carry out the exploit, could lead to a lot of painful attacks in very short order.

Given these constraints and frustrations, there is a growing recognition that there has to be a simpler way to handle the disclosure process. Thankfully, one of the biggest players in the open source space has announced a new program that has the potential to make disclosure a more secure and efficient effort.

Introducing GitHub Security Lab

As one of the key players in the open source world, GitHub launched a new program in 2019 aimed at streamlining the vulnerability disclosure process.

GitHub Security Lab has created a new type of standard community file to make the information on how to report a vulnerability to the project maintainer more straightforward. What they have done is create a new “Security Policy” section in the repo’s Security tab that explains how to report a vulnerability. The good folks at GitHub have even made it possible to post this policy as standard across all of an organisation’s repos, helping to scale the solution with fewer demands on the team.
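
To make this concrete, here is a minimal sketch of what such a policy file might contain. GitHub surfaces a SECURITY.md placed at the root of a repository (or in its .github folder); the contact address, response times, and version table below are placeholders, not GitHub requirements.

```markdown
# Security Policy

## Supported Versions

| Version | Supported |
| ------- | --------- |
| 2.x     | yes       |
| < 2.0   | no        |

## Reporting a Vulnerability

Please do NOT open a public issue for security problems. Instead, email
security@example.org (placeholder address) with a description of the issue
and steps to reproduce. We aim to acknowledge reports within 48 hours and
will coordinate a disclosure timeline with you.
```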

Another interesting step that GitHub has taken here is to actually add a little bit of friction to the process, using the security policy to help prevent a community member from accidentally disclosing the vulnerability in a public issue. This reduces the chances of a zero-day incident that can send everyone scuttling for cover, and instead guides them on how to report it responsibly through a secure channel as set by the maintainer.

Once the vulnerability has been disclosed and validated, GitHub helps the maintainer to securely deal with working on the fix. This is most often done with a “shadow repo” that is hidden from view, giving the maintainer and their trusted contributors the safe space to figure out a solution. The team here has added a helpful feature, allowing the maintainer to invite outside help from folks like the researcher who reported the issue, while only granting minimal access to those outside the immediate circle of the trusted team. When the job is done and it is time to say goodbye, they make it easy to remove collaborators, tightening back to the core team.

Kudos for adhering to the principle of least privilege: granting the smallest amount of access to the smallest group possible.

While the team is working on the security advisory, making patches to fix the vulnerability, it can be plenty difficult to keep track of all the moving parts. To help with this, GitHub offers temporary workspaces that can be created for working on a specific security advisory, acting as a central point for the in-progress changes. When the team is ready to release its patch, everything can be merged in one go, saving time and keeping pull requests manageable.

When everything is set to go, maintainers can move forward with merging their work in the shadow repo into the public one. All of the documentation, packages, and pull requests have been kept organised, so making the switch is much, much simpler. GitHub has even made it possible to request a CVE and publish an advisory directly through the platform.

What are the likely implications of GitHub Security Lab for the open source community?

In software security, as in most aspects of life, simple is good, so we can hopefully expect a lot of positive results to come from GitHub’s initiative.

One outcome we should expect is that the number of reported vulnerabilities will rise further because of this initiative, for several reasons.

  • By making responsible reporting easier, researchers who may previously have decided that jumping through the hoops of reaching a hard-to-contact maintainer was not worth the effort are now more likely to notify maintainers about the vulnerabilities they find.
  • GitHub and their partners have begun offering bug bounties through the GitHub Security Lab. While the reward amounts vary and are not competitive with many proprietary bug bounty programs, the cash incentive is likely to play a part in bringing in more reports.
  • CodeQL is a semantic code analysis engine that GitHub Security Lab has released to help researchers uncover vulnerabilities, hopefully making it easier for more researchers to get involved (see the illustrative query after this list).
  • The project makes it much easier for maintainers to report an issue, saving them the time previously spent navigating the CVE process on their own. In fact, it should increase the number of CVEs, because in the past maintainers with little experience in security reporting may have skipped the process altogether.
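
As a taste of what CodeQL looks like, here is a toy query, written in CodeQL’s QL language, that flags direct calls to JavaScript’s eval, a classic code-injection sink. It is an illustration only; real Security Lab queries model data flow from untrusted sources to dangerous sinks and carry query metadata that this sketch omits.

```ql
// Toy CodeQL query over a JavaScript codebase: flag every direct
// call to eval(), a classic code-injection sink. Illustrative only.
import javascript

from CallExpr call
where call.getCalleeName() = "eval"
select call, "Call to eval() may allow arbitrary code execution."
```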

Members of the open source community generally want to help make the code better for all. The more that influential actors like GitHub lower the obstacles to responsible disclosure and make it easier to disclose the right way, the more vulnerabilities we are likely to see reported.

Rhys Arkins, Director of Product Management, WhiteSource