Microsoft, the non-profit MITRE Corporation, IBM, Nvidia, and a handful of other organizations have collaborated on a new framework called the Adversarial ML Threat Matrix.
In a blog post, Microsoft described the tool as an industry-focused open framework built to help security analysts detect, respond to, and remediate threats against machine learning (ML) systems.
Citing analyst firm Gartner, as well as its own research, Microsoft claims that the vast majority of companies lack the tools needed to secure their machine learning models.
The matrix curates a set of vulnerabilities and adversary behaviors that Microsoft and MITRE have vetted as effective against production systems. It also draws on input from researchers at several universities, including the University of Toronto, Cardiff University, and the Software Engineering Institute at Carnegie Mellon University.
The tool also includes a list of tactics commonly used by attackers, along with case studies and illustrations covering well-known attacks.
According to a VentureBeat report, the kit is now available on GitHub, where Microsoft and MITRE will also encourage contributions from the open source community. Researchers can use the platform to submit studies on attacks against ML systems running on AWS, Azure, Google Cloud, or IBM Watson.