AI: The need for transparency in the cybersecurity industry

(Image credit: Geralt / Pixabay)

Over the last decade, AI, once a far-fetched theme of old-school science-fiction movies, has quickly grown into one of the most prolific emerging technologies – and, by association, one of the most recognizable buzzwords out there. Virtually every industry today – healthcare, transportation, manufacturing, agriculture, banking, retail, finance – has either implemented or is planning to implement AI in some way. It’s the major technology trend of our era, driving everything from voice-controlled consumer tech to factory robots. And we know this because many of the companies operating in these spaces publicize their use of AI. What looks more cutting-edge than announcing your new AI-powered initiative? It’s as much a marketing gimmick as it is a matter of product and service efficiency.

As in other fields, AI has emerged as an accelerator of cybersecurity innovation over the past five years. However, this progress has been uneven and inefficient. For example, cybersecurity firms still rely predominantly on non-AI systems to detect vulnerabilities in their codebases and sophisticated adversaries on their networks. This lack of advancement stems from a culture of secrecy, combined with a culture of hollow marketing hype, which inhibits the cybersecurity industry’s adoption of AI. The opposite is true in other AI application areas – like computer vision, voice recognition, and natural language understanding – where firms have created an “intellectual commons” in the form of open benchmarks, conferences, and workshops, in which innovations are shared openly with the goal of elevating AI research across those industries.

The effects of this secrecy are significant. On one hand, firms engaged in genuine AI research are motivated to withhold their findings, because they know other firms are unlikely to share theirs. Conversely, because firms are not open about their AI technologies, free riders who ship poorly performing AI systems are never held accountable for them.

An obscure situation

The tech giants of the world, such as Google, Amazon, and Facebook, are increasingly open about their AI research. A good reason for this is the need to recruit top-notch talent, including AI researchers from academia. Attracting those candidates often means allowing them to continue publishing their work, which naturally leads to more openness about their AI R&D.

Comparatively, there seems to be a lot more uncertainty and embellishment in the security space. There’s a group of cybersecurity companies that claim to be working in AI but are really practising something closer to Stats 101. These companies try to act as if they’re building some algorithmic secret sauce behind closed doors, when in reality, they’re just trying to get people to pay no attention to the wizard behind the curtain.

Coupled with that is a certain level of mystification and even pomposity. Many security firms claim to be deploying AI and machine learning, but when asked to explain what’s actually going on, you’ll get replies along the lines of: “You wouldn’t understand it, just trust us. It’s too sophisticated to explain. We can’t reveal it because our competitors would just copy us.” This defensive posture doesn’t just erode the credibility of the security industry as a whole; it also means legitimate AI innovators have to carry the weight of these bad actors.

When cybersecurity firms become reluctant to publish details of their AI technology, it becomes harder to differentiate true innovation from marketing hype. To truly pave the way in using AI to solve the hardest problems in cybersecurity, like detecting sophisticated crooks on our networks or tainted code in software supply chains, cybersecurity AI innovators will have to learn to emulate the openness and scientific culture of the broader AI community.

AI silos jeopardize cybersecurity innovation. Intelligence sharing fuels it.

It is clear that the security industry’s tendency to build silos stifles innovation. When organizations publicize and share information, there are major social benefits. Language modeling and machine translation are two good examples of applications that have brimmed with innovation over the last five years, because the companies that have invested hundreds of millions into them have become more open to sharing their findings. They may not reveal everything about their projects, like source code and data sets, but they share enough information to help the wider industry grow – something the cybersecurity industry has overlooked.

This reluctance to share innovation doesn’t just harm the vendors; it harms the buyers, too. When buyers don’t have many reference points in the market to compare vendors and solutions, they lack the information they need to make informed purchasing decisions.

By contrast, when security vendors are more open about their solutions, methodologies, and especially their AI applications, it opens two sets of opportunities for the industry. First, it gives buyers more information to make better decisions, not just about the security solutions they purchase but also about their overall security stance and where it may need some improvements (e.g., “I’ve bought Vendor X, but now I also know I need to complement it with Vendor Y.”).

And second, the benefits accrue to the vendors as well. The more vendors publicize their AI research and applications, the more they learn from each other’s mistakes, complement each other’s strengths, create new test environments, establish themselves as credible, and standardize measurements for how these solutions perform in production. More openness and transparency around AI in security will in turn make everyone’s case for AI in security that much stronger – and, as a result, more attractive to buyers.

Cybersecurity can’t be a self-centered industry

This closed-minded approach has to end. Otherwise, our industry will become a market of lemons, where the lowest-quality products succeed because they’re cheaper to make and information asymmetry keeps buyers from telling the difference – damaging the reputations of both cybersecurity as a whole and AI in cybersecurity specifically.

Smart regulation could be one way to provide direction on how vendors talk about their solutions and their use of AI. But another is for companies to take the proactive step of simply opening up. The more open vendors are, the more likely they are to inspire others to join them in creating more specific and meaningful conversations with buyers – and, at the same time, to encourage vendors to pursue new methods of innovating AI applications in cybersecurity.

That’s what has driven Sophos and our team of SophosAI data scientists. We’re working to lead the charge in the cybersecurity AI space through a series of initiatives aimed at galvanizing the community towards more openness and less mindless hype. That includes commitments to publish academic papers on the AI systems in our products, to make large, high-quality benchmarks available to other private-sector and academic research groups, and to open-source our technologies for the wider research community and the world.

Joshua Saxe, Chief Scientist, Sophos