
Communication, communication, communication: the route to actionable cyber threat intelligence


Knowledge is key to success in any type of combat. Intelligence on an adversary can mean the difference between victory and defeat. However, poor communication often leads to misunderstanding or a garbled message, which means that advantage can easily be lost. Cyber Threat Intelligence (CTI) is one area where this is particularly pertinent and can result in security teams struggling to reap the full benefits of intelligence. 

Achieving actionable CTI is a highly worthwhile exercise – but the path to success doesn’t tend to be quick or easy. The first key milestone for security teams to enhance the CTI sharing process begins with understanding the problem. 

Identifying the issues

Fundamentally, the problem many organisations face when sharing insights comes down to a breakdown in communication. Insights get lost in translation between security professionals and intelligence professionals. The latter need actionable insight to help with defence (venturing more into strategic and abstract concepts), whereas security pros need to know how to protect their organisation (often a far more tangible and operational endeavour).

As a result, both sets of practitioners tend to approach the issue from different angles. Security professionals are focused on what is attacking their network and what they can do to defend against it. Intelligence teams, on the other hand, care about why the network is being targeted and why they may or may not need to do something about the attack. 

What’s more, the problem is exacerbated when security and intelligence professionals communicate through a threat intelligence sharing function, which adds another layer where meaning can be distorted. Sharing ideas is at the core of CTI, but how those ideas are presented and subsequently interpreted is open to bias and misinterpretation. And while some professionals understand that communication is subjective, ensuring their ideas are understood takes time and effort.

Not only that, while there are close links between security and intelligence, the aims of the two functions can seem to be in direct opposition. For example, the security team would deem it a success if a cyber attacker was foiled in their attempt to get into an organisation. However, it is an intelligence ‘loss’ as it means that nothing has been learnt about the capabilities or motivations of the attacker. 

In short, both security and intelligence teams are exchanging insights ineffectively, and it’s clear that a protocol should be in place to ensure a common framework for idea sharing. STIX and other structured languages are a good first step in a framework like this. But a common language does not immediately mean a common understanding, so these languages make up just a small piece of the puzzle. 
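To make the idea of a structured language concrete, here is a minimal sketch of what a STIX 2.1 Indicator object looks like, built with only the Python standard library. The domain name and all field values are illustrative placeholders, not real threat data:

```python
import json
import uuid
from datetime import datetime, timezone

# Timestamp in the STIX 2.1 format (UTC, millisecond precision).
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

# A minimal STIX 2.1 Indicator SDO: type, spec_version, id, created,
# modified, pattern, pattern_type and valid_from are required properties.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspicious C2 domain",  # illustrative placeholder
    "pattern": "[domain-name:value = 'bad.example.com']",
    "pattern_type": "stix",
    "valid_from": now,
}

print(json.dumps(indicator, indent=2))
```

Because every producer and consumer agrees on these field names and the pattern grammar, the object can be parsed by machines on receipt – which is exactly the "common language" step, even before a common understanding is reached.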

Finding a solution

The reason behind a protocol is straightforward. Put simply, it’s a means of agreeing that some forms of data or information need to be communicated quickly and objectively. Over time, this protocol can be refined to ensure the objective and useful data is extracted, with this ultimately becoming the only method of communication. 

There are several tangible benefits to this approach. Using a pre-defined ontology (such as STIX) means analysts do not need to reverse-engineer the author’s language, saving time and resources. Because reverse-engineering a report’s language leaves it open to interpretation, a protocol can significantly reduce ambiguity. Manual re-keying of intelligence also presents a number of opportunities for human error – whether through copy-and-paste mistakes, typos or other slips. A protocol can reduce these, ensuring messaging is consistent and reliable.

Defining the protocol

From here, there are three steps needed to define the protocol: indicator watchlists, TTP clustering and structured intelligence.

Indicator watchlists form the first stage of this process and are a big step towards defining the protocol. In practice, this means including a list of indicators the author believes to be particularly important for the audience to receive. Security professionals are required to work at a higher tempo than intelligence teams, so they often need easy-to-understand indicators that can be deployed to their security controls quickly.
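A watchlist can be as simple as a set of flagged indicators that observed activity is checked against. The sketch below assumes a plain in-memory set; the indicator values are documentation-range placeholders, not real IOCs:

```python
# Hypothetical indicator watchlist: the report author flags the indicators
# they consider most urgent so security teams can deploy them quickly.
watchlist = {
    "203.0.113.7",                        # C2 IP (documentation range)
    "bad.example.com",                    # phishing domain (placeholder)
    "44d88612fea8a8f36de82e1278abb02f",   # sample file hash (placeholder)
}

def match_observed(observed):
    """Return the observed values that appear on the watchlist, sorted."""
    return sorted(set(observed) & watchlist)

# Usage: check a batch of observed values against the watchlist.
hits = match_observed(["10.0.0.5", "bad.example.com", "203.0.113.7"])
print(hits)
```

In production this matching would sit inside a SIEM or TIP rather than a script, but the principle is the same: the author pre-selects what matters, and the security team can act on it without re-reading the full report.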

Stage two is TTP clustering. This is a big step towards automating the derivation of context behind activity, and is something the industry has recently been rallying behind as a means of categorising CTI.

When detail is extracted from written reports and included in the protocol, the volume of data tied up in the analyst’s ideas is reduced, meaning that detail is made available for more automated activities. This presents the ability to understand context without needing to read all the supporting information. And whether you’re using a universally agreed library like MITRE ATT&CK or a protocol agreed in-house, such as a malware naming convention, it represents a huge step in the maturity of CTI.
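As a rough illustration of TTP clustering, the sketch below groups report sightings by MITRE ATT&CK technique ID. The technique IDs are real ATT&CK identifiers, but the sightings themselves are invented examples:

```python
from collections import defaultdict

# Hypothetical report sightings, each tagged with a MITRE ATT&CK technique:
# T1059.001 = PowerShell, T1071.001 = Web Protocols, T1105 = Ingress Tool Transfer.
sightings = [
    {"ioc": "powershell -enc <payload>", "technique": "T1059.001"},
    {"ioc": "bad.example.com",           "technique": "T1071.001"},
    {"ioc": "certutil -urlcache <url>",  "technique": "T1105"},
    {"ioc": "pwsh -File stage2.ps1",     "technique": "T1059.001"},
]

def cluster_by_technique(items):
    """Group sightings under their ATT&CK technique ID."""
    clusters = defaultdict(list)
    for item in items:
        clusters[item["technique"]].append(item["ioc"])
    return dict(clusters)

clusters = cluster_by_technique(sightings)
print(clusters)
```

Once sightings are clustered this way, context such as "this actor repeatedly abuses PowerShell" falls out of the data itself, without anyone re-reading the prose of every report.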

Finally, while both security and intelligence teams tend to accept a shared understanding of how data is categorised, a problem arises when trying to express meaning in more than two dimensions. Data within cybersecurity gets complex quite quickly, so this is a particular problem for the industry. Two-dimensional categorisation (for example, indicators classified into TTPs) works to a point – but it assumes that activity in the wild follows the same neat hierarchy of data classification.

To counter this, we need to look at structured intelligence as stage three of the protocol. Recording data in a structured way reflects the multi-dimensional complexity of real-world cybersecurity threats. It also means the facts of the intelligence can be spelled out in a way that can be queried for the specific data required.
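The point about querying can be sketched with a few structured records, each carrying several dimensions (actor, technique, sector) rather than a flat indicator-to-TTP hierarchy. The actor names, sectors and indicator values here are all hypothetical:

```python
# Hypothetical structured intelligence records: each fact carries several
# dimensions instead of sitting in a single two-level hierarchy.
records = [
    {"indicator": "bad.example.com",  "actor": "GroupA",
     "technique": "T1071.001", "sector": "finance"},
    {"indicator": "203.0.113.7",      "actor": "GroupA",
     "technique": "T1105",     "sector": "energy"},
    {"indicator": "evil.example.net", "actor": "GroupB",
     "technique": "T1071.001", "sector": "finance"},
]

def query(records, **criteria):
    """Return records matching every supplied dimension."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# Usage: which web-protocol activity targets the finance sector?
finance_web = query(records, technique="T1071.001", sector="finance")
print(finance_web)
```

A real platform would back this with a graph or document store, but the idea is the same: because each fact is recorded with all its dimensions, any combination of them can be asked for directly.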

As the industry moves through these various stages of the protocol, we are beginning to see intelligence analysts’ knowledge evolve from something that exists in their heads to a shared corporate understanding. As analyst knowledge becomes less bespoke, it can be shared with the wider community in a common format and contribute to a common effort. As a result, we are seeing teams respond to incidents much more quickly and use their resources more effectively.

Chris O’Brien is Director, Intelligence Collaboration, EclecticIQ
