
Real security depends on real knowledge - how to protect all your IT assets

(Image credit: Deepadesigns / Shutterstock)

Confucius once said, “Real knowledge is to know the extent of one’s own ignorance.” Modern IT teams have to track far more devices, servers, cloud services and web applications than ever before. Without accurate and timely data on those IT assets, security teams can be ignorant of their own ignorance.

Why is the IT asset inventory task so hard today?

To understand why IT asset inventory is still an issue today, we have to look back at how IT security teams have dealt with wave upon wave of new devices appearing in new places. There is huge variance in how these assets are described - from multiple names for the same company or software product, through to multiple tools or products that meet the same business need. Consider the sheer number of collaboration tools Microsoft has shipped over the years, from Skype and Lync through to Skype for Business and now Microsoft Teams, and you can see the potential issue around software variance.

Where traditional IT assets existed only on corporate networks or in company data centres, today more and more systems run in public cloud services or third-party data centres. We have shared responsibility for these assets rather than complete control. And we have IT systems that may exist for days or even just hours at a time, rather than the long-term investments of the past.

Alongside this, we have traditionally run IT security teams on fairly lean principles. Typically, security departments would acquire best-of-breed solutions to protect their environments against different risks. These new tools would extend security for the enterprise, but each product would also become an island of technology responsible for its own operations. Every new solution meant another management panel to watch and another product to administer.

This approach worked when everything was centralised. It quickly became overwhelming, however, as attackers discovered new ways to penetrate network defences and ever more solutions were required to keep up. While each solution might do its own job well, it became increasingly difficult to build the global visibility needed to respond effectively to cyberattacks.

In today’s environment, the perimeter between internal IT and external services has been blurred, and in many places erased. Achieving accurate insight into everything is more difficult, while that fabled ‘single source of truth’ around IT is more important than ever.

To deal with these issues, it is essential to put a central service in place that brings all the relevant telemetry data together. The collected data can then be normalised, correlated and enriched to build a truly accurate picture of what assets exist. Rather than deploying, maintaining and operating a plethora of best-of-breed solutions individually, those solutions can work as parts of a cohesive whole. This matters all the more when teams struggle with a shortage of talent to fill vacant roles, and when offering satisfying career progression is essential to retaining staff over time.
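As a rough illustration of that normalisation step, the short Python sketch below maps known name variants onto a single canonical product name, so records from different inventory sources count as the same asset. The alias table and sample records are invented for the example rather than taken from any particular tool.

```python
# Minimal sketch: normalise product-name variants to one canonical name.
# The alias table and sample inventory are illustrative, not vendor data.

CANONICAL_NAMES = {
    "lync": "Skype for Business",            # Lync was rebranded in 2015
    "skype for business": "Skype for Business",
    "ms teams": "Microsoft Teams",
    "microsoft teams": "Microsoft Teams",
}

def normalise(product: str) -> str:
    """Map a raw product string to its canonical name, if one is known."""
    key = product.strip().lower()
    return CANONICAL_NAMES.get(key, product.strip())

raw_inventory = ["Lync", "Skype For Business", "MS Teams", "microsoft teams"]
for item in raw_inventory:
    print(f"{item!r} -> {normalise(item)!r}")
```

Trivial as it looks, this kind of canonical mapping is what lets four differently labelled records be counted as two products rather than four.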

Orchestrating your approach around data

The traditional approach to enterprise security was based on many different products, each with its own management panel and limited integration. To put a unified approach in place, these systems have to be integrated so that their data can be correlated and enriched. This can be achieved by using product APIs to share data effectively - for example, providing information on available patches alongside security vulnerability or threat intelligence data, and linking all of this to the IT asset records held in a configuration management database (CMDB).
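To make that correlation concrete, here is a hedged Python sketch that joins a vulnerability feed to CMDB records on a shared hostname and attaches available-patch details. All field names and sample data are hypothetical; in a real integration, each list would be fetched from the relevant product’s API rather than hard-coded.

```python
# Hedged sketch: correlate vulnerability findings with CMDB asset records.
# Field names and sample data are hypothetical; in practice each list
# would be pulled from a product API rather than hard-coded here.

cmdb_assets = [
    {"hostname": "web-01", "owner": "ecommerce", "environment": "production"},
    {"hostname": "build-07", "owner": "platform", "environment": "staging"},
]

vulnerability_feed = [
    {"hostname": "web-01", "cve": "CVE-2021-44228", "severity": "critical"},
    {"hostname": "web-01", "cve": "CVE-2019-0708", "severity": "high"},
]

available_patches = {
    "CVE-2021-44228": "log4j 2.17.1",
}

# Index the CMDB by hostname so each finding can be enriched in O(1).
assets_by_host = {a["hostname"]: a for a in cmdb_assets}

for finding in vulnerability_feed:
    asset = assets_by_host.get(finding["hostname"], {})
    enriched = {
        **finding,
        "owner": asset.get("owner", "unknown"),
        "environment": asset.get("environment", "unknown"),
        "patch": available_patches.get(finding["cve"], "none published"),
    }
    print(enriched)
```

The value is in the joined record: a finding that arrives with an owner, an environment and a patch attached is one a team can act on immediately.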

Alongside the technical element of managing APIs, it’s important to consider how this security orchestration can function over time. This involves looking at security as procedures that sit alongside other business and IT processes, and then making those workflows as intuitive and automated as possible for users.

This approach is based on the lessons learnt from DevOps, where developers and operations teams collaborate to get new code releases into production faster. By meshing and automating security into the DevOps pipelines that power digital transformation projects, security becomes more efficient because it takes place alongside the development and delivery of code. IT security teams can then retire the “security as gatekeeper” model, in which they only get involved just before large-scale production deployments or after services have already gone live.

This approach can actually help businesses save money and time as well as improving security. Rather than trying to fix software in production, when the cost of downtime or of making changes is far higher, the changes can take place earlier in the development process. Treating security as a built-in process from the start does, however, require some changes in approach.

For example, it means looking for vulnerabilities and testing security earlier, at the point where the application is being developed. Rather than taking an adversarial stance towards developers, security can concentrate on making it easy to find flaws and fix them before they leave the development phase. This approach to collaborating on and orchestrating security relies on good quality, accurate data.
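One simple way to put this into practice is a gate in the build pipeline that reads a scanner’s report and fails the job when serious findings appear. The Python sketch below assumes a generic JSON report with a “findings” list and a “severity” field - an assumption for illustration; any scanner producing similar output could feed a check like this.

```python
# Hedged sketch of a CI security gate: fail the build if a scan report
# contains findings at or above a severity threshold. The JSON layout
# ("findings" entries with a "severity" field) is an assumed example.

import json
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
THRESHOLD = "high"  # block the pipeline at this level or above

def gate(report_path: str) -> int:
    with open(report_path) as fh:
        report = json.load(fh)
    blocking = [
        f for f in report.get("findings", [])
        if SEVERITY_RANK.get(f.get("severity", "low"), 0)
           >= SEVERITY_RANK[THRESHOLD]
    ]
    for finding in blocking:
        print(f"BLOCKING: {finding.get('id', '?')} ({finding['severity']})")
    return 1 if blocking else 0  # a non-zero exit fails the CI job

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```

Because most CI systems stop a pipeline on any non-zero exit code, a script this small is enough to move the security check from “gatekeeper at the end” to “step in every build”.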

Where is security headed?

At the most basic level, IT security starts with knowing, at all times, what is connected to your network and what holds your data. Reaching this level of knowledge involves automatically categorising devices, operating systems, databases, applications and hardware across on-premises, endpoint, cloud and mobile environments, and increasingly across the Internet of Things and operational technology too.
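A first step towards that categorisation can be as modest as tagging each discovered record with its environment before it enters the inventory, as in the illustrative Python sketch below. The field names and classification rules here are assumptions made for the example, not any product’s schema.

```python
# Minimal sketch: tag discovered assets with an environment category so one
# inventory can span on-premises, cloud, mobile and OT sources.
# Field names and classification rules are illustrative assumptions.

def categorise(record: dict) -> str:
    if record.get("cloud_provider"):
        return "cloud"
    if record.get("agent") == "mobile":
        return "mobile endpoint"
    if record.get("network_zone") == "ot":
        return "operational technology"
    return "on-premises"

discovered = [
    {"hostname": "vm-eu-1", "cloud_provider": "aws"},
    {"hostname": "laptop-42", "agent": "mobile"},
    {"hostname": "plc-3", "network_zone": "ot"},
    {"hostname": "db-01"},
]

inventory = [{**r, "category": categorise(r)} for r in discovered]
for asset in inventory:
    print(asset["hostname"], "->", asset["category"])
```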

While the complexity of IT has gone up, it is still possible to get instant visibility across all IT assets, from on-premises devices through to assets held in clouds and remote endpoints. By continuously monitoring all these assets and orchestrating the right responses, we can all improve how well we respond to potential threats over time.

We have to address the major pain point of not truly knowing what is connected to our networks. Knowing the limits of one’s own knowledge is an age-old problem for us as individuals, so it should be no surprise that it persists in modern IT systems. The truth remains: you can’t secure what you can’t see and don’t know about. As the popular proverb goes, “knowledge is power” - but knowledge is only powerful when it leads to actionable insight.

Marco Rottigni, Chief Technical Security Officer EMEA, Qualys