
Security and asset management: how ITAM and security teams should travel together


For security teams, getting in-depth insight into the assets that a company has is an essential part of their work. For IT asset management (ITAM) teams, maintaining this list is just as essential to their work on managing licences, deployments and asset life-cycles. However, many teams don’t collaborate on this work - instead, each keeps its own asset list up to date independently. This duplication of effort adds cost, and over time it creates gaps when it comes to security.

So how can ITAM and security teams work together more effectively? What should the joint goals be, and how can you make use of data more efficiently to achieve those objectives?

Starting from a blank slate?

For most of us, the prospect of starting from scratch can be a daunting one - it represents a huge amount of effort, particularly when there might already be existing approaches or frameworks in place that address the issue. However, it’s also important to recognise what economists call sunk costs: time or money already spent that cannot be recovered, and that should not sway decisions about what to do next. Rather than continuing with an existing approach because of past investment when it is not delivering enough value, it can be easier - and more rational - to change.

For ITAM and security teams, this sunk cost idea is just as relevant. For asset lists and inventories, the biggest challenge is not gathering some initial data, but keeping that data up to date. It’s here that thinking differently can add future value: by understanding the security team’s goals, ITAM professionals can make their existing investments work harder.

For security teams, accurate IT asset lists are used to check for potential software vulnerabilities across operating systems, applications, supporting software and networks. Traditionally, this would involve scanning all the devices attached to a network on a regular basis to check for issues, then fixing any that came up. External-facing web applications would be scanned for vulnerabilities that might be exploited by attackers, and fixed. Alongside this, regular patch updates from the likes of Microsoft and Adobe were released on the second Tuesday of every month. This made managing the update process easier over time, as patches could be tested and rolled out when teams were ready.

Today, that same cadence still holds for traditional software and operating systems - Microsoft, Adobe and Oracle all release updates on their regular schedules, and other vendors do the same. However, the volume of updates has changed significantly. Firstly, the number of application components, services and software elements has increased as companies build more and more applications. Secondly, the range of assets and IT infrastructure that companies use has expanded as more users have multiple devices and ways of consuming IT services. Thirdly, new services like cloud computing and software containers make it easier to build applications, run those services and then scale them up or down as required.

Is that the sound of changing processes?

These services may exist outside the traditional vulnerability scanning window, getting turned on for short periods when they are needed and turned off for the rest of the month. If these assets aren’t adequately tracked and understood, security teams run the risk of never applying updates to them. Like the philosophical question of whether a tree falling in the forest makes a sound when no-one is there to hear it, can a security scan be trusted if it runs while the asset it should have checked is switched off?

Software containers represent a new challenge for both IT security and ITAM teams - they can’t be investigated in the way that traditional IT asset inventories would be. There are two reasons for this. The first is that they can be turned on and off based on demand levels, so a ‘point in time’ view is likely to become inaccurate very quickly. The other is that containers are typically built from base images that can be defined and kept secure; however, once those containers are running, they can drift from that original image as developers use their account privileges to add more software components over time.
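
One practical way to see this drift is to compare a running container against the image it started from. As a minimal sketch - assuming the Docker CLI is installed and ‘app’ stands in for a real container name - the following Python wrapper around `docker diff` lists every file added, changed or deleted since the container started:

```python
import subprocess

def container_drift(container: str) -> list[str]:
    """Return filesystem changes made since the container started.

    Wraps `docker diff`, which lists paths that were added (A),
    changed (C) or deleted (D) relative to the container's image.
    """
    result = subprocess.run(
        ["docker", "diff", container],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    # 'app' is a placeholder container name for this example.
    for change in container_drift("app"):
        print(change)
```

A container that still matches its image exactly returns an empty list; anything else is drift worth investigating.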

With businesses of all sizes adopting these new approaches to running IT, the older approaches to handling potential software vulnerabilities have to change. This is where ITAM and security teams can collaborate.

Rather than looking at assets as static, they should be viewed as changing continuously and treated as such. Many software products are moving away from traditional licensing towards delivery as a service or as open source options. For infrastructure, cloud services are now more mutable and ephemeral than before. Consequently, tracking these assets over time involves looking more often, and at more granular views. Ideally, this should be continuous.
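
In practice, continuous tracking means taking inventory snapshots frequently and reporting what changed between them, rather than trusting a monthly point-in-time list. A minimal sketch of that idea - the snapshot format here is hypothetical, a dictionary of asset records keyed by asset ID - might look like this:

```python
from datetime import datetime, timezone

def diff_inventory(previous: dict, current: dict) -> dict:
    """Compare two point-in-time asset snapshots keyed by asset ID."""
    added = current.keys() - previous.keys()
    removed = previous.keys() - current.keys()
    changed = {
        asset_id for asset_id in current.keys() & previous.keys()
        if current[asset_id] != previous[asset_id]
    }
    return {
        "taken_at": datetime.now(timezone.utc).isoformat(),
        "added": sorted(added),
        "removed": sorted(removed),
        "changed": sorted(changed),
    }
```

Run on a schedule, a diff like this surfaces short-lived assets that a monthly scan would miss entirely.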

How fast can you deal with vulnerabilities?

Similarly, security teams need accurate data on all the assets that exist and any potential vulnerabilities they carry. The window between a vulnerability being disclosed and new attacks exploiting it is getting shorter and shorter. The emphasis is therefore on getting full visibility across all IT assets inside and outside the business, from the biggest cloud app through to individual user devices. By being able to analyse these vulnerabilities in real time, prioritise the most critical issues, and then fix them quickly as part of an overall workflow, security teams can be more efficient. At the same time, this asset data can be used to support ITAM initiatives around software licensing compliance, service delivery and asset life cycles.
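
What ‘prioritise the most critical issues’ means will vary by organisation, but a simple sketch makes the idea concrete. The ordering below - internet-facing assets first, then severity - is one illustrative policy, not a standard; the asset names are hypothetical, while the CVE identifiers and CVSS base scores are real:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    cve: str
    cvss: float            # base severity score, 0.0-10.0
    internet_facing: bool  # exposed assets get fixed first

def prioritise(findings: list[Finding]) -> list[Finding]:
    """Order findings so the riskiest combinations surface first."""
    return sorted(
        findings,
        key=lambda f: (f.internet_facing, f.cvss),
        reverse=True,
    )

backlog = prioritise([
    Finding("web-01", "CVE-2021-44228", 10.0, True),
    Finding("db-02", "CVE-2021-3156", 7.8, False),
    Finding("laptop-17", "CVE-2021-34527", 8.8, False),
])
for f in backlog:
    print(f.asset, f.cve, f.cvss)
```

Real products weight many more signals - exploit availability, asset criticality, compensating controls - but the shape of the problem is the same: turn raw findings into an ordered work queue.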

Alongside this, both security and ITAM teams need to be aware of how data is gathered from cloud instances. In many cases, these resources can only be seen by querying a cloud API (application programming interface), as traditional asset agents can’t be installed on them. Equally, passive network scanning is essential to spot unauthorised computers or Internet of Things devices joining the company network. In short, teams should know how to get the data they need from each platform, so they can make an ‘apples-to-apples’ comparison. This involves more work around normalising data so that it can be presented as a single view, but that helps both teams get real insight into what is taking place.
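
As an example of both steps - querying a cloud API and normalising the result - here is a minimal sketch using boto3, the AWS SDK for Python. The target schema and the region are illustrative assumptions; the same record shape would be filled in from other platforms’ APIs or from agent data:

```python
import boto3  # AWS SDK for Python

def cloud_assets(region: str = "eu-west-1") -> list[dict]:
    """Query the EC2 API for instances and normalise each record
    into a simple schema shared with other asset sources."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    assets = []
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                assets.append({
                    "id": instance["InstanceId"],
                    "type": instance["InstanceType"],
                    "state": instance["State"]["Name"],
                    "launched": instance["LaunchTime"].isoformat(),
                    "source": "aws-api",
                })
    return assets
```

Once every source emits the same fields, merging cloud, on-premise and device records into the single view both teams need becomes a matter of concatenating and de-duplicating records.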

This data doesn’t need to be expensive to get and manage. In fact, there are now free services that can feed up-to-date asset data back to teams automatically. The real hurdle is not the data itself but agreeing how processes across security and ITAM should change so the two teams can collaborate in future. However, working together has a multiplier effect when it comes to delivering results. This is particularly important for compliance reporting, where misconfigurations or lapses in conforming to security frameworks can be spotted and fixed.

The most important challenge for security is the speed at which new vulnerabilities have to be dealt with before they become business risk. The biggest issue for ITAM is providing a service back to the company that reduces costs, ensures compliance and stops unnecessary spending. By collaborating on asset data, ITAM and security teams can meet both sets of goals over time.

Marco Rottigni, Chief Technical Security Officer EMEA, Qualys