
Ensuring immutability in any cloud

(Image credit: Rawpixel / Shutterstock)

With root privilege on a Linux VM, a sophisticated attacker can defeat agent-based defences and embed themselves deep in the OS, avoiding detection for weeks or months. Security teams understand this deficiency, so they deploy alerting and remediation tools to help discover malware that evades the IDS/IPS, firewall, and signature-based defences on the perimeter. Unfortunately, these tools are inherently reactive to malicious activity that has already occurred, leaving a window of insecurity between the discovery of a new exploit and the moment it is patched. On average, this window is a little over two months for each vulnerability found. The severe impact that a single vulnerability can have on your business demands a more proactive way to secure your infrastructure: one that not only protects you during this window but is also itself impervious to the attack.

Is malware a big problem in the data-centre or is it mostly on the endpoints (laptops/desktops/mobile)?

Malware and exploits go where the money is: almost 73 per cent of attacks are financially motivated (Verizon report on data breaches). Servers tend to hold valuable data, or sit within reach of other systems that do, and this makes them a target. In particular, Linux malware has become prevalent in the threat landscape, and it will pose a greater risk as organisations move workloads to the cloud. Just last year, attacks on Linux servers in the enterprise data-centre grew 300 per cent. Recent news further exemplifies this trend, as multiple large enterprises storing personal and sensitive data have had their servers compromised and data stolen right out from under their watch (or lack thereof). This has been eye-opening not just for IT, but also for attackers, who now know that such sensitive data is being hosted insecurely.

What are the common ways that data-centre servers get compromised?

Unfortunately, one of the most common methods still involves sloppy privilege and credential management. This is a problem with several solutions already in the market that give IT a way to ensure federated access to systems, multi-factor authentication, and centralised management. This threat category should shrink over time as IT adopts these technologies and they become more commonplace and easier to use.

The other, more interesting way is when hackers exploit application vulnerabilities. This is a harder problem for security teams to solve because there will always be new vulnerabilities from bugs in the application and/or third-party software running on the server. These vulnerabilities are both difficult to find and difficult to patch. On average, it takes organisations 120 days to patch new vulnerabilities, leaving the application and infrastructure exposed for that period. To make matters worse, after 60 days, the probability of that vulnerability being exploited is 90 per cent. This means malicious actors have ample time to create exploits for these vulnerabilities, use them to compromise an application, and gain persistence on a server.

Why is it hard to patch vulnerabilities when they are found?

It is a combination of technology, process, and perceived risk. First, application developers have to fix the bug, or third-party vendors have to release a fix, which takes time. Furthermore, patching a revenue-generating or mission-critical application in production requires cooperation and coordination across different organisations. It can mean trading known revenue against the unknown possibility of a breach. In the case of the recent Equifax breach, the vulnerability was known and the fix had been released, but it was not fully patched in production, resulting in the loss of 143 million personal records. Ultimately, enterprise security needs servers that can defend themselves from malicious code without requiring running systems to be patched.

What is the state of detection/prevention technologies today?

Security technologies have evolved in layers: from firewalls that enforce which traffic is allowed in and out of the data-centre, to network IPS/IDS devices that detect intrusions by analysing network traffic behind the perimeter, to host IPS/IDS that detect intrusions on the server by analysing user sessions, log files, and the like, to WAFs that look for SQL injection attacks on web-facing applications.

Current security solutions can be divided into signature-based and behaviour-based technologies. Signature-based technologies look for specific patterns to identify an attack. Anti-virus software and WAFs that look for specific SQL patterns are examples of this kind of technology. The drawback is that these technologies do not protect against zero-day vulnerabilities (bugs in code that are unknown or not publicly disclosed) and cannot prevent their exploitation. These solutions are particularly insufficient considering that an average of 40 new CVEs (Common Vulnerabilities and Exposures) are found every day. The other class of technologies tries to detect anomalous behaviour of an application that might indicate a compromise. These technologies are early in their maturity and can be noisy, generating many false positives depending on how the application behaves.
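The blind spot of signature-based detection can be made concrete with a toy example. The patterns below are hypothetical, heavily simplified stand-ins for a real WAF rule set; they catch well-known SQL injection shapes but, by construction, say nothing about a payload no one has seen before:

```python
import re

# Toy SQL-injection signatures, in the spirit of a WAF rule set.
# Real rule sets are far larger; these three patterns are illustrative assumptions.
SQLI_SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # UNION-based injection
    re.compile(r"(?i)\bor\b\s+1\s*=\s*1"),      # classic tautology
    re.compile(r"(?i);\s*drop\s+table\b"),      # stacked destructive query
]

def matches_signature(request_body: str) -> bool:
    """Return True if the request matches any known attack pattern."""
    return any(sig.search(request_body) for sig in SQLI_SIGNATURES)

# A known attack shape is caught...
print(matches_signature("id=1 OR 1=1"))                 # True
# ...but a novel (zero-day) payload with no matching signature slips through.
print(matches_signature("id=1; exec novel_payload()"))  # False
```

The second call returning False is the whole problem: until a signature exists, the attack is invisible to this class of defence.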

An interesting new approach is to assume that a server will be compromised. Instead of focusing on preventing the initial compromise, focus on preventing everything an attacker can do after the compromise occurs (exfiltration commands, encryption, and/or establishing persistence). It is during these latter stages that your server is defenceless to a sophisticated attacker, which explains why security solutions have historically focused on stopping attacks from ever getting to that point. The new approach is security controls that are separate from the OS, which is why we built our security software to run under the OS like a hypervisor and protect it from the outside. This means our protections cannot be disabled, even with root access or admin privileges, making it the ideal platform for preventing new threats.

What is a good strategy to use when defending against zero day exploits?

Defence in depth and immutability are core tenets of a robust security strategy. First, defence in depth refers to deploying multiple layers of defence to detect threats and prevent data exfiltration. The first layer is to implement least privilege for server, network and data access; this reduces your attack surface as well as the 'blast radius' in case of an attack. For enterprises on a single cloud, this can be accomplished using that cloud's IAM, but in the case of hybrid cloud deployments, Bracket provides controls that work consistently for both on-premise and cloud infrastructure. The next layer is detecting and preventing known attacks using signature-based technologies like WAFs and IPS/IDS; these technologies are typically required for compliance. The final layer is to prevent unknown exploits. Behaviour-based detection mechanisms sit in this layer, as does Bracket's technology, which is uniquely positioned to prevent kernel-level exploits that are otherwise invisible to your OS and its host-based defences.
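The least-privilege layer boils down to deny-by-default: a role may perform only the actions explicitly granted to it. A minimal sketch of that idea follows; the role and action names are hypothetical and stand in for whatever your IAM vocabulary actually uses:

```python
# Deny-by-default access check. Roles and actions are hypothetical examples,
# not any particular cloud's IAM schema.
ROLE_PERMISSIONS = {
    "web-server": {"read:static-assets", "connect:app-tier"},
    "app-server": {"read:db", "write:db", "connect:cache"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant access only if the action is explicitly listed for the role."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("web-server", "connect:app-tier"))  # True: explicitly granted
print(is_allowed("web-server", "write:db"))          # False: never granted, so denied
```

Because unknown roles fall back to the empty set, anything not written down is denied, which is what keeps the blast radius small when a single role is compromised.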

The other half of preventing zero-day exploits involves implementing immutable infrastructure on the host itself. This is a paradigm where infrastructure, once deployed, is not patched or modified in place, but is instead rebuilt and redeployed with the new changes. The big benefit of this approach is that it prevents persistence of malware by ensuring that executables on disk cannot be changed, no additional network ports can be opened, and data-access permissions cannot be modified once a server is running. Ultimately, access to production systems is shut down and server activity is monitored externally. Immutable servers reduce the number of attack vectors by ensuring a locked-down environment that cannot change. In this model, a change in behaviour is highlighted more easily and can be treated as an indicator of compromise. Because Bracket's technology is deployed outside the context of the guest OS, it is well positioned to enforce and prove immutability.
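One simple way to check the "nothing on disk has changed" property is to fingerprint files at deploy time and compare against the running system. This is a sketch of that drift check, not Bracket's actual mechanism; the file paths and hashes in the demo are made up:

```python
import hashlib
from pathlib import Path

def fingerprint(paths):
    """Map each file path to the SHA-256 of its contents (taken at deploy time)."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def detect_drift(baseline, current):
    """Return files that changed, appeared, or disappeared since the baseline."""
    added_or_removed = set(baseline) ^ set(current)
    modified = {p for p in set(baseline) & set(current) if baseline[p] != current[p]}
    return sorted(added_or_removed | modified)

# Hypothetical fingerprints: a config file was modified and an implant was dropped.
baseline = {"/usr/bin/app": "abc123", "/etc/app.conf": "def456"}
current  = {"/usr/bin/app": "abc123", "/etc/app.conf": "zzz999", "/tmp/implant": "666"}
print(detect_drift(baseline, current))  # ['/etc/app.conf', '/tmp/implant']
```

On a truly immutable server the drift list should always be empty, so any non-empty result is itself an indicator of compromise.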

How can you ensure that a server is immutable or locked and nothing has changed?

There are ways to do this using open-source tools like SELinux and AppArmor that ship with Linux distros, but these tools come with major usability concerns that limit their appeal. The first is that they have to be meticulously configured at boot time to match the application running in the OS. Unfortunately, this information is not always known when the image is built, and a standard image is often used across a variety of applications. As a result, most deployments of these technologies run strictly in detection mode with wide privileges so as not to impact the business. To make these controls usable, they need to be simplified, automated, and dynamically configured based on what is running in the OS.

The second problem is that, in the case of an OS vulnerability, these controls themselves can be overridden by exploits that escalate privileges to root. This drives the need for security controls that cannot be overridden by malware, mistakes, or even malicious insiders. Bracket's solution provides controls that are dynamically configured to lock down the OS and cannot be disabled or turned off even with root access. This gives IT/Ops teams assurance that malware cannot gain persistence on the server.

How do I get visibility if I have locked my servers down?

Visibility can be achieved with good monitoring and logging techniques. Without appropriate monitoring, the success of immutable infrastructure is at risk: if a server is locked down, IT/Ops can be blind to malicious activity going on inside it. Tools that collect data on running processes, bound ports, flows between processes, and modified files provide the raw material for spotting anything anomalous. The data by itself is noisy unless it is contextualised and processed. If servers are contextualised and the data is processed in that context, it can reveal interesting findings. For example, if all your web servers connect to the same set of ports/services and one server has an additional connection to a new port, that could highlight a problem which would otherwise go unnoticed.
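The web-server example above can be sketched as a peer-group comparison: ports that a majority of the fleet uses form the baseline, and anything outside it is flagged. Host names and port sets here are hypothetical:

```python
from collections import Counter

def baseline_ports(fleet):
    """Ports used by a majority of servers in the same role form the baseline."""
    counts = Counter(port for ports in fleet.values() for port in ports)
    majority = len(fleet) / 2
    return {port for port, n in counts.items() if n > majority}

def anomalies(fleet):
    """Per server, any observed ports outside the fleet-wide baseline."""
    base = baseline_ports(fleet)
    return {host: ports - base for host, ports in fleet.items() if ports - base}

# Hypothetical fleet of identically-configured web servers.
web_fleet = {
    "web-1": {443, 8080},
    "web-2": {443, 8080},
    "web-3": {443, 8080, 4444},  # unexpected extra port: worth investigating
}
print(anomalies(web_fleet))  # {'web-3': {4444}}
```

This only works because the fleet is contextualised by role; comparing a web server's ports against a database server's would generate exactly the kind of noise the article warns about.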

Apart from activity on the server, any changes to configuration should be logged and reviewed. If SSH is opened, or policy permits a new port to be opened, these changes should always be logged, contextualised, and associated with a reference to change-management software (a ticket ID, for instance).
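A minimal sketch of such contextualised change logging, assuming a simple JSON log line per change; the field names and the "CHG-…" ticket format are made up for illustration:

```python
import json
import time

def log_change(host, change, ticket=None):
    """Record a configuration change; flag it for review if it has no
    change-management reference (ticket ID)."""
    entry = {
        "ts": time.time(),
        "host": host,
        "change": change,
        "ticket": ticket,
        "needs_review": ticket is None,  # no ticket: treat as unauthorised until reviewed
    }
    print(json.dumps(entry))  # in practice this would ship to a central log store
    return entry

authorised = log_change("web-1", "open port 22/tcp", ticket="CHG-1042")
unreviewed = log_change("web-1", "open port 4444/tcp")  # flagged: no ticket
```

Tying every change to a ticket turns the log from a passive record into a cross-check: any change without a matching ticket is, by definition, outside the change-management process.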

What does the future look like for defences against these attacks?

We have seen these attacks evolve from being a mere nuisance to causing billions in damages, and they are now even impacting elections. The attacks are going to get more sophisticated and will stay a step ahead of the security defences that are widely deployed. Based on recent data breaches and exposures, enterprises and government organisations have to take a hard look at the security practices and controls they have in place to protect personal and sensitive data. It is not just a technology issue but also a process issue. The nature of zero-days, or unknown exploits, requires multiple layers of defence that have to be breached before an attacker reaches the data. Machine learning and AI approaches will mature to a point where they can understand application behaviour better and be more accurate at identifying bad actors in the infrastructure. Public clouds are introducing new abstractions, and it remains to be seen whether these abstractions will make applications more secure or make attacks more invisible.

Vinay Wagh, Head of Product, Bracket Computing

Vinay Wagh is the Head of Product at Bracket Computing.