Interview: Charting the cloud security landscape

As companies of all shapes and sizes embrace cloud computing, and as hackers and cyber attacks grow ever more prominent, the issue of cloud security has never been more serious.

To shed more light on the current trends in the area of cloud security, we spoke to David Howorth, VP EMEA at Alert Logic, who discussed the recent spate of high-profile hack attacks and the problems enterprises and SMBs are facing.

  1. You recently released your Cloud Security Report. What were the key findings?

There were three key findings from this year’s Cloud Security Report.

  • Top cyberattack methods aimed at cloud deployments grew 45 per cent (application attacks), 36 per cent (suspicious activity) and 27 per cent (brute force attacks) respectively over the previous year, while top attacks aimed at on-premises deployments remained relatively flat. We attribute this increase in cloud attacks to the overall strong adoption of cloud computing platforms: cyber criminals are logically attempting to break into the growing number of applications being deployed in the cloud.
  • The type of cyber-attack perpetrated against a company is determined more by how it interacts with its customers and the size of its online presence than by where its IT infrastructure resides. Understanding this threat profile can help a company determine which type of cyber-attack it is most vulnerable to, as well as the type and size of the security investment required to keep it safe.
  • Understanding the Cyber Kill Chain® can give insight into where cyber criminals are more likely to breach a company’s environment and how they can be stopped. This representation of the attack flow can help organisations target a defence strategy based on the way attackers approach infiltrating their businesses.
  2. How has the type of attacks favoured by hackers changed?

Cyber crime traces its roots to the Morris Worm of 1988, one of the first self-replicating programs, a concept that would spread like wildfire over the following decades. As the internet developed during the late 1990s and into the early 2000s, so did the idea of using it to make money. Email proved to be a key application, and as users embraced email, so did a new phenomenon – the spammer. Spammers were making millions of dollars per month by promoting products of a dubious nature through unsolicited email. As anti-spam systems blacklisted their servers, spammers discovered that they needed large numbers of fresh computers to continue delivering spam to inboxes.

With the malware/spam revolution well under way, innovative minds identified new criminal business opportunities that botnets could provide. In February 2000 'Mafia Boy', a 15-year-old high school student, discovered that if many computers access a website at once, the spike in demand consumes the website's resources, rendering it unable to serve pages. Born more of curiosity than of organised crime, this early form of Denial of Service (DoS) attack targeted websites such as CNN, Yahoo! and eBay, and caused approximately $1.7bn in damages. DoS attacks could disable a website for days at a time, causing financial harm to the fledgling dot com industry. Hence the 21st century equivalent of the protection racket evolved, in which criminals demand payment from website owners and launch denial of service attacks to disable the site unless the ransom is paid.

As legitimate web services grew and flourished, so did criminal services. Towards the late 2000s, criminals discovered that credentials and personal information could be harvested from malware-infected computers. Criminal specialists knew how to monetise this stolen information, but didn't have the specialised skills necessary to write and distribute the malware needed to collect it. This led to the development of underground markets where individuals who could infect computers, or collect stolen information, could meet and sell their services to those who could capitalise on the stolen data. These underground markets allowed cybercrime to become easier, more profitable and more efficient, with the likes of PayPal, Netflix and other types of account information up for sale.

As criminals profited from information stolen by malware, nation states began to invest in the development of espionage by malware, and the era of the Advanced Persistent Threat (APT) was born. State-sponsored teams of hackers could take the time to invest in stealthy and persistent attacks against chosen targets and steal valuable information for geopolitical reasons, or economic gain.

Cyber criminals seek to make money, and as technology evolves they are able to make money in new ways, hide their tracks and remain hidden in the shadows. The 2014 Internet Organised Crime Threat Assessment (iOCTA) highlights that a service-based criminal industry is developing: an increasing number of those operating in the virtual underground are making products and services for use by other criminals – the "crime-as-a-service" business model. Some underground markets specialised in the sale of different types of malware, with those able to create it selling it on to those who could not write malware themselves but wished to build their own botnets. The diffusion of such a sales model allows cyber criminals without considerable technical expertise to operate.

Another form of this crime-as-a-service was ransomware. Spam emails would trick the user into clicking a link, after which the attacker's code would automatically probe the computer for a hole in its protection and use it to spread malware through the machine. The next time the user logged on, they would be presented with a message alerting them that all their personal files had been encrypted, and that the only way to get them back would be to pay a ransom to the cyber criminals. Of course, there was no guarantee that your files would be returned safely even if you did pay.

From Heartbleed and Shellshock to the Target breach and the Sony hack, cyber attacks are daily news, and as technology continues to develop we must remain aware of the lessons from the past and consider how new systems expose us to crime in new ways.

  3. In the report you talk about the Cyber Kill Chain. What is this and how can it help companies stay secure?

Lockheed Martin’s Computer Incident Response Team developed the Cyber Kill Chain® model to describe the different stages of an attack, from initial reconnaissance to objective completion. The Alert Logic Security-as-a-Service solution is designed to identify threats at any point along the Cyber Kill Chain®, and in this report we created a composite organisation to provide more depth about how cybercriminals target an IT infrastructure at each stage.

It’s imperative that organisations approach securing their environments with the mindset of the attacker; this perspective will help uncover the weak spots in any framework and keep organisations one step ahead of attackers.
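The idea of mapping detections onto attack stages can be made concrete with a short sketch. The seven stage names below are from Lockheed Martin's published Cyber Kill Chain® model; the example detections attached to each stage are illustrative assumptions, not Alert Logic's actual rule set.

```python
# The seven Cyber Kill Chain stages (Lockheed Martin's published model),
# each paired with hypothetical example detections for illustration only.
KILL_CHAIN = [
    ("Reconnaissance", "port scans, DNS enumeration"),
    ("Weaponization", "malware built offline - rarely visible to defenders"),
    ("Delivery", "phishing email, drive-by download"),
    ("Exploitation", "exploit triggers a vulnerability on the host"),
    ("Installation", "malware or backdoor installed"),
    ("Command and Control", "beaconing to attacker infrastructure"),
    ("Actions on Objectives", "data exfiltration, lateral movement"),
]

def earliest_detected_stage(detections):
    """Return the earliest kill-chain stage present in `detections`.

    The earlier in the chain an attack is spotted, the cheaper it is
    to stop - which is the point of mapping alerts onto the model.
    """
    for stage, _examples in KILL_CHAIN:
        if stage in detections:
            return stage
    return None

if __name__ == "__main__":
    alerts = {"Command and Control", "Delivery"}
    print(earliest_detected_stage(alerts))  # Delivery
```

The ordering is what matters here: an organisation that only ever detects "Actions on Objectives" is learning about breaches after the damage is done, whereas detections at "Delivery" or "Exploitation" leave room to intervene.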

  4. What do you make of the recent spate of high profile attacks? Where are companies going wrong?

We understand more than ever how cybercriminals infiltrate organisations and where along the Cyber Kill Chain this occurs. There are a number of steps organisations can take to secure their IT infrastructures. Two key areas of focus are understanding the shared security model and knowing your threat profile. Public cloud computing platforms like AWS and Rackspace provide security controls that typically protect the physical, perimeter network and hypervisor layers, but customers carry the responsibility for protecting the applications, data and network infrastructure that sit on top of that cloud platform. Neither the cloud platform's native controls nor traditional, on-premises security tools extend to cover cloud applications, so it is key that you protect them with security tools and services built in, and specifically for, the cloud.

We cannot continue to rely on legacy security tools and techniques in the battle against the modern-day cyber criminals targeting our organisations on a global scale. Fundamentally, it is safer to assume that we will be the target of an attack (and in many cases an advanced threat) and look at the problem from the inside out. Clearly it is important to look at how we can better prevent data breaches and implement more effective tools to identify pre- and post-compromise activity. However, CISOs, CSOs and CEOs should also take the lessons learned from the countless recent data breaches and seek to answer two questions: how well prepared is the organisation in the event a data breach does occur, and how can customer data be better protected should the worst happen?

Historically, organisations have spent a great deal of time deploying best-of-breed, next-generation, industry-leading point solutions across their entire systems, network and application infrastructure. Vendors have invested heavily in bringing solutions to market that are incredible in their ability to detect known attacks, anomalies, suspicious data activity and advanced persistent threats. That said, the challenge has always been that the volume of content these solutions provide can leave customers feeling overwhelmed. The issue is further compounded when different systems are managed through separate portals, spanning host-based security agents, servers, network infrastructure devices, and logs from applications, operating systems, authentication servers and file access servers. Security and administration teams have to dedicate ever more time to analysing the output and making advanced policy decisions about what is malicious and what is benign organisation-wide, under pressure to maintain business availability and with fewer resources to achieve the result.

Clearly our first line of defence against many attacks will be investment in good perimeter and internet security solutions, such as firewalls (with application intelligence), threat management and authentication systems. We also need good host-based monitoring tools that give visibility of all assets for file integrity monitoring, system logs and security logs, as well as effective network control and systems management solutions to ensure hosts, servers and hand-held devices maintain the security and compliance posture required to access the network. How well an organisation can tell whether it has been breached rests largely on how deep its visibility reaches across the IT infrastructure; more critically, it depends on how the data output by the systems it has deployed is audited, analysed and reviewed.

Ultimately, understanding how to detect a breach in your organisation means understanding the anatomy of an attack in the context of the risks to your business. You need to understand where the critical assets in your organisation are, what data they hold, and map this back to previous examples of data breaches against the same kinds of assets. Only then can you truly ensure you put in place the right processes and audit points to detect activity that would point to a breach – or, hopefully, an attempted one that can be stopped before it succeeds. Continual review of those processes and systems, together with up-to-date knowledge of industry threat trends, will ensure you stay one step ahead of attempted data breaches, or are in a position to put in place the tools and processes to contain a breach and limit the damage, should it ever occur.

  5. What different issues do SMBs and enterprise companies face with regards to cloud security?

Companies are vigorously embracing the Cloud. However, all too often organisations deploy Cloud systems without considering security. Organisations, both small and large, need to understand where their Cloud providers’ security responsibilities end and theirs begin.

Consider Cloud systems as similar to the OSI network layer model. Cloud systems contain a series of layers of functionality built on top of each other, each one reliant on the layers underneath. At the base is the physical layer: the cloud system resides in one (or more) physical data centres. A few layers further up sits the hypervisor providing multi-tenancy, and at the summit of the stack resides the application layer of software.

Clearly no cloud provider would expect a customer to enforce security of the physical layer. The provider will not expect or permit their customers to hire a door supervisor to verify the IDs of anyone entering the data centre and prevent access to anyone deemed unworthy. The physical security layer is clearly the responsibility of the provider, but customers should verify that the level of physical security offered meets their needs.

Similarly, Cloud providers are unlikely to verify the security of customer written application layer code since this is clearly the responsibility of the customer rather than the provider. If attackers compromise the customer’s data held in the application layer, it is of limited concern to the Cloud provider.

At some point in the stack, responsibility for providing security shifts from the provider to the customer. Customers need to be clear where this boundary lies so that they can take on the necessary security tasks, and not assume that essential functions such as applying operating system patches or monitoring systems for unauthorised use will be taken care of by their provider.
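The layered split described above can be sketched as a simple lookup. The division below follows the interview's description (provider secures up to the hypervisor; customer secures everything above it); the exact layer names are illustrative, and real boundaries vary by provider and by service model (IaaS vs PaaS vs SaaS).

```python
# Illustrative shared-responsibility map for a typical IaaS deployment.
# Layer names are assumptions for the sketch; consult your provider's
# own shared-responsibility documentation for the authoritative split.
RESPONSIBILITY = {
    "physical data centre": "provider",
    "perimeter network": "provider",
    "hypervisor": "provider",
    "guest OS and patching": "customer",
    "network configuration": "customer",
    "applications": "customer",
    "customer data": "customer",
}

def owner(layer):
    """Return who is responsible for securing a given layer."""
    return RESPONSIBILITY[layer]

if __name__ == "__main__":
    for layer, who in RESPONSIBILITY.items():
        print(f"{layer:22s} -> {who}")
```

The point of writing the boundary down explicitly, even this crudely, is that every layer on the "customer" side needs an owner inside the organisation; anything left unowned (patching is the classic example) is exactly where breaches happen.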

The speed and ease with which Cloud systems can be deployed can lead organisations to overlook security, especially the day-to-day tasks of monitoring for unauthorised access and ensuring that patches are kept up to date. Frequently, the issues of national jurisdiction over Cloud systems and the risks of lawfully mandated data access by nation states become confused with the broader issues of information security and securing Cloud systems against attack. The sad state of affairs is that it is far sexier to worry about data compromise due to a foreign government's judicial oversight than it is to resolve the patching issues and responsibilities that can leave your system open to compromise, handing carte blanche to any hacker with an interest in your data.

  6. What trends do you expect to see in the next 12 months?

2016 is going to be the first year in which people choose cloud for its security benefits. The shift has been happening for some time, and there are plenty of clear reasons why the cloud is more secure – but this is becoming more widely known, and security is starting to become one of the core architectural tenets. Previously, cloud was chosen for its agility; going forward, it will increasingly be chosen for security reasons. The number of attacks on cloud environments is rising – but this doesn't mean they are successful (see the data in the Cloud Security Report). One of the biggest reasons for the rise is simply that there are more things in the cloud worth attacking. We're not just talking about early adopters and experimental cloud deployments anymore; 2016 will be the year of mainstream cloud, which means that attacks will become more serious as well.

News of acquisitions among large enterprise IT players may be surprising now – HP splitting itself apart, Dell acquiring EMC – but this isn't the last of this sort of news we'll see. These surprises will continue: when an IT company can't innovate any more, financial innovation is the next step.

Nobody really compares security spending side by side, but it is obvious that enterprises are overspending on endpoint security products (antivirus, for example, is a multi-billion-dollar industry). Data centre technologies, such as web application firewalls (WAFs), remain a very small market, which means spending is not in line with what is actually effective.

That is going to normalise in 2016. Enterprises will focus more spending on data centres and rethink all of their security decisions. There will be less focus on endpoint security and more focus on web applications and data centre security.
