How organisations get public cloud security wrong

If, just a few years ago, you had asked a CIO for advice on securing a public cloud, the odds aren’t bad that the response would have been 'just don’t use one'. Today, you’re far more likely to get a nuanced response, the result of increased practical experience with both security and broader governance issues in public clouds.

However, as Oscar Wilde wrote, 'experience is the name that we give to our mistakes'. What follows are some of the security and governance-related mistakes that a variety of architects, IT managers, and consultants have related to me -- mistakes that, in Wilde’s sense, led to experience.

Failure to take a business-based approach to risk

A lot of the pushback against the use of public clouds (and, for that matter, other trends such as employee-owned smartphones and laptops) has focused on the risks -- the 'what could go wrong'. Risks certainly need to be evaluated and perhaps mitigated. For example, an organisation might allow employees to use public cloud resources and personal devices, but only if they use two-factor authentication.

However, risks also need to be considered in a business context. Perhaps using some third-party service does introduce a new level or type of risk, such as the provider going out of business or discontinuing a service. But if the business benefit associated with getting access to, say, better customer analytics, is significant, perhaps the incremental risk is worthwhile. Or not. In any case, the risk has to be viewed in a broader context than a narrow IT-focused one.

Adopting a hardline stance that led to 'Shadow IT'

A widespread focus on risk, rather than cost/benefit, led directly in many cases to what came to be known as 'Shadow IT'. Faced with IT organisations that decided the safest and most secure approach was to simply prohibit (or perhaps 'take time to further study') public clouds and other new aspects of computing, lines of business and individual users just took out their credit cards. They procured services on their own.

This was (and is) not a problem in some cases; with technology so pervasive within modern businesses, it’s neither practical nor beneficial for IT to be involved in every technology decision. However, at the same time, IT can play a valuable role in establishing best practices for security and evaluating third-party solutions. Those benefits go away when decisions are effectively being hidden from the IT organisation.

Unrealistic expectations for on-premise information security

Much of the resistance to public clouds seems to have come from comparing them to what was largely a strawman -- namely, the on-premise IT infrastructure that never had a misconfigured firewall, was never accessed by a rogue employee, and was always promptly patched with the latest security updates. Certainly, some IT organisations run a tight ship. Others not so much (especially smaller organisations lacking specialised security expertise). However, one doesn’t need to read too many headlines before coming across examples of on-premise data breaches. Especially with the increase in laws requiring that customer data breaches be disclosed, it’s clear that breaches are common -- no matter where the computing is hosted.

None of this is to say that one shouldn’t do due diligence with respect to the processes, certifications, and track record of public cloud providers. However, that due diligence needs to take into account that perfection isn’t a reality just about anywhere.

Failing to apply existing best practices

Once organisations adopt public clouds for at least some of their workloads, some then go on to make what is effectively the opposite mistake. Having decided that public clouds are acceptable, they assume the provider handles aspects of security that, in fact, remain under their own control and therefore their own responsibility. (When discussing public cloud providers, there’s the idea of a 'shared responsibility model' whereby, depending upon the type of cloud service, the provider is responsible for certain aspects of security while the user retains responsibility for others.)

For example, in the case of Infrastructure-as-a-Service, the user provides and maintains the operating system images running in the cloud. This means that the user needs to apply the same best practices around obtaining the software from trusted sources, keeping it updated and patched, monitoring it for vulnerabilities, and operating it in a secure manner that they’d use in their own datacentre.
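One of those practices -- obtaining software from trusted sources -- can be sketched as a simple verification step in a deployment pipeline. This is a minimal, hypothetical illustration (the file names are placeholders, and a stand-in artifact is generated locally so the script is self-contained), not any particular provider's tooling:

```shell
# Hypothetical sketch: verify a downloaded OS image against its published
# checksum before importing it into the cloud.

# Stand-in artifact; in practice this would be downloaded from the vendor.
echo "example image contents" > image.qcow2

# The checksum file would normally be published by the vendor, ideally
# distributed over a signed channel; generated locally here for illustration.
sha256sum image.qcow2 > image.qcow2.sha256

# Fail fast if the artifact does not match the published checksum.
if sha256sum -c image.qcow2.sha256; then
    echo "checksum OK: safe to import"
else
    echo "checksum mismatch: do not deploy" >&2
    exit 1
fi
```

Production pipelines would typically go further, for example checking a GPG signature on the checksum file itself, but the principle is the same as in the datacentre: don’t run what you can’t verify.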

Lack of a comprehensive management strategy

Historically, IT built infrastructure and wrote applications to run on that infrastructure. With public clouds and other third-party services, IT has been forced to transform into a broader business enablement role. This hasn’t always been an easy transition. It means taking a far more multi-faceted approach to delivering and managing a broad set of services in partnership with the lines of business.

From a security and governance perspective, this has often led to a lack of consistent policy over sharing data with third-party services and over where data can be stored. It’s led to a fragmentation of identity services and access controls. It’s led to the inconsistent application of best practices such as described above.

IT organisations are addressing some of these issues with specific technologies, such as cloud management platforms, single sign-on, and identity management. However, dealing with this changing environment is also driving organisational changes such as the creation of cross-functional teams that include both IT and business owners. And that’s perhaps the most important message. Adapting to hybrid and public clouds will often require some specific practices, processes, and technologies. But it also requires an understanding of how IT, and its relationship to the rest of the business, are changing.

Gordon Haff, cloud strategy, Red Hat