It’s still amazing to me that each one of us is a few clicks away from starting a cluster of servers ready to process data at any scale.
Not so long ago, we needed to buy hardware: CPUs, memory, networking, and storage. It took considerable effort to set up our data centres and connect the devices to our networks.
Now, even large, traditional organisations that already run huge server farms are taking advantage of the simplicity and scalability of cloud technologies.
But what about security in cloud-native environments? Infrastructure? Application?
The business says infrastructure
No one starts to build a business by ordering machines. We define what we want to do, develop a system, and deploy it. We don’t really care about the brand printed on the server blade. Our systems have to run; they need to be reliable, usable, and responsive – and secure.
Every IaaS vendor – be it AWS, Google, Microsoft, or someone else – offers infrastructure security. By using their infrastructure, a business delegates a considerable amount of its security responsibilities to the cloud vendor. At this point in time, businesses assume that the work done by AWS, Google, and Microsoft is more secure than what they could do on their own.
A perspective on infrastructure
Let’s look at the layered model of modern computing.
Cloud infrastructure services (IaaS) provide the virtual machine: memory, storage, processors, and networking. Higher-level services provide the operating system, orchestration, and object stores.
Security features of the infrastructure can only prevent attacks from the layers below them. For example, if you choose Amazon Elastic Block Store (EBS) encryption, your data is encrypted at the virtualisation layer, between the OS and the hardware. If attackers break into Amazon’s data centre, steal the disk, take it home, and attach it to their own computer, they will see only encrypted data.
If attackers breach the same virtual machine remotely, however, they can open the files on the same EBS volume and read the data transparently, just as the legitimate application does, because the virtualisation layer has no way of telling who is trying to read the information.
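This transparency of below-the-OS encryption can be sketched in a few lines of Python. This is a toy model, not the real EBS mechanism: a deliberately weak XOR cipher stands in for the hypervisor’s block encryption, and the class and key names are made up for illustration.

```python
# Toy model of encryption below the OS layer (NOT the real EBS mechanism):
# a weak XOR cipher stands in for the hypervisor's block encryption.

KEY = bytes(range(1, 33))  # disk-encryption key held by the virtualisation layer


def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


class EncryptedBlockDevice:
    """Stands in for the virtualisation layer that encrypts blocks at rest."""

    def __init__(self) -> None:
        self.raw_storage = b""  # what a stolen physical disk would contain

    def write(self, data: bytes) -> None:  # called from the OS layer above
        self.raw_storage = _xor(data, KEY)

    def read(self) -> bytes:  # decrypts transparently for ANY caller above
        return _xor(self.raw_storage, KEY)


disk = EncryptedBlockDevice()
disk.write(b"customer records")

print(disk.raw_storage == b"customer records")  # False: a stolen disk shows only ciphertext
print(disk.read() == b"customer records")       # True: any reader above the layer gets plaintext
```

The point of the sketch is the last line: `read()` decrypts for whoever calls it, legitimate application and remote attacker alike.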
The same applies to other infrastructure-level security features, like firewalls. If I have services A and B, where B is a client of A, I can define firewall rules that restrict access to the machine running A so that only the B machine can reach it. But an attacker who breaks into machine B then has easy access to A.
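The limitation can be illustrated with a toy allow-list check in Python (the IP addresses and service names are hypothetical): the firewall decides based only on the packet’s source, so traffic from a compromised B is indistinguishable from legitimate B traffic.

```python
# Toy model of a network-layer firewall rule: "only B may talk to A".
# Addresses and service names are made up for illustration.

ALLOWED_SOURCES_FOR_A = {"10.0.0.2"}  # the machine running service B


def firewall_allows(source_ip: str, dest_service: str) -> bool:
    # The firewall sees only the source address, never the caller's intent.
    if dest_service == "A":
        return source_ip in ALLOWED_SOURCES_FOR_A
    return True


print(firewall_allows("10.0.0.99", "A"))  # False: unknown host is blocked
print(firewall_allows("10.0.0.2", "A"))   # True: legitimate service B
# An attacker who has taken over machine B sends from the same address,
# so the very same rule lets the attack through:
print(firewall_allows("10.0.0.2", "A"))   # True: compromised B as well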
In general, if an attack originates above the layer of protection, the protection isn’t effective. Given that attacks mostly come from the direction of the application layer, infrastructure-level protection provides only partial security.
While the infrastructure can limit application-level activities to prevent unwanted behaviour, the resulting controls are very tight and very expensive to maintain. This means the perimeter is either too wide to provide enough security or too narrow to be maintainable in the cloud-native world.
The myth of real application security
If applications could secure themselves, it would be a big step toward complete cloud-native security. Instead of enabling applications to protect themselves, however, the industry has invented many infrastructure security features to work around the problem.
Furthermore, self-protecting applications are hard to configure and maintain, so their security levels are all over the place. In this type of environment, true application security is very hard to accomplish because applications come from a variety of vendors, and their versions vary as well.
The ineffective case of SSL/TLS
This security protocol, the de facto industry standard for protecting TCP connections (sorry, SSH), was invented in the ‘90s. While its design is exemplary, what’s important for our discussion is that TLS connections were designed to be created between a browser application and web server software. TLS is not an infrastructure feature; it isn’t even a feature of the network drivers. It is a pure application-level feature, which means that, ideally, only the application can access the data sent over the network.
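To make the “pure application-level feature” point concrete, here is what the client side looks like with Python’s standard ssl module: the application itself creates the TLS context and wraps the raw socket, so plaintext exists only inside the process. The fetch helper is illustrative, not something the protocol mandates.

```python
import socket
import ssl

# The TLS machinery lives entirely in application code: the application
# builds the context, wraps the socket, and alone sees the plaintext.
context = ssl.create_default_context()            # certificate validation enabled
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse obsolete protocol versions


def fetch_root(host: str) -> bytes:
    """Illustrative helper: a hand-rolled HTTPS GET at the socket level."""
    with socket.create_connection((host, 443)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            request = (b"GET / HTTP/1.1\r\nHost: " + host.encode()
                       + b"\r\nConnection: close\r\n\r\n")
            tls.sendall(request)   # encrypted on the wire
            return tls.recv(4096)  # plaintext only inside this process
```

Nothing below the application – not the OS, not the network driver – participates in the encryption here; that is exactly why infrastructure layers cannot offer it natively.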
As time passed, server-side TLS products evolved, like RSA’s TLS server termination hardware. TLS termination has become a common practice: the TLS connection arrives at a reverse-proxy software or hardware whose only job is to strip the protection from the connection and forward it, unprotected, to the right web server.
Why do it?
Terminating TLS early is less secure, but maintaining TLS certificates and private keys across the whole server park is hard. When it became clear that internal service-to-service communications must be protected as well, different cloud vendors gave different answers – the same infrastructure security problem we discussed. Cloud-independent solutions like Istio and other sidecar approaches put an extra container next to the protected application to perform the TLS termination, just as was done with web servers – but it isn’t effective.
TLS has been used so sub-optimally because it is hard to configure and maintain in applications. TLS requires constant reconfiguration (certificate renewals) and key protection (private keys whose theft compromises the entire TLS deployment). Every application is configured a little differently, which makes maintenance difficult; some applications, of course, do not support TLS at all.
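The certificate-renewal chore, for instance, usually boils down to watching each certificate’s notAfter date. A minimal sketch with the standard library (the date strings are made-up examples in the format `ssl.SSLSocket.getpeercert()` returns):

```python
import ssl
import time


def days_until_expiry(not_after, now=None):
    """How many days remain before a certificate's notAfter date.

    `not_after` uses the format found in ssl.SSLSocket.getpeercert(),
    e.g. "Jun  1 12:00:00 2030 GMT".
    """
    expires = ssl.cert_time_to_seconds(not_after)  # stdlib parser for cert dates
    reference = time.time() if now is None else now
    return (expires - reference) / 86400


# Flag certificates that need renewal within 30 days (example date is fictitious):
if days_until_expiry("Jun  1 12:00:00 2030 GMT") < 30:
    print("renew now")
```

Multiply this small chore by every certificate, every key store, and every vendor-specific configuration format in a fleet, and the maintenance burden the paragraph above describes becomes clear.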
Of course, this simple example of TLS highlights the operational problems with putting broad security features into the application. Moreover, business and application development are focused on functionality; security is secondary, if considered at all.
What is the real solution?
Business-driven thinking pushes security into the infrastructure; it should be there out of the box. In many cases it is – but security in the infrastructure is limited. The infrastructure-focused approach to application security isn’t working either.
The answer – security has to be at the application level but not part of the application.
Ben Hirschberg, VP of R&D and Co-founder, Cyber Armor