
Creating enterprise-level security for databases in the cloud


Databases and other data stores hosted across different platforms and providers present an appealing target to threat actors. The sheer volume of information they hold can be turned into vast sums, either by selling it on dark web marketplaces or by exploiting it directly. Depending on the data concerned, exploitation can include using personal and financial information to steal money, commit fraud, or extort victims. Suffering a database breach is also very likely to impact compliance with the GDPR and other regulations such as the upcoming California Consumer Privacy Act (CCPA), potentially leading to huge fines for the organisation on top of the cost of the breach.

As such, databases in enterprises of all sizes frequently come under sustained attack, with intrusions often going undetected for long periods of time. They therefore need to be protected by both reactive and proactive security systems.

This is easier said than done. As database infrastructure has become more complex, so too has securing the information within them. Enterprises now have databases that could be hosted anywhere – on-premise, hybrid, public cloud and private cloud – meaning that a unified view of security, risk and compliance issues is required for ultimate protection. Things are made more complicated by the lack of security standardisation in cloud environments. Amazon, Microsoft and other cloud providers often use very different tools and processes, making it even more difficult to manage when an enterprise operates a multi-cloud set-up.

When looking to secure a database in the cloud, enterprises have to put in place all the security measures they would normally use for an on-premise one. For instance, it is still necessary to know what assets there are, how access is managed, and what data validation and protection is in place. However, many organisations make the mistake of assuming that their cloud provider will take care of all their privacy and security needs, when in reality they must still assume ultimate responsibility for security themselves.

Knowing your assets

As organisations grow, merge or acquire others, their database assets and architectures are likely to expand and become increasingly complex. They could be based anywhere, on any platform, but to ensure seamless business processes these by necessity need to be linked. This can pose a significant security threat, particularly with databases that are based in, or were created in, countries with different local data protection and privacy laws. The threat level is increased if a security team is unaware of how the database has been configured and secured. In the worst-case scenario, the security team will not even be aware that a database exists at all.

These rogue databases offer threat actors an enormous opportunity to steal data or compromise systems and, even worse, provide an easier route deep into a corporate network than other, more protected assets.

Knowing exactly what your assets are and where they are located is crucial for effective database security. Asset monitoring needs to happen down to a granular level and in real time, so that security teams can be alerted straight away to any changes to data or architecture that could indicate a system has been infiltrated.
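One simple way to detect the kind of architectural change described above is to fingerprint each database's schema and compare it against a known-good baseline. The sketch below illustrates this idea with hashed JSON snapshots; the schema shapes and table names are purely illustrative, and a real deployment would pull schema metadata from the database catalogue itself.

```python
import hashlib
import json

def schema_fingerprint(schema: dict) -> str:
    """Hash a canonical JSON dump of a schema snapshot."""
    canonical = json.dumps(schema, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def detect_drift(baseline: dict, current: dict) -> bool:
    """Return True if the schema has changed since the baseline was taken."""
    return schema_fingerprint(baseline) != schema_fingerprint(current)

# Illustrative snapshots: a column appearing unexpectedly is worth an alert.
baseline = {"users": ["id", "email", "role"]}
tampered = {"users": ["id", "email", "role", "shadow_admin"]}

print(detect_drift(baseline, baseline))  # False: nothing has changed
print(detect_drift(baseline, tampered))  # True: a new column appeared
```

Run on a schedule (or triggered by change-data-capture events), a check like this gives security teams the near-real-time alerting the paragraph above calls for.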

Managing access

Controlling who can log in to specific databases, and what privileges they hold, is an essential security basic when users can potentially access data from anywhere. Privileged user access needs to be built upon a robust access-management regime based on role-based rules and privileges.


Operating a user rights management policy of Least Privilege ensures that users can only access the resources and perform the actions needed to carry out their job role. This limits an organisation’s exposure to unauthorised access, either from employees or external threat actors.
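A minimal sketch of what least privilege looks like in code: each role is granted an explicit set of actions, and anything not granted is denied by default. The role names and permission strings here are hypothetical, standing in for whatever grant model the database or access-management layer actually uses.

```python
# Illustrative role-to-permission mapping enforcing least privilege.
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "dba": {"read:reports", "write:schema", "manage:users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Permit only actions explicitly granted to the role; deny by default."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read:reports"))  # True
print(is_allowed("analyst", "write:schema"))  # False: not part of the role
print(is_allowed("unknown", "read:reports"))  # False: unknown roles get nothing
```

The deny-by-default shape of `is_allowed` is the key design choice: exposure is limited to exactly what each role was deliberately given.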

Enforcing Segregation of Duties is a best practice guideline that is often required by government and regulatory bodies. This approach requires the enterprise to demonstrate effective controls for sensitive data and is not only a good way of limiting risk but a useful method for demonstrating compliance.

For such policies to be enacted effectively, a security team needs to have oversight and control of all permissions across its heterogeneous database environment, so it can manage and eliminate excess permissions in a consistent way.

This also needs to be monitored on a regular basis, and ideally in real time. Looking at access logs every 30 days or so may reveal signs of suspicious activity but will leave a large window of time for threat actors to stay undetected. It is also possible for a canny attacker to syphon out security logs and manipulate them to mask their activity before they are checked.

Alongside detecting potential threats, monitoring may also reveal database accounts that have not been used for a long period of time, indicating that they might no longer require access and can have their privileges revoked. This is good practice as the individual concerned might have changed job roles and require access to a different set of data, or none at all. If permissions do not change in line with job roles, certain workers could retain access to whole areas of databases they are no longer entitled to, creating a security issue through overexposed data. Of course, privileges can easily be reinstated if required. Achieving this level of oversight and control through manual user rights assessments can mean upwards of 80 man-hours per database instance. Therefore, enterprises should look to automation.
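The stale-account review described above is straightforward to automate. The sketch below flags accounts whose last login is older than a threshold; the 90-day cut-off and the sample login data are assumptions for illustration, and a real job would read last-login timestamps from the database's audit tables.

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # illustrative threshold for inactivity

def stale_accounts(last_logins: dict, now: datetime) -> list:
    """Return accounts whose last login exceeds the staleness threshold."""
    return sorted(user for user, last in last_logins.items()
                  if now - last > STALE_AFTER)

# Hypothetical audit data for demonstration.
now = datetime(2024, 6, 1)
logins = {
    "alice": datetime(2024, 5, 20),  # active recently
    "bob": datetime(2023, 11, 1),    # inactive for months
}
print(stale_accounts(logins, now))  # ['bob'] -> candidate for revocation
```

Run nightly, a report like this turns an 80-man-hour manual review into a short list of accounts for a human to confirm and revoke.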

Database Activity Monitoring (DAM) can automatically detect user activity that violates policies or otherwise looks suspicious. DAM solutions can automatically apply actions such as terminating user sessions or locking accounts, as well as triggering other scripted actions such as initiating a malware scan. In addition, the solution can be used to instantly notify the security team, which can then investigate and, if necessary, act to prevent any possible threats.
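The automated responses a DAM solution applies can be thought of as a rule table mapping suspicious events to actions. The sketch below is a simplified, hypothetical version of such rules; the event fields, thresholds and response names are all illustrative, not features of any particular DAM product.

```python
# Hypothetical DAM-style rules: map a database event to an automated response.
def evaluate_event(event: dict) -> str:
    """Return the response a policy engine might take for this event."""
    if event.get("rows_read", 0) > 100_000:
        return "terminate_session"    # bulk read looks like data exfiltration
    if event.get("failed_logins", 0) >= 5:
        return "lock_account"         # repeated failures suggest brute force
    if event.get("off_hours") and event.get("privileged"):
        return "alert_security_team"  # privileged activity outside business hours
    return "allow"

print(evaluate_event({"rows_read": 500_000}))                   # terminate_session
print(evaluate_event({"failed_logins": 6}))                     # lock_account
print(evaluate_event({"off_hours": True, "privileged": True}))  # alert_security_team
```

In a real DAM deployment these rules would be policy configuration rather than code, but the shape is the same: match a pattern, respond instantly, and notify the security team for investigation.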

Encryption and data validation

Many cloud companies will provide their customers with the option of running redundant instances of data as a back-up measure, meaning that information isn't lost if a server crashes for whatever reason. While this can be useful, these redundant instances could be on a completely different server from the original, elsewhere in the world. In these instances, it is the enterprise's responsibility to ensure that every database containing a copy of its data has been configured correctly and is being kept secure.

This makes knowing how safe data is more uncertain. To mitigate this, organisations should look at adding greater encryption and granular data controls. As the data is not simply being stored in, accessed from and transmitted across corporate networks, but in multiple networks via different service providers, all information needs to be encrypted at rest, in use and in motion where possible. This means that even if someone does manage to access a database, they will not be able to read the data without a decryption key.

Conclusion

Securing databases is a complex but necessary task even in the simplest of homogeneous environments, requiring a range of security policies and procedures. Databases hosted across platforms, deployed on-premise, in the cloud, or in a hybrid model, pose a challenge that even the largest IT teams can find overwhelming to secure effectively. The situation is made even more complicated by the fact that enterprises increasingly use the services of multiple cloud service providers. The introduction of data security and privacy regulations such as the GDPR and the upcoming CCPA also means that protecting cloud-based databases has become more important than ever.

Enterprises need to take a risk-based approach to their cloud-based assets, assessing the potential threats and impact of a security incident, and balancing this against their investment into security.

Automating the key elements of database management is one of the most effective options for securing the cloud. Vulnerability management, user rights management and activity monitoring all benefit from automation, helping to ensure data is kept secure without overtaxing resource-constrained security teams.

Andrew C. Herlands, CISSP, VP Global Security Architects, Trustwave