Data is firmly established as one of the most valuable resources in the world today and is the lifeblood of most organisations. Accordingly, it has also become the most sought-after prize by cyber criminals. Where thieves would once have broken into bank vaults to steal gold and cash, they are now seeking to hack into company databases to access their confidential data.
Accessing a database full of personal information is one of the most reliable paydays in the world of cybercrime. Personally Identifiable Information (PII) is the basis of the underground cyber economy and is constantly bought and sold on the dark web, with healthcare records commanding a particularly high value. Armed with enough personal data, criminals can commit identity fraud or use the information to launch more elaborate and targeted social engineering attacks.
Database and Big Data platforms such as MySQL, MongoDB and Hadoop are also popular targets for ransomware attacks. Attackers know that locking companies out of their own data can quickly cripple an organisation and pressure it into paying the ransom.
Leaving the vault door unlocked
Despite databases being the ultimate target of most cyberattacks, it is common to find that companies have failed to properly secure them. Even firms that have taken effective security precautions in other areas will often overlook the databases themselves and fall victim to easily avoided flaws and vulnerabilities.
Many companies have suffered serious data breaches because they failed to properly secure or configure their databases. Misconfigured databases have frequently led to confidential customer data being inadvertently left exposed online for anyone to find. Criminals commonly use automated bots to trawl through the internet and quickly find and access any databases left unsecured.
Alongside external cybercriminals, poor database security will also leave a company vulnerable to internal threats. Rogue employees can easily access and copy the data from a poorly secured database to either sell to criminals or pass on to rival firms to further their own prospects. Inconsistent enforcement or a lack of strong user policies also increases the chances of a well-intentioned employee accidentally leaving data exposed, particularly when accessing networks remotely.
Creating an effective database security program
Establishing a high level of database security requires commitment from multiple parties across the organisation, and, as in all other areas of security, people and processes are just as important as technology. All elements must be continually reviewed and monitored to ensure best practice becomes a company-wide standard.
To get started, the first step is to assess the current state of the company’s databases. Key activities include identifying every database on the network and conducting policy management, vulnerability management and least-privilege user-rights assessments. This assessment makes it possible to establish a baseline of current database configurations and user privileges, and helps to identify areas that need immediate attention.
Completing a thorough assessment and understanding how critical system elements integrate is absolutely essential for the success of future monitoring efforts that will help keep databases optimised and secure.
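The discovery step described above can be sketched in code. The snippet below is a minimal, illustrative probe for well-known database ports on a list of hosts; the host address and port-to-service map are assumptions for the example, and a real assessment would use a dedicated scanning tool rather than a raw TCP check.

```python
# Hypothetical sketch of the database discovery step: probe hosts for
# well-known database ports. Host list and port map are illustrative.
import socket

DB_PORTS = {
    1433: "Microsoft SQL Server",
    3306: "MySQL",
    5432: "PostgreSQL",
    27017: "MongoDB",
}

def discover_databases(hosts, timeout=0.3):
    """Return (host, port, service) tuples for ports accepting a TCP connection."""
    findings = []
    for host in hosts:
        for port, service in DB_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                # connect_ex returns 0 when the port is open
                if sock.connect_ex((host, port)) == 0:
                    findings.append((host, port, service))
    return findings

# Example: scan a single host (address is illustrative)
print(discover_databases(["127.0.0.1"]))
```

Any database found this way that nobody knew was running is itself a finding worth immediate attention.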
Auditing for compliance
Databases generally play a crucial role in regulatory compliance checks and IT audits since they usually serve as the organisation’s largest storehouse of sensitive data. A wide variety of standards and regulations include databases within their scope, including but not limited to: PCI DSS, FISMA, ASD, SDPA, MTCS and DISA-STIG. The newly established GDPR, with its focus on handling and securing customer data, naturally also requires solid database security management.
Meeting these various standards requires a continuous program of monitoring and auditing all databases on the network after first conducting a baseline audit. The Continuous Diagnostics and Mitigation (CDM) mandate developed by the Department of Homeland Security (DHS) is a particularly strong model to follow for ensuring database vulnerability compliance.
Establishing standards and policies
Well-defined standards and policies are a central pillar of an organisation’s ability to measure its progress against benchmarks and monitor its compliance. Policy management should be a continuous process, and many organisations make the mistake of only reacting to security incidents rather than addressing them proactively in accordance with a standard or policy.
Additionally, most out-of-the-box database installations only have the most obvious security controls enabled, and organisations should not fall into a false sense of security from the in-built policies alone. Another common blind spot is for an organisation to develop a robust policy for protecting how data travels around the network but fail to map those policies all the way back to the database itself.
When defining standards and policies, organisations should be able to account for the frequency of policy updates, the triggers for policy changes, where responsibility for updates lies, and the approval process for implementing any changes.
I recommend reviewing all policies whenever a vulnerability has been patched or the software has been updated to ensure they remain relevant for the new configuration.
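One lightweight way to support such a review is a configuration-drift check: after a patch or upgrade, compare current database settings against the baselined policy values and flag anything that no longer matches. The sketch below is illustrative only; the setting names are example MySQL-style parameters, not a definitive policy.

```python
# Illustrative config-drift check against a baselined policy.
# Setting names and values are example assumptions, not a real standard.

BASELINE = {
    "require_secure_transport": "ON",
    "local_infile": "OFF",
    "default_password_lifetime": "90",
}

def config_drift(current):
    """Return {setting: (expected, actual)} for values differing from baseline."""
    return {k: (v, current.get(k))
            for k, v in BASELINE.items()
            if current.get(k) != v}

current = {"require_secure_transport": "ON",
           "local_infile": "ON",          # drifted after an upgrade
           "default_password_lifetime": "90"}
print(config_drift(current))
```

Running a check like this after every patch turns the review from a manual chore into a quick, repeatable diff against the agreed standard.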
Controlling user access
One of the most common database security issues is a failure to apply the Principle of Least Privilege. Amid frequent organisational and staffing changes, complex user and role-based permission sets, human error and heavy admin workloads, database user accounts are often created with, or left holding, higher privileges than they require. Incidents of malicious or accidental data exposure by insiders are far more likely where poor user access controls exist. Excessive privilege also hands much more power to any criminal who seizes control of an account through stolen credentials or a compromised device.
Knowing who can access what data, and more importantly how they were granted access and by whom, is essential for establishing meaningful controls and properly securing databases. There are many database scanning tools available that can automatically identify users, roles and privileges. Once a baseline has been established, there should be frequent reviews that ensure users still have the appropriate level of access for their roles – particularly when it comes to administrative rights.
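The privilege review described above can be reduced to a simple comparison: each account's granted privileges against the baseline for its role. The sketch below uses invented role names, sample accounts and a flat privilege model purely for illustration; real scanning tools work against the database's own privilege catalogues.

```python
# Hypothetical least-privilege review: flag accounts whose grants
# exceed their role's baseline. Roles and users are sample data.

ROLE_BASELINE = {
    "analyst": {"SELECT"},
    "app_service": {"SELECT", "INSERT", "UPDATE"},
    "dba": {"SELECT", "INSERT", "UPDATE", "DELETE", "GRANT", "DROP"},
}

def find_excess_privileges(users):
    """Return {username: excess_privileges} for accounts over baseline."""
    excess = {}
    for user in users:
        allowed = ROLE_BASELINE.get(user["role"], set())
        extra = set(user["privileges"]) - allowed
        if extra:
            excess[user["name"]] = extra
    return excess

users = [
    {"name": "jsmith", "role": "analyst", "privileges": {"SELECT", "DELETE"}},
    {"name": "app01", "role": "app_service", "privileges": {"SELECT", "INSERT"}},
]
print(find_excess_privileges(users))  # flags jsmith's DELETE grant
```

Run on a schedule, a check like this catches the privilege creep that staffing changes leave behind, and the output gives reviewers a short, concrete list to act on.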
The need for real-time monitoring
Once thorough assessments have been made and standards and policies have been put in place, the final step is to implement Database Activity Monitoring (DAM). Real-time DAM enables security teams to instantly identify potential threats and take action to mitigate them. For example, user sessions can be terminated, or accounts locked down, if policies are violated or other suspicious behaviour is detected that likely signifies a threat.
This ability is particularly important for keeping track of privileged user sessions that can access confidential and mission-critical data. As important as real-time DAM can be, it is only effective when the proper foundations have been laid. Companies that attempt to skip the assessment phase and immediately deploy DAM will likely find themselves inundated with thousands or even millions of recorded events (good and bad), and the security team burdened with weeding through an unnecessarily high number of false positives and negatives.
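The policy-driven alerting at the heart of DAM can be sketched as a set of rules applied to each activity event. The example below is a minimal illustration, not a real product's logic: the event fields, business-hours window and bulk-export threshold are all assumptions made for the sketch.

```python
# Minimal sketch of policy-driven activity monitoring: each event is
# checked against simple rules; violations produce alerts that a real
# DAM product would act on (terminate session, lock account).
# Thresholds and event fields are illustrative assumptions.

BUSINESS_HOURS = range(8, 19)    # 08:00-18:59, assumed policy window
BULK_ROW_THRESHOLD = 10_000      # assumed limit for a single query

def evaluate_event(event):
    """Return a list of policy violations for one database activity event."""
    alerts = []
    if event["hour"] not in BUSINESS_HOURS:
        alerts.append("access outside business hours")
    if event["rows_returned"] > BULK_ROW_THRESHOLD:
        alerts.append("possible bulk data export")
    if event["privileged"] and event["statement"].startswith("DROP"):
        alerts.append("privileged destructive statement")
    return alerts

event = {"hour": 2, "rows_returned": 250_000,
         "privileged": True, "statement": "SELECT * FROM customers"}
print(evaluate_event(event))
```

The earlier assessment phase is what makes rules like these tunable: without a baseline of normal activity, every threshold is a guess and the alert queue fills with noise.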
By taking the time to create a thorough, in-depth database security plan, however, organisations can protect the data that keeps their business functioning and close the security gaps and vulnerabilities that are commonly exploited by criminals looking for a big payday.
Andrew Herlands, VP Global Systems Engineering, Trustwave