When discussing cybercriminals today, the security industry tends to focus on the danger posed by elite threat actors. We paint a picture of a highly skilled, innovative, and determined individual who can discover hidden vulnerabilities and develop extremely advanced malware tools and attack techniques. These shadowy figures will relentlessly pursue their quarry for months if necessary, and in the end they always find a way to execute a breach, regardless of their target’s defensive efforts.
There are, of course, many such individuals out there in the world, and they often work together in organised gangs or with the broader cybercriminal community to spread their expertise. However, the truth is that most organisations are unlikely to encounter one of these ruthless elites. If a breach occurs, it will likely be at the hands of an opportunist who is merely pursuing the path of least resistance.
Like any other criminal, the average cyberattacker is motivated by profit, which means they will look for the fastest and easiest route to their target. Unfortunately, enterprises seem all too happy to accommodate them, with most of the significant breaches in recent years occurring due to fundamental security mistakes. In particular, misconfiguration is one of the most common security issues that provides a path of least resistance for attackers.
Why are misconfigurations so common?
A misconfiguration can roughly be defined as anything that is “overprivileged.” This applies to user account permissions as well as network service accessibility, and it usually occurs when organisations have failed to follow the principle of least privilege during software deployments.
Least privilege means restricting a user’s access to a resource to the minimum level required for their job role. Following this principle will go a long way towards mitigating the risk of a data breach: it both prevents rogue insiders from abusing their privileges and makes it much harder for intruders to move laterally and access critical or sensitive resources.
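The deny-by-default logic behind least privilege can be sketched in a few lines. This is a toy illustration only; the role names, resources, and the rule that “write” implies “read” are all assumptions for the example, not taken from any real product:

```python
# Minimal sketch of a deny-by-default, least-privilege access check.
# Roles, resources, and actions below are hypothetical examples.

ROLE_PERMISSIONS = {
    "hr_analyst": {"hr_records": "read"},
    "hr_manager": {"hr_records": "write"},
    "developer":  {"source_code": "write"},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Grant access only when the role holds an explicit permission
    covering the requested action; everything else is denied."""
    granted = ROLE_PERMISSIONS.get(role, {}).get(resource)
    if granted is None:
        return False  # no explicit grant -> deny by default
    # In this toy model, "write" implies "read"; nothing else is implied.
    return granted == action or (granted == "write" and action == "read")

# A developer has no grant on HR records, so the default answer is no.
print(is_allowed("developer", "hr_records", "read"))   # False
print(is_allowed("hr_manager", "hr_records", "read"))  # True
```

The key design point is the direction of the default: an unknown role or an unlisted resource yields a denial, so forgetting to configure something fails safe rather than open.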
However, this approach is rarely the default setting for a software service, and we find that organisations seldom apply the principle after the fact. Enterprises should ask their software vendors whether their products were securely developed and request post-installation hardening guides that make it easy to restrict access.
This issue has been exacerbated by a common tendency to buy new tools rather than securing or hardening existing ones. Microsoft Windows, for example, is an enterprise-class operating system with built-in security features. Correctly configured, it provides everything needed to create an environment that makes an attacker’s job difficult. In most cases, however, IT and security teams opt to purchase a tool to fix the problem instead, seeing this as the faster and more efficient option.
Why is this a problem?
This trend can create security issues for a couple of reasons. First of all, it is common to find that software applications come out of the box with all of their features turned on and accessible to the network by default. As a result, the onus is on the enterprise to go through and turn off any features they don’t want. Ideally, they should be doing this with the least privilege approach - if a feature is not being used, it should be disabled.
However, this seldom happens with any degree of thoroughness, and most networks are littered with software applications that have multiple unused features active. This is particularly problematic when the software becomes outdated, as these legacy features and protocols provide an easy attack path. Whenever we carry out a red-teaming exercise, these protocols are among the first things we look for, and criminals do the same.
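The audit described above reduces to a simple set difference: anything enabled on a host but not actually required is unnecessary attack surface. The feature names below are illustrative assumptions; in practice the inputs would come from a configuration scan:

```python
# Hedged sketch: flag enabled-but-unneeded features for disablement.
# Both lists are hypothetical examples, not output from a real scanner.

enabled_features = {"smbv1", "smbv3", "http", "telnet", "llmnr"}
required_features = {"smbv3", "http"}

def least_privilege_gap(enabled: set, required: set) -> set:
    """Return everything enabled but not required -- the features a
    least-privilege configuration would turn off."""
    return enabled - required

to_disable = least_privilege_gap(enabled_features, required_features)
print(sorted(to_disable))  # ['llmnr', 'smbv1', 'telnet']
```

Run periodically rather than once, a check like this catches features that creep back on after upgrades or reinstalls.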
Ironically, the opposite is often true of security products, which typically ship with their more advanced features disabled. Security vendors often do this so that the software works out of the box without breaking the environment. However, if the enterprise IT team does not take the initiative to go through all the settings, it will leave the organisation vulnerable.
In both cases, implementing new software without altering the default settings means that not only will the operating system itself not be configured appropriately, but the tools installed over the top will not be hardened. We have often encountered organisations that have attempted to protect themselves by running fully-patched operating systems, but still left themselves vulnerable by running third-party software that can be used against them.
Closing the path of least resistance
“Not if, but when” has become something of a mantra in the security industry when it comes to discussing the possibility of a breach occurring. A combination of innovation and dogged determination means that it is all but impossible to keep out the most elite threat actors.
However, the good news is that these implacable individuals are rare, and the average enterprise is more than capable of defending itself against the common cybercriminals that will come calling. The vast majority of these opportunists can be stopped in their tracks by addressing fundamental issues like misconfiguration.
There are two main areas that an enterprise should focus on to close the attack path presented by misconfiguration:
First, when there is a need to purchase new software, the organisation should have an in-depth discussion with the vendor. Alongside the obvious evaluation of the software’s features, the vendor should be asked to demonstrate its security credentials – did it develop the product securely, can it provide a hardening guide, and so on. IT teams should ask specifically about the product’s installation guide and aim for a balance between security, functionality, and usability.
Second, an organisation’s workforce is one of its strongest security allies. These people know the company and its environment inside out. By investing in the right training, the IT team will be able to unlock the full potential of the tools the company already has. This will help to ensure that the operating system and existing software are properly configured and hardened, and in many cases it will remove the need to purchase further solutions altogether. This requires a dedicated, continuous approach to training and development, which can take some time. However, spending the time to develop personnel is always a better investment than dropping money on expensive new software that the company may not even need.
By turning these two activities into standard ongoing practices, an organisation can deny opportunistic attackers an easy path of least resistance, drastically reducing the chances of a breach occurring.
John Cartrett, Director of Americas at SpiderLabs, Trustwave