1. If the traditional perimeter is disappearing, is there a sense that security teams are having to give up control over the devices and people that can access their networks?
It may be daunting for organisations to think about the erosion of the external perimeter, but this is definitely a trend we're seeing. It's a consequence of the big shift that's been taking place in the way we work and how we access data. In the cloud-driven era, data can be stored outside of corporate walls, a more mobile workforce is working remotely and most users will have two or three devices – all of which is great for productivity and more flexible ways of working.
However, this adds layers of complexity to an organisation's security posture. In the past, firewalls were relied on to keep out external threats. While firewalls still have a role to play, we can no longer rely on reinforcing the perimeter alone. How we consume applications and data as end users has changed, and that changes the way we need to approach security. We must adapt to ensure access is granted only to trusted users and devices, and that these policies are tailored to the risk profiles of the data and workloads being accessed. There are ways to do this that still let teams get on with their jobs: rather than resorting to a 'block all' approach, security professionals can securely manage who connects to individual applications in a smarter, automated fashion.
2. Why is it becoming more complex to ensure only trusted users access the appropriate corporate applications?
Fundamentally, it's a numbers game: the challenge comes from the fact that there are more users connecting from more remote locations and more devices than ever before. For security professionals, this can be a serious headache; they must ensure that what looks like a legitimate user isn't, in fact, an impostor who may have stolen a set of credentials. It's also about ensuring that they can trust the devices that are connecting to the network. Are these secure, patched, healthy and running up-to-date software? Are they confident that no devices connecting to corporate applications could jeopardise the security of the entire network? Even a trusted user may be using a mobile device or computer with out-of-date software, leaving them susceptible to vulnerabilities.
In line with changing working practices, organisations need to approach security from a 'zero trust' perspective – one in which access to applications is granted based on the trustworthiness of the user and the device, as pioneered in Google's BeyondCorp framework. In this architecture, trust is not assigned based on the origin of the request (such as coming from within the corporate network), but rather on the proper set of credentials and rights combined with an appropriate security posture on the device.
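The access decision described above can be sketched as a simple policy function. This is a minimal illustration only – the attribute names and the particular posture checks are hypothetical assumptions for the sketch, not Duo's or BeyondCorp's actual policy model:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    # Hypothetical signals; real zero-trust systems evaluate many more.
    credentials_valid: bool       # user presented valid credentials (e.g. password + MFA)
    authorised_for_app: bool      # user's rights cover the requested application
    device_patched: bool          # device runs up-to-date, patched software
    device_posture_ok: bool       # device meets the required security posture
    from_corporate_network: bool  # origin of the request

def allow_access(req: AccessRequest) -> bool:
    """Zero trust: decide on credentials, rights and device posture.
    The origin of the request is deliberately never consulted."""
    return (req.credentials_valid
            and req.authorised_for_app
            and req.device_patched
            and req.device_posture_ok)
```

Note that `from_corporate_network` never appears in the decision: a request from inside the office on an unpatched laptop is denied, while a fully patched, authorised user working remotely is allowed.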
3. How is the way that we authenticate users changing to keep pace with changing working practices?
Traditionally, security has worked in absolutes, with decisions made on a rules-based approach: block this person, or allow this device. As working practices shift, this static, rules-based approach to authentication is no longer feasible, and organisations can't simply work in absolutes of allowing access or blocking the user entirely.
The next shift we're now seeing is the trend towards a more granular, risk-based approach to authentication, which takes into consideration the circumstances of the request and the attributes of the user – such as what system they're accessing and what they're doing – before enabling access. For example, if a user requests access to an application containing sensitive data from a location or device that they don't normally log in from, we may require that access be granted only from a corporate, managed device. The level of access and authentication is dynamic, adapting to the context of each request.
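A context-aware policy of this kind can be roughly sketched as a risk score that selects the authentication requirement. The factors, weights and outcomes below are illustrative assumptions for the sketch, not any vendor's actual scoring model:

```python
def required_assurance(app_sensitivity: str,
                       known_location: bool,
                       managed_device: bool) -> str:
    """Pick an authentication requirement from the context of the
    request rather than a static allow/block rule.
    Factors and thresholds are illustrative only."""
    # Start from the sensitivity of the data being accessed.
    risk = {"low": 0, "medium": 1, "high": 2}[app_sensitivity]
    # An unfamiliar location or an unmanaged device raises the risk.
    if not known_location:
        risk += 1
    if not managed_device:
        risk += 1
    if risk <= 1:
        return "password"             # routine request
    if risk == 2:
        return "password+mfa"         # step-up authentication
    return "managed-device-only"      # sensitive data in an unusual context
```

In this sketch, a routine request to a low-sensitivity application passes with a password alone, while a sensitive application accessed from an unfamiliar location triggers the managed-device requirement described above – the same user can face different requirements from one request to the next.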
4. What are the benefits of a more risk-based approach to authentication?
There are some clear benefits to this more contextual approach to authentication. First, as it takes into account different risk thresholds, organisations can bolster their security and mitigate risk for specific situations, applications and data. This helps to reduce the risks of data breaches and unauthorised access to corporate applications.
There are also productivity benefits, as it ensures that legitimate users can access what they need quickly and without being blocked unnecessarily. From the user's perspective, it provides a simple and convenient experience based on the most appropriate level of security for each request. Good security should be transparent to end users.
5. When it comes to adaptive authentication, how is it possible to achieve the right balance between control and automation?
For security to scale in the future, it's important to achieve a balance between control and automation. Artificial intelligence and machine learning will have an increasingly important role to play; however, over-reliance on automation can be risky, as it means that security professionals could potentially have a diminished understanding of their overall security strategy.
From an end user perspective, as with many aspects of security, it comes down to making it practical and user-friendly. Organisations need to be able to adapt to different risk profiles in real time, whilst empowering employees to use what they want and get on with their jobs. In the end, security and productivity don't have to be mutually exclusive; we won't have to trade one for the other.
In this way, they can adapt to the changes that the perimeter-less era has ushered in – without exposing their organisation to a security breach.
Ruoting Sun, Head of Technology Partnerships, Duo Security
Image Credit: Rawpixel.com / Shutterstock