
Robo-helpers will soon help themselves to data

(Image credit: Alex Knight / Unsplash)

Over the coming years, organisations will experience growing disruption as threats from the digital world have an impact on the physical. Invasive technologies will be adopted across both industrial and consumer markets, creating an increasingly turbulent and unpredictable security environment. A flexible approach to security and resilience will be crucial as this hybrid threat environment emerges.

While Rosie the Robot Maid from The Jetsons may have seemed to be the perfect helper, by 2022, the Information Security Forum (ISF) anticipates that a range of robotic devices, developed to perform a growing number of both mundane and complex human tasks, will be deployed in organisations and homes around the world. Friendly-faced, innocently branded, and loaded with a selection of cameras and sensors, these constantly connected devices will roam freely. Poorly secured robo-helpers will be weaponised by attackers to commit acts of corporate espionage and steal intellectual property. Attackers will also exploit robo-helpers to target the most vulnerable members of society, such as the elderly or sick at home, in care homes or in hospitals, resulting in reputational damage for both manufacturers and corporate users.

Organisations will be caught unawares as compromised robo-helpers such as autonomous vacuum cleaners, remote telepresence devices and miniature delivery vehicles roam unattended and unmonitored. The potential for these invasive machines to steal intellectual property and corporate secrets through a range of onboard cameras and sensors will become a significant concern. Organisations developing and using care-bots, a type of robo-helper designed for healthcare, will face significant financial and reputational damage when compromised devices cause vulnerable individuals emotional, physical, psychological or financial harm.

This proliferation of robo-helpers into homes, offices, factories and hospitals will provide attackers with a range of opportunities to make financial gains and cause operational damage. Nation states and competitors will target robo-helpers that have access to sensitive areas in order to steal critical information. Organised criminal groups and hackers will also use manipulative techniques to frighten and coerce individuals into sending money or giving up sensitive information.

Imagine this scenario: the building maintenance division of a large pharmaceutical organisation decides to replace its staff at the research and development (R&D) site with a range of outsourced, automated robots. These robo-helpers carry out building maintenance and sanitation operations in place of their human counterparts. Each unit is fitted with cameras and sensors and requires network connectivity in order to operate. Shortly after their deployment, details of an early phase experimental drug trial are leaked to the media.

Are you sure that your robo-helpers are secure?

What is the justification for this threat?

The extent to which robo-helpers are adopted and used, especially in homes and office spaces, currently differs significantly depending on geography and culture. Japan, China and South Korea, amongst other Asian nations, are typically more accepting of robots, whereas Western nations are currently less so. Robo-helpers are seen in a particularly positive light in Japan, with The International Federation of Robotics citing the cultural influence of Shinto – the Japanese religion in which both people and objects are believed to possess a spirit – as a key enabler of the country's high rate of robotics adoption. China, the US and Japan are currently the biggest exporters of robots in the world, with overall growth expected to increase worldwide.

There is a growing acceptance of robots in the home and workplace, which may indicate that organisations are ready to accelerate the rate of robo-helper adoption. In offices and homes, a growing number of semi-autonomous robo-helpers are due to hit global consumer markets as early as 2020, all built with a range of networked cameras and sensors. As with poorly secured IoT devices that are constantly connected to an organisation's network, a vulnerability in a robo-helper will further broaden the attack surface, presenting yet another access point for attackers to exploit.

Robots have been used in manufacturing for decades, but as adoption spreads these robo-helpers will perform a greater range of tasks, giving them access to a wealth of sensitive data and locations. In the education sector, robots will soon be used in schools, with developers in Silicon Valley creating robo-helpers for teachers that can scan students' facial expressions and provide one-to-one support in subjects such as languages and mathematics. In healthcare there have also been breakthroughs: in November 2019 the world's first brain aneurysm surgery using a robo-helper was completed, demonstrating that robot-assisted procedures can enhance flexibility, control and precision.

As these robots gain greater autonomy and perform a greater number of surgeries over time, the need to secure them will become ever more urgent. In logistics, delivery-bots have seen significant investment and improvement, now using onboard cameras and sensors to navigate difficult terrain and unfamiliar environments.

Robo-helpers will make their way into the lives of more vulnerable individuals in care homes, schools and community centres, and people will increasingly feel comfortable sharing sensitive information about their lives with them. Attackers will realise this, manipulating these less tech-savvy members of society into transferring funds or giving up sensitive information. Organisations developing these products, or using them in their business, will face serious reputational damage, as well as legal and financial repercussions, when their customers become victims.

With the proliferation of robo-helpers across a growing number of countries, industries and homes, the opportunities for attackers to compromise the individuals and organisations that use them will grow at an alarming rate.

How should your organisation prepare?

Organisations using robo-helpers in their business, or providing them to others, should ensure that devices are properly protected against attacks and cannot be used to compromise the privacy and rights of customers.

In the short term, organisations should restrict robo-helper access to sensitive locations. We recommend that they segregate robo-helpers onto their own network segment, monitor traffic between robo-helpers and the corporate network, and ensure that robo-helpers using cameras and sensors comply with data protection regulations. Finally, when devices reach end of life, dispose of them securely.
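The segregation and monitoring advice above can be sketched in code. The following is a minimal, hypothetical illustration (not an ISF recommendation): it assumes robo-helpers have been placed on their own subnet and flags any observed flow from that subnet to a destination outside an approved allowlist. All addresses, names and the flow format are invented for the example.

```python
# Hypothetical sketch: flag traffic from a segregated robo-helper subnet
# that reaches destinations outside an approved allowlist.
import ipaddress

# Assumed example addressing: robo-helpers live on their own subnet and
# may only talk to the local controller and the vendor update service.
ROBOT_SUBNET = ipaddress.ip_network("10.20.0.0/24")
ALLOWED_DESTINATIONS = {
    ipaddress.ip_network("10.20.0.1/32"),    # local controller (example)
    ipaddress.ip_network("203.0.113.0/28"),  # vendor update service (example)
}

def flag_suspect_flows(flows):
    """Return (src, dst) flows whose source is a robo-helper but whose
    destination is not on the allowlist - candidates for investigation."""
    suspect = []
    for src, dst in flows:
        src_ip = ipaddress.ip_address(src)
        dst_ip = ipaddress.ip_address(dst)
        if src_ip in ROBOT_SUBNET and not any(
            dst_ip in net for net in ALLOWED_DESTINATIONS
        ):
            suspect.append((src, dst))
    return suspect
```

In practice the flow records would come from a firewall or NetFlow export at the segment boundary; a robo-helper suddenly contacting, say, a file server holding R&D data would stand out immediately.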

In the long term, organisations should gain assurance over the robo-helpers used in the business and limit their capabilities to ensure that ethical norms are not breached. They should also monitor specific robo-helpers for signs of fraudulent or dangerous activity, and provide training and awareness around appropriate use and behaviours.
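One simple way to monitor robo-helpers for dangerous activity is to audit their movement logs against a per-device policy. The sketch below is a hypothetical illustration, with invented device names, zones and operating hours: it flags any event where a robo-helper is active outside its approved hours, enters a zone it has no business in, or is not registered at all.

```python
# Hypothetical sketch: audit robo-helper activity events against a
# per-device policy of allowed operating hours and permitted zones.
from datetime import time

# Assumed example policy (all names and hours are invented).
POLICY = {
    "vacuum-01": {
        "hours": (time(19, 0), time(23, 0)),   # cleans after office hours
        "zones": {"lobby", "corridor"},
    },
}

def audit_events(events):
    """Return policy-violating events. Each event is a tuple of
    (device_id, time_of_day, zone)."""
    violations = []
    for device, ts, zone in events:
        policy = POLICY.get(device)
        if policy is None:
            violations.append((device, ts, zone))  # unregistered device
            continue
        start, end = policy["hours"]
        if not (start <= ts <= end) or zone not in policy["zones"]:
            violations.append((device, ts, zone))
    return violations
```

A vacuum unit loitering in a restricted lab at 2 a.m. would be flagged for investigation; real deployments would feed such alerts into existing security monitoring rather than a standalone script.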

Steve Durbin, Managing Director, Information Security Forum

Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include the emerging security threat landscape, cyber security, BYOD, the cloud, and social media across both the corporate and personal environments. Previously, he was senior vice president at Gartner.