
Could your team be suffering from ‘robo anxiety’?


Robots have not unleashed a nuclear holocaust yet, but they have caused other harms that have understandably increased people's anxiety. Bias is one such harm, and it can be detrimental when it goes undetected in the screening of job and mortgage applications. In fact, in the case of the COMPAS sentencing algorithm used in the US legal system, bias led to black people receiving harsher sentences than white people.

The reason? Those robots were designed without ethical principles and frameworks in mind. The emergence of ‘robo anxiety’ is preventing these technologies from being fully adopted and delivering on their potential.

So how did we get to this point? The main issue is a lack of guidance when it comes to the ethical deployment of artificial intelligence technology – and it’s stopping many businesses from making the most of the new technologies available to them. 

The main problem is that the existing discourse on ethics and AI, developed mainly in the ivory towers of government and academia, is long, opaque, and hard to operationalize. Those who deploy AI need ethics in business language – short, simple, and easy to implement in their daily jobs.

A good starting point is Isaac Asimov’s Three Laws of Robotics – a set of rules outlined in the writer’s 1942 short story Runaround. These state that a robot may not injure a human being or, through inaction, allow a human being to come to harm; must obey orders given by a human being unless doing so would conflict with the first law; and must protect its own existence as long as doing so does not conflict with the first two laws.

These laws were written for a world that looked vastly different from the one we live in today, when robots were still the stuff of science fiction. And that’s exactly what Runaround was: fiction.

Software robotics to break out

To address this, and combat the growing sense of robo anxiety that comes from the lack of simple and business-oriented guidance, NICE came up with its own robo-ethical framework for clients, partners, and any other businesses that utilize artificial intelligence and robotics to follow. 

While not yet ubiquitous, robotic process automation – software robotics – is growing fast: Gartner predicts the market will be worth almost $2 billion this year, which is still a fraction of its potential, especially when we consider that the wider AI market is expected to break the $500 billion mark by 2024.

By the end of next year, 90 percent of large companies will have deployed the technology in some form. The reasons for doing so are clear – robotic process automation helps businesses drastically reduce the amount of time and money spent on repetitive and easily automatable tasks, which, in turn, frees employees to spend time on meaningful work rather than tedium.

But where there is automation, there is also twenty-first-century risk. While this five-point framework is inspired by Asimov’s Laws of 1942, it has been written with contemporary technologies and use cases in mind, to give employees more guidance and confidence in how they use AI at work. The framework accompanies every robot NICE delivers to customers, so its principles are easy to put into practice.

The first point is self-explanatory: robots must be designed for positive impact. This means that any project involving robots should have at least one positive rationale clearly defined, whether it’s societal, economic, or environmental. 

Second, robots must be designed to disregard group identities, which means ethnicity, religion, sex, gender, age, and any other personal attributes should not be considered when making decisions. To ensure this is the case, training algorithms must be tested periodically. 
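To make that periodic testing concrete, here is a minimal sketch of one common fairness check – demographic parity, which compares approval rates across groups. The function name, group labels, and threshold below are all hypothetical illustrations, not part of NICE's framework:

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the largest gap in approval rates between any two groups.

    decisions: list of booleans (True = approved)
    groups: list of group labels, aligned with decisions
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for approved, group in zip(decisions, groups):
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = [approvals[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical approval decisions for applicants in two groups.
decisions = [True, True, False, True, False, False, True, False]
groups    = ["A",  "A",  "A",   "A",  "B",   "B",   "B",  "B"]
gap = demographic_parity_gap(decisions, groups)
if gap > 0.1:  # the tolerance is a policy choice, not a universal constant
    print(f"Warning: approval-rate gap of {gap:.0%} exceeds tolerance")
```

Here group A is approved 75 percent of the time and group B only 25 percent, so the check flags a 50-point gap. Demographic parity is only one of several fairness metrics; which one applies is itself an ethical decision.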

Point three says that robots must be designed to minimize the risk of individual harm. Robots make decisions based on how they’re programmed, not based on any actual understanding of whether what they’re doing is right or wrong. Humans should, therefore, choose how to delegate decisions to robots, with all algorithms, processes, and decisions embedded within them open to examination. Humans should also be able to audit everything a robot does.
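As a sketch of what that auditability might look like in practice – the helper and field names below are invented for illustration, not NICE's actual tooling – a robot could record every decision alongside the programmed rule that produced it:

```python
import datetime
import json

def log_decision(robot_id, inputs, decision, rule, audit_log):
    """Append an auditable record of a robot decision."""
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "robot": robot_id,
        "inputs": inputs,       # what the robot saw
        "decision": decision,   # what it decided
        "rule": rule,           # which programmed rule produced the decision
    })

audit_log = []
log_decision("invoice-bot-01", {"amount": 480}, "auto-approve",
             "amounts under 500 are auto-approved", audit_log)
print(json.dumps(audit_log[-1], indent=2))
```

Because the rule is recorded with each decision, a human auditor can later check not just what the robot did, but why – which is exactly the delegation-with-oversight this point calls for.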

No magic formula

Point four says that robots must be trained and function on verified data sources only, and those sources should also be free from bias and tampering. Any data sources used for training algorithms should be maintained with the ability to reference the original source.
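One simple way such provenance could be implemented – purely as an illustrative sketch, with hypothetical data and function names – is to fingerprint an approved dataset and verify that fingerprint before each retraining run:

```python
import hashlib

def fingerprint(records):
    """Compute a stable SHA-256 fingerprint for a training dataset."""
    h = hashlib.sha256()
    for record in sorted(records):  # sort so record ordering doesn't change the hash
        h.update(record.encode("utf-8"))
        h.update(b"\n")  # separator so records can't run together
    return h.hexdigest()

# Record the fingerprint when the dataset is verified and approved...
approved = fingerprint(["alice,1200,approved", "bob,900,declined"])

# ...and check it again before every retraining run.
current = fingerprint(["alice,1200,approved", "bob,900,declined"])
assert current == approved, "training data has changed since approval"
```

Any tampering with the records changes the fingerprint, so unverified or altered data is caught before it ever reaches a training run.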

Finally, point five says that robots must be designed with governance and control in mind. In practice, that means protecting against misuse through proactive monitoring and by authenticating any access. They might be robots, but they still represent your organization, so treat them as you would any other employee. 

This framework is fundamental to design and development at NICE. As such, we have put procedures in place that ensure our automation developers comply with this framework at every step. 

Yet industry-wide change takes time. Anything more formalized involves a complex interplay between multiple stakeholders – governments, those at the forefront of developing the technology, supranational organizations, employees, and citizens. 

Ethical frameworks, like ours, aren’t a magic formula. But using technology that’s been designed with an ethical framework in mind and communicating the same ideals to employees helps to give confidence in developing and using these tools to their full potential, which can have a huge impact on the success of a business. 

In my experience, employees who see how robotic process automation can free them from the manual, boring aspects of their work quickly become converts. Indeed, some surveys show workers actively calling for deeper adoption of artificial intelligence in their everyday professional lives. 

Employees are the ones on the front line when building positive human-robot relationships, so it’s their feelings towards these new technologies that must be considered when deploying them. Making employees feel they can raise concerns about the use of artificial intelligence-powered robotics is the only way to gauge the level of robo anxiety within the organization. But once you know that, you can adopt a governance framework that suits everybody.

Oded Karev, General Manager, NICE Advanced Process Automation

With extensive experience in corporate strategy and operations, Oded leads NICE’s global Advanced Process Automation line of business, covering the full spectrum of robotics solutions. Oded is a respected industry thought leader and keynote speaker in the field of Robotic Process Automation.