How much is too much?: The ethics of AI and data in the workplace

It seems as though we’re finally starting to understand just how much influence Artificial Intelligence (AI) can have on our everyday lives. As a concept, AI has technically existed since the 1950s, but it’s only within the last few years that we’ve seen the technology begin to make its lasting mark - from robust and intuitive enterprise technologies right down to the smart devices now powering our homes.

Many of us might not be aware, but whether we’re using voice assistants such as Alexa, shopping online or scrolling through Facebook, there are various elements of data-heavy AI and Machine Learning (ML) at play.

The fact that AI technology has so seamlessly integrated itself into everyday life is certainly remarkable, but this progress also brings an ethical conundrum. In the age of GDPR, we must accept cookies in order to consent to the collection of our personal data outside of work. Yet we are now starting to see businesses implement AI and machine learning tools in the work environment, of which many employees may well be unaware.

Technology to relieve the mundane

The idea of a ‘smart office’ appeals to many. Today, we’re bound by many mundane tasks that could soon be taken over by technology - and more specifically, by AI and ML. From assigning work and conducting performance reviews, to cascading information and enforcing the resulting directives, machines won’t replace workers in this instance. Instead, the technology will make the more menial day-to-day tasks far more efficient, or take them away completely. There is huge potential for AI to lift workload from office workers, benefiting employee wellbeing and work/life balance in the process and allowing staff to stay focused on the more enjoyable aspects of their jobs. However, as with any emerging technology, the unknown arrives with it.

Machines feed off information, and with more data come more accurate responses. We’re in a time of increased focus on corporate social responsibility (CSR), where employee wellbeing and a more balanced approach to working are beginning to take priority for many businesses. But the question of just how much data needs to be collected from workers to achieve this could certainly raise eyebrows about the ethics of AI development.

Improving employee wellbeing with data

For example, some organisations are now asking employees to wear smart devices that track biorhythmic data, which can be used to predict stress patterns in staff. This is all well and good at first glance; technology such as this is designed to manage employee workloads and ensure that their work/life balance remains healthy. Are stress levels rising in one member of staff? Inform the manager, decrease the workload, and support that employee through whatever is troubling them. On the surface, it appears to be a positive welfare measure.

But how much is too much? Are employees aware that their heart rates are being tracked? Have they consented to this data being taken? Or, in their eyes, have they just been handed a nifty little smart watch to wear at work, allowing them to track their footsteps over the course of the day?

At this stage, if staff have consented to giving this information, and they’re aware that their stress levels are being tracked, perhaps all is fine. If it is benefiting their wellbeing then they’re happy, the company is happy, and it ticks a box in the mission to improve CSR. But what if it goes deeper than that?

How far is too far?

Assume that this company is an organisation of 500 employees, and every single one of them has been given this smart device (and consented to it). From this position, the technology can begin to foresee sick days, employee disgruntlement, or even vulnerability. 

After a year of collecting this data, the company - thanks to ML - might be able to discover when people are planning on leaving their jobs. After X amount of time at Y levels of stress, there may be a pattern of employees booking so-called doctor or dentist appointments - and then, two weeks later, handing in their notice.

Walking the line

ML is designed to find exactly these types of patterns. This example is simplified - in theory, the technology can dig much deeper - but the principle remains. With this information readily available, managers could begin predicting when people are looking to leave their jobs and, in the worst case, block certain requests for time off. Conversely, they might start interviewing for replacements, but even that raises an inherent ethical question.

This doesn’t have to be limited to employees leaving the business, either. The tool could track how often employees chat with each other, how long they take for lunch and how much time they spend in the toilet. It could even go as far as predicting pay rise or promotion requests, or when employees want to book holidays.

What is much darker is that companies might actually act on this information. They might separate colleagues who distract each other (without those colleagues ever knowing why), take steps to dissuade various requests, or persuade employees to change their hours to improve their productivity based purely on their bio data. The consideration for businesses looking to utilise this type of information is that it is hard to imagine anyone consenting to this sort of data usage, even if they did have clear visibility over where and how their data would be used by their employer.

There is also a propensity for businesses to see AI not just as a way to automate low-value tasks, but as a way to remove the human element from certain functions completely. As business and technology leaders, we have a responsibility to consider the ethical impact of AI on our workforce: using AI to add value and efficiency to people's jobs and, where job displacement is inevitable, providing re-training opportunities to those affected. We may also need to keep “humans in the loop” - that is, involve humans in monitoring the decisions made by AI - to give confidence that there is still an element of human control.

We are still in the infancy of AI in the workplace, and it is difficult to predict how such technologies may be applied. Ideally, much as with GDPR and online activity, regulatory bodies would be introduced to create ethical guidance and to ensure that companies take their responsibilities to their employees and consumers seriously when introducing AI into their business. Such guidance may be necessary if we are to avoid an Orwellian working environment.

Tim Purcell, R&D Director, Datel