“Never before in history have such a small number of designers – a handful of young, mostly male engineers, living in the Bay area of California, working at a handful of tech companies – had such a large influence on two billion people’s thoughts and choices.”
Putting aside the obvious echo of Winston Churchill’s famed World War II speech, this quote from former Google design ethicist Tristan Harris perfectly describes the power that digital designers hold in today’s ever-more digitalised world. Indeed, we have only to look to this year’s Cambridge Analytica scandal to see what happens when the delicate balance between ethics and technology goes wrong.
As digital design increasingly shapes the way we live and work, so too grows the need for designers to consider the consequences of their design decisions. Unfortunately, businesses of all shapes, sizes, and sectors are currently falling short in this regard, and have become increasingly good at doing “the wrong thing in the right way”.
But what does this actually look like?
Let’s take a look at some key examples of where design decisions haven’t been quite as ethical as they should have been.
Push notifications – defined here as how devices, such as smartphones and tablets, proactively bring something to a user’s attention – tend to draw a mixed response. While a timely reminder to take action can often be useful, these notifications can also act as a distraction at best, and a downright intrusion at worst.
For example, a mindfulness app – an increasingly popular outlet for increasing calm and emotional well-being – may send the user periodic notifications to encourage them to meditate. Rather than being a helpful reminder, these can cause feelings of stress, anxiety and shame in the user; the exact opposite of their intended effect.
Being bombarded by notifications can be particularly prevalent on (and indeed from) social media platforms. Social media sites live and die by user engagement, and will use an increasingly frequent stream of notifications to ensure that users continue to browse, tap, and swipe away. In some extreme cases, sites such as Facebook and Instagram have been known to conjure up engagement by sending a notification just to inform a user that they have zero notifications. Not only is this unnecessary, it can also lead to negative feelings such as loneliness or shame.
The way pop-ups and on-site overlays are designed can also be unintentionally unethical.
These tools were originally designed to provide a quick, convenient way for customers to confirm that they’d like to receive email communications from a business, without having to go to unnecessary effort. In a post-GDPR world, however, data has become a form of currency in its own right, and customer opt-in has since taken on a great deal more importance. This, in turn, has led to brands moving beyond simple yes/no opt-ins in favour of a far more direct approach.
This approach often takes the form of “confirm shaming”, wherein a website “shames” the user for opting out of signing up to its service by making the text for “no” passive-aggressive (e.g. “no thanks, I don’t want to treat myself”, or “no, I don’t want to be healthy”).
While businesses may see this as a simple, harmless retargeting technique, it is in fact an abrasive way to coerce users into sharing their personal data for the wrong reasons. Confirm shaming essentially manufactures consent by making it more difficult for users to say no, rather than enabling them to express informed consent – a requirement of GDPR.
Digital products and services will often reflect the biases of those who design them, consciously or otherwise. Racial bias appears to be a particularly common theme, first hitting the headlines in 2015 after Google Photos incorrectly tagged two black people as “gorillas” – a catastrophic error.
Unfortunately, as machines become more advanced, and forms of artificial intelligence – such as machine learning – take on a greater importance, examples of these biases are becoming increasingly commonplace.
But why is this? The demographic makeup of Silicon Valley could be an important contributing factor. At the top 75 companies in Silicon Valley, only 3% of employees are black, and this lack of diversity could in turn lead to a lack of perspective.
A prime example of this lack of perspective is how machine learning algorithms are “trained” to recognise faces. Essentially, the example photos used to give these algorithms an idea of what faces look like tend to come from predominantly white datasets, meaning that facial recognition systems come to treat Caucasian features as what a “normal” face looks like, and misidentify, or even reject, faces that do not fit this specific profile.
Indeed, a recent study from MIT researcher Joy Buolamwini found that while machine learning algorithms can correctly identify a white man’s face in a photo 99% of the time, this figure drops dramatically to 65% when black women are tested instead. This is a worrying statistic, which highlights the effect that unconscious, coded bias can have on the digital products and services we now use every day.
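The mechanism described above can be illustrated with a deliberately simplified toy simulation: when one group dominates the training data, a system tuned to that data recognises the majority group far more reliably than the minority. This is a sketch, not a real face-recognition model – the “faces”, feature space, group labels, and acceptance threshold are all invented for illustration.

```python
import random

random.seed(0)

# Toy model: each "face" is a 2-D feature vector, and the two groups
# cluster in different regions of feature space.
def sample_face(group):
    centre = (0.0, 0.0) if group == "A" else (3.0, 3.0)
    return tuple(c + random.gauss(0, 0.5) for c in centre)

# Training data mirrors the skew described in the article:
# 97 examples from group A, only 3 from group B.
train = [sample_face("A") for _ in range(97)] + [sample_face("B") for _ in range(3)]

# "Recognition" here is crude on purpose: accept a face only if it lies
# close to the mean of the training data.
mean = tuple(sum(xs) / len(train) for xs in zip(*train))

def is_recognised(face, radius=2.0):
    distance = sum((f - m) ** 2 for f, m in zip(face, mean)) ** 0.5
    return distance <= radius

# Evaluate on fresh faces from each group.
test_a = [sample_face("A") for _ in range(200)]
test_b = [sample_face("B") for _ in range(200)]
rate_a = sum(map(is_recognised, test_a)) / 200
rate_b = sum(map(is_recognised, test_b)) / 200
print(f"group A recognised: {rate_a:.0%}, group B recognised: {rate_b:.0%}")
```

Because the training mean sits almost entirely inside group A’s cluster, group A faces are recognised nearly every time while group B faces are routinely rejected – the skew in the data, not any explicit rule, produces the biased outcome.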
It’s time to think longer-term
While many designers and businesses are no doubt well-meaning, there is often a disconnect between good intent and subsequent negative impact. This in turn points to a real need for digital designers to begin thinking about the long-term consequences of their design decisions.
This doesn’t mean settling on the first workable solution and leaving it at that, but rather taking a more collaborative, considered approach. Collaborative design, and the habit of suggesting alternate design paths, can be integral to this, allowing designers to discuss a range of options – and the consequences of each – with a wider, more diverse set of users and stakeholders.
In turn, this may empower designers to suggest alternative, less harmful paths that take into account all of the voices in the room, as well as – most importantly – the voices of those not in the room, who are no less affected by the choices that digital designers make.
Overall, if digital design is to become truly human-centric, it must move away from the traditional Silicon Valley ideas of “progress for progress’s sake”, or “move fast and break things”, and instead give greater consideration to the impact that digital design can have.
Such consideration will be crucial in ensuring that nobody is hurt – unintentionally or otherwise – by the technology that directly influences the lives of more people every day.
Hilary Stephenson, Founder and MD of Sigma