Ways products are designed for human behaviour

Designing products for the end user is vital to any business model. In software, this tends to manifest as GUI design and functionality, but so much software is focused on social interaction that the mentality of the users sometimes needs to be accounted for in the software design. Twitter, Facebook and Reddit are just a few of the social platforms where an engaging user experience is vital to the company's business model, but an engaging user experience isn't necessarily a positive one. Few companies have monopolies strong enough to retain customers while consistently disappointing them, yet negative experiences can be engaging so long as the company itself isn't blamed. Negative memories last longer than positive ones, which means association with a negative experience can make a product stick with a user. Allowing users to behave poorly is one way companies get users to invest emotion in their platform, albeit one that can backfire.

Reading an infuriating comment on a social outlet like Reddit or Facebook can make a user close the browser entirely, or it can draw them in emotionally because they feel compelled to interact, particularly when anonymity is involved. Reddit offers the chance to down-vote a bad comment, allowing users to feel as though they are creating a better community. On both platforms, bad comments can be responded to. When a user cannot act on a negative comment, the user feels frustrated, and that frustration is more likely to reflect on the platform being used.

Reddit in particular benefits from this by hosting diverse communities with controversial stances. Even in cases where the behaviour of users is illegal or blatantly immoral, Reddit hesitates to intervene because participation is opt-in. Reddit had several communities dedicated to hosting pictures from Apple's iCloud breach, and it took longer than necessary to shut those communities down: Reddit had to weigh the benefit of being a neutral social platform against the detriment of being perceived as condoning that behaviour. Other message boards have seen increased traffic by allowing any kind of behaviour from their users. Even with its standards, Reddit hosted a combined user effort to find the perpetrators of the Boston bombing, which resulted in innocent people being misidentified as suspects and having their private data posted publicly.

Twitter, on the other hand, has been focusing on eliminating bullying and trolling, creating ways to report offenders. Both Twitter and Reddit have hosted conversations about "GamerGate", which veiled itself as a campaign against corruption in gaming journalism but involved harassment and death threats against a game developer, some delivered on Twitter. It's notable that not a single claim about corruption has panned out. There isn't much for Twitter to do beyond cooperating with authorities and creating ways to report such behaviour. Some companies, however, are investing in ways to actively discourage it.

Online games are an interesting data source for social interaction, because in-game communities often reflect society in miniature. Some researchers have even suggested using data from Blizzard's World of Warcraft to study the psychology behind the spread of disease. Many multiplayer online games require players to cooperate and compete with each other in a friendly context. Games involve an investment of time, effort and ego, and in a highly competitive atmosphere emotions can run high, leading players to troll, berate and belittle other players. In some contexts this may make players work harder, but in others it can lead to customer loss.

The developer of League of Legends, Riot Games, is focusing on exactly that aspect. It created a department of social behaviour focused on minimising the impact of 'toxic players'. Its report system allows players to report others for vulgarity and poor sportsmanship; offenders can be warned or banned. This is the punishment side of the system.

A reported game is filed into a case with up to five reported games involving that player. Other players then log onto the website, review the anonymised game summaries and chat logs, and decide whether the reported player was at fault. Sitting in on cases rewarded players with points to be spent in-game; this is the reform aspect of the tribunal. Its cleverest aspect is that reading and reflecting on how others receive bad behaviour prompts reviewers to consider how the same behaviour might reflect on them. Players have posted statistical analyses of publicly available tribunal data to see what gets punished, so it's clear that users participate actively. At the moment, the tribunal system is undergoing upgrades.
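The case flow described above can be sketched in a few lines. This is a minimal illustration, not Riot's actual implementation: the case size, the majority-vote verdict and all names here are assumptions for the sake of the example.

```python
from dataclasses import dataclass, field

CASE_SIZE = 5  # assumed: reported games bundled into one case, per the article


@dataclass
class Case:
    player: str
    reports: list = field(default_factory=list)  # anonymised game summaries
    votes: list = field(default_factory=list)    # True = punish, False = pardon

    def is_ready(self) -> bool:
        """A case goes up for review once enough reports accumulate."""
        return len(self.reports) >= CASE_SIZE

    def verdict(self) -> str:
        """Simple majority vote of the reviewing players (assumed rule)."""
        punish = sum(self.votes)
        return "punish" if punish / len(self.votes) > 0.5 else "pardon"


class Tribunal:
    def __init__(self):
        self.cases = {}
        self.reviewer_points = {}  # reviewers earn in-game points for participating

    def report(self, player: str, game_summary: str) -> bool:
        """File a reported game against a player; returns True once the case is full."""
        case = self.cases.setdefault(player, Case(player))
        case.reports.append(game_summary)
        return case.is_ready()

    def review(self, reviewer: str, player: str, punish: bool) -> None:
        """Record one reviewer's vote and credit them for sitting in on the case."""
        self.cases[player].votes.append(punish)
        self.reviewer_points[reviewer] = self.reviewer_points.get(reviewer, 0) + 1
```

The interesting design choice is that the punishment and reform mechanisms share one loop: the same review step that produces a verdict also pays, and quietly educates, the reviewer.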

League of Legends also has a complementary reward system, where players can acknowledge others for teamwork or other skills. Earning enough acknowledgements grants a ribbon which other players can see. The core of Riot Games' system is a simple carrot-and-stick model.
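The carrot side is even simpler to sketch than the stick. The threshold and category names below are illustrative assumptions, not Riot's actual values:

```python
from collections import Counter

RIBBON_THRESHOLD = 10  # assumed: acknowledgements needed before a ribbon shows


class HonorBoard:
    """Tracks peer acknowledgements and the visible ribbons they earn."""

    def __init__(self):
        self.acks = Counter()  # keyed by (player, category)

    def acknowledge(self, player: str, category: str = "teamwork") -> None:
        self.acks[(player, category)] += 1

    def has_ribbon(self, player: str, category: str = "teamwork") -> bool:
        return self.acks[(player, category)] >= RIBBON_THRESHOLD
```

Because the ribbon is visible to others, the reward is social rather than material, which is what distinguishes it from the tribunal's point payouts.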

By contrast, EVE Online is a massively multiplayer online game centred on an in-game economy; where Riot Games hires social scientists, EVE's developer CCP hires economists. Play time can become an in-game asset to be used in trades: many players acquire play time through trade, while others take it by in-game force. Players in EVE group together in corporations, which control territories and products and go to war over them. This can lead to people joining corporations just to acquire sensitive information or access to materials, stealing what can be equated to real money in the form of game time. CCP allows this and encourages players to resolve issues in-game; most bannable offences involve breaking the game system or working outside it (e.g. selling game items on a website or revealing player information). Long-term player retention is startlingly high: people get very invested in the conflict.
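The "play time as a tradable asset" idea can be reduced to a toy exchange. Everything here is a simplification under stated assumptions: the wallet fields, the currency name and the atomic-trade rule are illustrative, not CCP's actual mechanics.

```python
class Wallet:
    """A player's holdings: in-game currency and tradable game-time days (assumed names)."""

    def __init__(self, currency: int = 0, game_time_days: int = 0):
        self.currency = currency
        self.game_time_days = game_time_days


def trade_game_time(seller: Wallet, buyer: Wallet, days: int, price_per_day: int) -> None:
    """Exchange game-time days for in-game currency; all-or-nothing."""
    cost = days * price_per_day
    if seller.game_time_days < days or buyer.currency < cost:
        raise ValueError("insufficient game time or funds")
    seller.game_time_days -= days
    seller.currency += cost
    buyer.currency -= cost
    buyer.game_time_days += days
```

Once subscription time is a liquid in-game commodity like this, stealing it through infiltration really does equate to taking something with real-money value, which is exactly the stake the article describes.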

When social aspects are involved in a product, customer retention requires more than customer service and a good product: it requires design for human behaviour. Antagonistic behaviour is expected in some situations, like games, and users feel invested when some effort is involved, but anti-social behaviour like catfishing, trolling and harassment makes for an overall negative experience, particularly if the platform doesn't give users the ability to respond to or stop the behaviour. Whether it's reporting the user, arguing with them or simply down-voting them, a response can provide catharsis. Negative experiences with no possible response reflect poorly not just on the individual responsible, but on the platform.

It's impossible to regulate the behaviour of everyone online, but most platforms incentivise certain behaviours and punish others. Teaching users how to use a platform correctly, or at least creating ways for users to police the service themselves, could help curb communities created to harass or marginalise certain groups. Anticipating the anti-social behaviour of users is becoming a greater priority as bullying, fraud and abuse become more prevalent and more damaging.