What does 2017 have in store for the bot?

There will be obstacles, but once ethical standards are agreed, developers should be free to discover exactly what they can do with the technology.

(Image credit: Shutterstock/Sarah Holmlund)

A lot can happen in 12 months. Last year, bots exploded into the mainstream, and adoption was rapid. More than $1.5 billion was invested in AI startups; Microsoft amassed over 35,000 bot developers on its platform; The Economist even asked whether bots are the new apps.

As with any technology that grows this quickly, it's worth taking a step back. Last year, we saw bots enter the home through the likes of Amazon Echo's Alexa and Google Assistant, and this paved the way for more bots to be introduced into our everyday lives.

So what happens next? What does 2017 have in store for the bot?

DIY coding

As people become more and more familiar with – or even reliant on – AI, this year bots will transform into colleagues in the workplace. Business bots will thrive and start driving automation in the likes of manufacturing, recruiting and freelancing. These bots won't intrude on work; instead they will optimise simple tasks so people can get on with the things they're there to do. Admin work will hopefully soon be a thing of the past.

This extends to the workplace of bot developers, too. In 2017, AI will start writing its own code. Developers currently spend a great deal of time on debugging and improving code quality, yet much of this can be automated as AI self-learns. Open source tools are already starting to emerge, freeing developers to focus on innovation and pioneering work.
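To make the idea concrete, here is a toy sketch of the kind of automated code-quality check described above – a few lines of Python that use the standard library's `ast` module to flag unused imports. (This is purely illustrative; real tools in this space are far more sophisticated.)

```python
import ast

def find_unused_imports(source: str) -> list:
    """Return imported names that are never referenced in the module."""
    tree = ast.parse(source)
    imported, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # 'import os.path' binds the top-level name 'os'
            for alias in node.names:
                imported.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted(imported - used)

sample = "import os\nimport sys\nprint(sys.argv)\n"
print(find_unused_imports(sample))  # flags 'os' as unused
```

Even a check this simple spares a developer a small, repetitive review task – the same principle, scaled up with self-learning systems, is what the emerging tools promise.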

Ethical issues

As bots become more a part of our lives, we must focus on the ethical issues that come with them. The personalisation of bots – that feeling that they really know who you are, understand what you're saying and respond accordingly – throws up many issues that are only just starting to be addressed.

At Sage we've voiced the potential risk of the human trait of sexism being replicated in bots. It is undeniable that AI poses real risks, reflecting larger-scale social issues that already exist in the tech community. Last year, for example, Microsoft's 'Tay' showed us what happens when chatbots learn from racists and trolls.

When I talk to tech companies, they’re open about these challenges, which didn’t seem to be the case even six months ago. It’s a welcome, positive progression that will need to continue as bots become more culturally embedded. AI is accelerating incredibly quickly, which is why we need to get on top of its wider societal effects. Governments are already taking note, with the White House releasing a report on AI’s potential – both positive and negative – in December.

He, she or it?

Those designing the personalities of chatbots face many challenges. There’s a great New Yorker piece that delves into these: “People have always created personalities for objects,” says Lisa Feldman Barrett, a professor of psychology at Northeastern University. “People have always talked to their cars; they’ve talked to their plants; they’ve talked to their blankets; they’ve talked to their stuffed animals. It’s just that now the things talk back.”

But what voice should they talk back in? Is it a male or female voice? Should it have a gender at all? Should AI become the third gender?

At the moment, there's a resistance to treating AI as its own kind of personality – people assign it a male or female gender instead. More often than not, the voice of AI is female, which is problematic in itself when the roles these assistants adopt reinforce gender stereotypes.

User-focused AI roles commonly emulate roles that women have historically held in the workplace, primarily administrative and personal assistant positions. This only reinforces the gender bias.

Just as we need a diverse human workforce, we need AI to complement and reflect diversity. AI learns from the data we feed it. If the data is not an accurate representation of the workforce, then what AI learns will naturally be inaccurate. 

At Sage, our core philosophy when developing Pegg was that AI does not have to pretend to be human; it just needs to add value. It will take some time, but 2017 should be the year this approach is taken more widely.

Changing the world

Another issue where AI is concerned is less an obstacle and more an opportunity. AI can solve real humanitarian problems: it can bring refugees access to emergency services; help people suffering domestic abuse when it's too dangerous to make a phone call; and provide legal advice for people who can't afford an expensive lawyer. Even the NHS is trialling an AI app to replace its 111 helpline.

For me, this is the most exciting side of bots and AI we'll see this year. We will see true change for good, all thanks to code and the ambition of those wanting to make a difference.

2017 is going to be a year of real experimentation for the bot. Last year was all about bot adoption, and now it’s about personalisation. Whether in the workplace or helping make the world a better place, AI has real potential to be truly adaptable to specific users in specific environments.

There will be obstacles, but once ethical standards are agreed, developers should be free to discover exactly what they can do with the technology. And that freedom is exhilarating.

Kriti Sharma, VP of Bots and AI, Sage

ABOUT THE AUTHOR

Kriti Sharma is VP of Bots and AI at Sage. She is also the founder of Messaging Bots London, the largest developer group for Bots and AI in Europe.