Let’s not sacrifice privacy by “democratising” AI

Bots and the AI revolution are here - they help people to appeal parking tickets, choose films, book flights and hotels, speak to banks and even submit asylum applications. 

The 133 million monthly active users who have asked over 12 billion questions of Microsoft’s digital assistant Cortana are proof that investing in voice recognition, machine learning, pattern matching and the other building blocks of AI is becoming big business. 

The brightest technical minds, industry attention and investor dollars have moved from building apps to building bots, or integrations, on top of existing communication platforms, and tools to build these bots and services. This will have a huge impact on data privacy and security. 

A question of privacy 

The creators behind the integrations and personalised services delivered via chat or voice assistants want to learn about you, your preferences, tastes and habits, and to use that knowledge to personalise the service and anticipate your needs. 

In March this year a bug was reported in Google Allo that leaked people’s last search results to others in shared chats, potentially revealing sensitive information. It is clear that data protection and security need to be a major focus, especially as these tools transition into business use. A banking bot, for example, should never have the ability to share data on the last customer it was talking to with another user. In addition to bots, integrations, which allow external applications to interact with messaging systems, are quickly becoming intrinsic to the evolution of businesses. 

Many bots and integrations are deployed on platforms run by companies whose business model is to collect your personal data and use it for adverts, or to sell it to advertisers directly, which is a privacy issue in itself. 

Take, for example, Bank of America, which now has a bot on Facebook to help people manage their personal finances. The knock-on effect of this is that the social media platform now has information about customers’ finances, and is happy to analyse it and sell it to advertisers. 

The psychology of advertising is complex and well-informed – consider adverts targeted to you at the right point in your pay cycle, or when your credit balance is high, and think about how much more likely they are to succeed. Or vice versa: payday loan ads when your bank account is empty can have a very damaging effect on people’s long-term financial outlook. 

The challenge for business is even more complex. For example, a pizza delivery company that uses a bot to serve its customers could be helping Facebook to identify pizza lovers. This in turn allows competing services to buy advertising from Facebook to target and promote their offers to the same people, jeopardising the business which initially used the bot. 

E2EE for bot integrations 

There are many types of bot interactions that people don’t want to share with companies, and they shouldn’t have to. Consumers and businesses will need to draw a line and decide if they are happy with the communication platform having data on all of their interactions. If not, then they need to send a signal and move to a platform that can offer better privacy. 

Luckily, the first end-to-end encrypted communication companies have started offering more privacy-focused service options. 

End-to-end encryption means that only the sender and the receiver can read the messages; the platform provider cannot access this content. This opens new opportunities for service providers to offer a variety of services and integrations that deal with sensitive personal data. The same Bank of America conversation described earlier could happen, but without the negative advertising effects. 
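To make the principle concrete, here is a toy sketch in Python of why the relaying platform cannot read end-to-end encrypted content. It uses a one-time pad (a shared random key as long as the message) purely for illustration; real E2EE messengers use far more sophisticated protocols, such as the Signal protocol, and the names below are illustrative only.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a random key of at least the same length."""
    assert len(key) >= len(plaintext)
    return bytes(k ^ p for k, p in zip(key, plaintext))

# XOR is its own inverse, so decryption is the same operation.
decrypt = encrypt

message = b"balance: 1,204.50"

# The key exists only at the two endpoints; the platform never sees it.
shared_key = secrets.token_bytes(len(message))

# The ciphertext is all the platform ever relays or stores.
ciphertext = encrypt(shared_key, message)

# Without the key, the ciphertext is uniformly random noise to the platform;
# only the endpoints can recover the original message.
assert decrypt(shared_key, ciphertext) == message
```

The point of the sketch is structural: the service in the middle handles only `ciphertext`, so even a platform that logs every message has nothing meaningful to analyse or sell.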

The same is true for the ever growing number of integrations that are available to businesses through various messaging services. Many businesses are using APIs built into messaging programmes to enable new capabilities and easy interactions with existing tools. 

However, the weakness of this from a business perspective is that most, if not all, of the popular messaging platforms would then have awareness of, and potentially access to, what was previously sensitive business information. For example, if a journalist sends a recorded interview to a speech recognition service to get it automatically transcribed and returned, the information will likely also become available to the communication platform they are using. 

The advantages and limitations of securing bots 

Utilising secure bots does have its limitations, depending on where you stand on the privacy discussion. If encryption is used, then each service or bot is limited to user information already known by the service, and what that user chooses to share. Aggregating data across various different service providers is impractical, as it would require sharing encryption access with each provider, defeating the point of E2EE. 

Companies must take steps to secure the bot experience for consumers before the ecosystem becomes widespread, rather than trying to do so after it is popular and building “data gathering” clauses into obtuse terms of service. Being prepared also lends itself to becoming compliant with increasingly stringent regulations around data protection that could lead to heavy fines if not addressed adequately. 

With the European Union’s General Data Protection Regulation steadily creeping upon us, the obligation upon companies to comply is stronger than ever. Failure to do so could lead to staggering monetary penalties of up to €20 million, or 4 per cent of global annual turnover, whichever is higher, but the consequences of losing customer trust as a result of cyberattacks exploiting data will linger long after. The National Cyber Security Alliance even found that as many as 60 per cent of hacked SMBs go out of business within six months. Evidently, leaving any aspect of your business unsecured is simply not worth the risk. Businesses need to know that any integrations or bots that they utilise are not creating a security weakness, or they will not be able to use them. 

Users and businesses alike need to be reassured. They need to know that their communication with bots and use of integrations is not another tool for targeting them with advertising, but is there to provide them with a service; otherwise people will not be comfortable letting these technologies into their lives. 

What does the future hold for third-party services on messaging platforms? 

If 2017 is the year that machine-learning and AI-driven services start to become a mainstream part of our lives, then it is also time for a frank discussion about how much we want messaging companies or voice assistant platforms to know about our communications. 

What does it mean for our security and privacy if private communications with doctors, lawyers, banks and business partners move from email to chat apps? Are we, as individuals, employees and business owners, really willing to sacrifice another level of our privacy just to get a quicker service? 

We as an industry need to address these questions as the bot ecosystem matures and becomes more widespread. 

People concerned about their privacy need to consider the alternatives. They need to choose platforms whose business model doesn’t undermine the privacy and security of the services provided, including bot integrations.   

Businesses need to consider the fact that bots and integrations can be both productive and safe when secured with E2EE, which will surely provide peace of mind to company and customer alike. With the constant rise of threats to business security, and the gains in efficiency brought by bots and integrations, it is vital that companies not only begin to embrace these new technologies but also have a plan to secure them from the beginning. 

Alan Duric, CEO and Co-Founder of Wire 

Image Credit: Montri Nipitvittaya / Shutterstock