
Trust issues: Just how safe is it to confide in our digital assistants?

(Image credit: Flickr / Masaki Tokutomi)

Have you ever felt like you were being followed? That somebody has been listening in on your private conversations? We live in a world filled with coincidences, just like when you open up Instagram and see an advert for a trampoline park after telling a friend you were thinking about taking the family to one at the weekend. But what if the adverts you see online aren’t actually coincidental, and you really do have a stalker obsessed with showing you adverts you might be interested in?

Well, there’s mounting evidence suggesting that you, in fact, do have a stalker. And its name is Alexa. Or Cortana. Or Siri. Or Google Home. Or all four at the same time.

It’s projected that around 1.8 billion people will be using digital assistants by 2021.

In this day and age, we’re very fortunate to be blessed with mind-boggling technology designed to bring unprecedented levels of convenience to our lives. We can ask our smart speakers for the latest weather forecast without even needing to leave our beds, have a detailed discussion with a chatbot capable of conveying near-human levels of comprehension and emotion, or type the most oblique question into our laptop’s search bar and be returned thousands of intuitive results.

But just how much information are our new robotic companions remembering about us? And how safe should we feel confiding in them? Let’s take a deeper look into the trustworthiness of our digital friends:

Is Alexa eavesdropping on you?

TechCrunch reports that over a quarter of adults in the US now use a smart speaker - with the Amazon Echo proving the most popular.

The meteoric rise of the smart speaker is down to the unparalleled convenience it brings to the everyday lives of users. We should’ve known that this was an inevitability: just as the PC eventually gave way to the portable, compact smartphone, now we’re presented with a piece of technology that’s entirely handsfree and smart enough to turn our words into actions.

Now we can play our favourite songs, buy products online and access the latest news without so much as looking at a screen. From Amazon’s Echo, to Google’s Home, to Sonos’ One, there are plenty of big names to choose from among the companies supplying smart speakers. Owners are encouraged to converse with their new robotic assistant by saying a prompt, like ‘Alexa…’ or ‘OK Google…’, whereupon the machine jumps into life and listens out for their commands.

But what if our smart speakers are always listening? Ready to learn more about you, your interests and buying habits?

Eric Johnson, CIO at Talend, explains that smart speakers do have a capacity for listening in even when we’re not engaging with them: “by design, the voice assistant will always be monitoring for its call to action and will only start recording once you issue a command. Those recordings are stored in the device’s app along with other information from your Google or Amazon accounts.”
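Johnson’s description can be illustrated with a simplified model (hypothetical code, not any vendor’s actual implementation): the device keeps only a small, transient buffer until it hears the wake word, and only the command that follows is recorded and stored.

```python
# Hypothetical sketch of a wake-word loop: "always monitoring" without
# "always recording". Overheard speech passes through a tiny rolling
# buffer that is never persisted; recording begins only after the wake
# word and stops at a naive end-of-command marker.
from collections import deque

WAKE_WORD = "alexa"

def process_stream(words, buffer_size=3):
    """Consume a stream of words; return only the commands captured
    after each wake word, discarding everything else."""
    rolling = deque(maxlen=buffer_size)  # transient buffer, never stored
    recordings = []                      # what the device actually keeps
    recording = None
    for word in words:
        if recording is not None:
            recording.append(word)
            if word.endswith("."):       # naive end-of-command marker
                recordings.append(" ".join(recording))
                recording = None
        elif word.lower() == WAKE_WORD:
            recording = []               # wake word heard: start recording
        else:
            rolling.append(word)         # overheard speech is dropped
    return recordings

print(process_stream(
    "we should buy a trampoline alexa play some music. nice".split()
))  # → ['play some music.']
```

Everything said before the wake word ("we should buy a trampoline") never leaves the rolling buffer - which is exactly why the real-world incidents described below, where devices recorded without the wake word, caused such alarm.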

Android Police reported in August 2017 that a batch of Google Home Minis gifted to journalists had been constantly recording conversations around them, while last year an Amazon Alexa recorded a private conversation involving its owner before sending the audio file to one of her contacts.

All companies that manufacture smart speakers regularly deny allegations of snooping, with Google claiming: “All devices that come with the Google Assistant, including Google Home, are designed with user privacy in mind.” However, given that the company applied for a 2017 patent noting that the “volume of the user’s voice, detected breathing rate, crying and so forth” could help a smart speaker determine their mood, that level of insight seems a little too intrusive for comfort.

The culpability of Cortana

Our nosy little helper bots aren’t just limited to living in speaker systems, either. Cortana is Microsoft’s entry into the increasingly congested world of the virtual assistant, and it promises seamless integration with any modern Windows computer.

Cortana has four ways of mining data from users: phone information, location services, other Microsoft-based functions, and third-party services. As part of providing an intuitive level of companionship for its users, Cortana stockpiles information both in local computer files and in the cloud - enabling all Microsoft devices to synchronise.

The way Cortana gathers and stores users’ information has been a source of concern among privacy advocates - and Microsoft has even removed the simple toggle customers once had for switching Cortana on and off (it’s worth mentioning that there’s still a way of disabling Cortana, though it involves editing the Windows registry).

Fortunately, for those uncomfortable with Cortana lurking in the shadows and learning from their actions, there’s a way of limiting the amount of information shared with the omnipresent bot. Users can prevent the software from collecting information on their speech and handwriting patterns. Significantly, they can also stop Cortana from sharing their search history across other Windows devices.


Could chatbots have bad intentions?

Another cunning and increasingly popular breed of bot comes in the form of the ever-helpful chatbots we regularly see when visiting e-commerce stores online. However, unlike with Cortana, there’s very little you can do to filter the information that chatbots take from you.

Fundamentally, chatbots signify a triumph in machine learning. Emotionally intelligent chatbots, known as Emotional Chatting Machines (ECMs), employ emotional capabilities akin to those of humans when interacting with others. The ability to perceive, integrate, understand and regulate emotions is now perfectly workable within a robotic framework, and chatbots are capable of learning plenty about the people they’re chatting with through the questions they ask and the answers they record.

The arrival of GDPR across EU member states helps to keep consumers’ information more private at the hands of chatbots, but given a chatbot’s ability to seamlessly remember everything you send it, it’s very much worth avoiding the temptation to overshare with your robotic counterpart - regardless of how human it may seem.

The snoopers’ charter

The Snoopers’ Charter was the name given to the UK’s Draft Communications Data Bill devised in 2012 by then Home Secretary, Theresa May. The bill would’ve left internet service providers and mobile phone companies obliged to collect masses of data on the public’s internet browsing history - including lists of websites visited, email correspondence, voice calls, internet gaming and SMS messaging services.

Ultimately, the Snoopers’ Charter was shelved, but the widespread fear of our privacy in a world of ever-enhancing AI and machine learning has kept the sentiment alive.

The power of accessing scores of information on large swathes of the public has already been realised - if Guardian journalist Carole Cadwalladr’s reports into the relationship between Cambridge Analytica and Facebook are verified.

There are accusations, corroborated by former company employees, that Cambridge Analytica exploited loopholes in Facebook’s rules on the sale of users’ information to harvest the data of 87 million subscribers to the social network and algorithmically tailor political advertisements to suit their respective personalities.

Given that there’s tangible evidence that the legal framework designed to keep us safe from artificial intelligence and machine learning is subject to compromise, perhaps it’s time to think carefully about the amount of information we supply our helper bots with.

Decentralised, blockchain-based companies like Irbis Network are already looking at providing greater levels of confidence to mobile phone users. There’s even an option for users to disguise their voice in order to stay protected from automatic recordings.

Also aiming to address the bigger picture is the SEED Project, one of the very first projects constructed with the aim of both powering and managing the burgeoning AI and chatbot ecosystem using blockchain. Nathan Shedroff, SEED’s CEO, summarised his aims thus: “In essence, we are talking about policing bots in accordance to their own licencing agreements — not some arbitrary standard. We don’t care to take an editorial stance; instead, we are ensuring that the bot does what it says it is supposed to do.”

In a world that’s obsessed with supplying better convenience for customers everywhere, it’s safe to say that our robotic buddies are here to stay - and will ultimately help make our lives collectively easier every day. But until a tangible level of policing for this burgeoning technology is rolled out, it might be best to avoid oversharing with Alexa the next time you spark up a conversation.

Dmytro Spilka, Founder and CEO, Solvid