Is your data lost in translation?


Anyone learning a language or trying to ‘get by’ in another country will testify that it’s no easy feat, and so, in our infinite wisdom, we have come to over-rely on free online translation services to communicate with the wider world around us. The trouble is, the algorithms fuelling automated localisation do not take factors such as context, syntax or dialect into account. As a result, online-translated text is often littered with mistranslated words and garbled phrases.

Welsh is a particularly difficult language to learn, and with local councils turning to the web for free localisation, official signage and websites across Wales are filled with errors. For automated translation services to deliver accurate results, there must be enough language data online for machines to use as the basis for a satisfactory service. In recent years, advances in subsets of artificial intelligence (AI), such as machine learning and deep learning, have further improved processing ability and precision.

But in practice, algorithms often fail to identify the real meaning of words because they are driven by variable or low-quality data. No matter how intelligent machines are, their output is only as accurate as the data feeding them. And with the fourth industrial revolution soon set to fuel greater cross-sector adoption of automated technology, this is a vital lesson.

To ensure that key insights, marketing messages, and operational instructions are not lost in translation, companies must build a reliable data foundation for AI tools to work from.

So, how can this be done?

Take control of data overflow

The first step towards enhancing accuracy is improving how data is organised. Although this may sound obvious, it’s a task that’s proving surprisingly challenging due to rising data volumes. Increasing adoption of connected devices — recent research from GlobalData shows UK adults own an average of 3.5 devices each — means companies are overloaded with information. And while this data has the potential to be extremely useful, especially for personalised customer experiences, wading through it to find valuable insight isn’t always easy.

Plus, doing so is often made more difficult by the tendency to collate, store and analyse data from different sources separately. With details about individual interactions, preferences and habits held across multiple systems and departments, it is near impossible to gain a complete view of consumers — let alone bring all disjointed datasets into order.

To address this issue, organisations must reconfigure data management from the ground up, ensuring all stages are synchronised and integrated. And the most effective way to achieve that is by employing data orchestration.

Wait, what’s data orchestration?

The term applies to the process of using several tools in unison to bridge silos and create one real-time dataset, frequently a 360-degree view of consumers. Typically implemented via platforms that can integrate with many other internal or external systems, it is most successful when companies take a holistic approach, beginning by simultaneously collecting information from every source, including app, in-store and social media interactions. Once obtained, data is cleansed and merged to form accurate, actionable insight, with varied information streams translated into the same language and consolidated into a single data hub.
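To make the idea concrete, here is a minimal sketch of the collect, cleanse and consolidate step in Python. The source names, field names and normalisation rules are illustrative assumptions, not a description of any particular orchestration platform.

from datetime import datetime, timezone

# Hypothetical raw events held in three separate silos.
app_events = [
    {"email": "ANA@example.com", "event": "viewed_product", "ts": "2024-05-01T10:15:00"},
]
store_events = [
    {"Email": " ana@example.com ", "event": "purchase", "ts": "2024-05-02T14:03:00"},
]
social_events = [
    {"email": "ana@example.com", "event": "clicked_ad", "ts": "2024-05-03T09:40:00"},
]

def cleanse(record):
    """Translate differently shaped records into one common format."""
    email = (record.get("email") or record.get("Email") or "").strip().lower()
    return {
        "email": email,
        "event": record["event"],
        "ts": datetime.fromisoformat(record["ts"]).replace(tzinfo=timezone.utc),
    }

# Consolidate every stream into a single, time-ordered data hub.
data_hub = sorted(
    (cleanse(r) for r in app_events + store_events + social_events),
    key=lambda r: r["ts"],
)

for row in data_hub:
    print(row["email"], row["event"], row["ts"].isoformat())

The point of the sketch is simply that records arriving in different shapes from different systems end up speaking the same language in one place, ready for analysis.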

At the same time, smart tools assess data as it comes in and start to compile unique profiles. Known as stitching and enrichment, this stage is where the pieces of multi-faceted consumer journeys come together. For instance, as the cleansing and blending procedure matches consumers with specific device IDs, data about smartphone activity can be instantly added to profiles.
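In simplified form, the stitching and enrichment stage might look something like the sketch below; the matching rule (a device ID already linked to a consumer) and the profile fields are assumptions chosen purely for illustration.

from collections import defaultdict

# Hypothetical identity graph: device IDs already matched to a consumer ID
# by the cleansing and blending stage described above.
device_to_customer = {"device-123": "cust-001", "device-456": "cust-001"}

# Incoming smartphone activity, keyed only by device ID.
smartphone_events = [
    {"device_id": "device-123", "activity": "opened_app"},
    {"device_id": "device-456", "activity": "redeemed_voucher"},
]

# Unique profiles compiled as data arrives.
profiles = defaultdict(lambda: {"activities": []})

def stitch_and_enrich(event):
    """Attach device-level activity to the matching consumer profile."""
    customer_id = device_to_customer.get(event["device_id"])
    if customer_id is None:
        return  # unknown device: hold the event back rather than guess
    profiles[customer_id]["activities"].append(event["activity"])

for event in smartphone_events:
    stitch_and_enrich(event)

print(profiles["cust-001"]["activities"])  # ['opened_app', 'redeemed_voucher']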

When both stages are complete, companies end up with a real-time, addressable picture of individuals that is constantly updated: the perfect basis for AI-aided systems to use for myriad purposes, from delivering instantly tailored advertising messages to offering joined-up customer service experiences through chatbots, social media or email.

Don’t forget about ethics

While essential, precision and uniformity aren’t all it takes to make data reliable. To produce entirely trustworthy insight, companies also need to keep data practices ethical.

As a result, step two in refining data is dual-focused. First, it’s vital for usage to comply with legislation such as the General Data Protection Regulation (GDPR) and the UK’s Data Protection Act. Specifically, measures must be taken to safeguard consumer privacy and ensure requests for data access can be easily met — incidentally, something that creating a centralised data layer makes easier and faster. It is also worth choosing technology partners wisely, working only with those that provide in-built privacy protection and security.
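As a rough illustration of why a centralised data layer helps here, a subject access request can become a single lookup rather than a hunt across systems. The hypothetical function below sketches that idea only; it is not a compliance tool.

import json

# Hypothetical centralised data layer: one consolidated record per consumer.
central_profiles = {
    "cust-001": {
        "email": "ana@example.com",
        "consents": {"marketing": True, "profiling": False},
        "activities": ["opened_app", "redeemed_voucher"],
    },
}

def export_subject_data(customer_id):
    """Answer a data access request from the single data hub."""
    profile = central_profiles.get(customer_id)
    if profile is None:
        raise KeyError(f"no data held for {customer_id}")
    return json.dumps(profile, indent=2)

print(export_subject_data("cust-001"))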

Next, companies need to guard against bias. AI tools should be free from partiality, but any connection with humans has its hazards. For example, if non-diverse teams create the data that fuels AI tech, unconscious bias could influence algorithms. Or, if programming is flawed, human prejudice may be mimicked, as happened when YouTube’s autocomplete search function began suggesting unsavoury terms inspired by user activity.

Consequently, deploying internal measures to reduce the risk of bias is key, particularly as there are currently no mandatory guidelines for ethical AI usage. As well as covering all bases during programming and setting rules that prevent subjectivity, companies must build diverse teams that bring varied perspectives and produce heterogeneous datasets. In this way, they can be assured the decisions of AI tech are as reliable as possible and powered by objective information.
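One such internal measure could be a simple representation check on training data before it feeds an AI system. The sketch below is hypothetical; the attribute, the records and the 20 per cent threshold are assumptions made for illustration.

from collections import Counter

# Hypothetical training records carrying a demographic attribute.
training_records = [
    {"region": "Wales"}, {"region": "Wales"}, {"region": "Scotland"},
    {"region": "England"}, {"region": "England"}, {"region": "England"},
]

def representation_report(records, attribute, floor=0.2):
    """Flag attribute values making up less than a minimum share of the data."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        value: {"share": round(n / total, 2), "under_represented": n / total < floor}
        for value, n in counts.items()
    }

print(representation_report(training_records, "region"))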

Where will AI take us next?

Intelligent tech already has a sizeable impact on everyday life and business, and this is only set to increase. McKinsey has predicted AI will have driven a $13 trillion boost to the global economy by 2030, with categories such as computer vision, machine learning, process automation and virtual assistants playing a key role.

What exactly this will look like will vary. In customer service terms, it’s probable firms and individuals will be more closely connected through permanently accessible touchpoints, such as chatbots. Progress in large-scale, real-time data evaluation is also likely to fuel better and more personal experiences across the digital landscape. There might even be opportunities for companies to reach new levels of convenience by tying services to data recorded by smart products, such as delivering a fresh milk order when stocks are running low.

But for the ideal AI-enabled future to become a reality, data must be used well. The Google Translate story is a perfect example of what can happen when tools with high potential are fed poor-quality data. To avoid losing the integrity and value of data to silos and bias, businesses need to organise their information assets, work to limit any chance of negative influences and — above all — put the privacy and needs of consumers first.

Lindsay McEwan, VP and Managing Director, EMEA, Tealium