
Data privacy must evolve in an AI-centric world


Decades ago, before smartphones, clouds, and connected devices became part of everyday life, data had a sole home: the database. This central repository served as a starting point for any type of data analysis and business decision. Today, there are no boundaries. Data resides everywhere. It crosses company lines, streams in from connected IoT devices, and even changes in shape and form as it travels across an ecosystem.

It’s no longer possible to protect an enterprise by locking down a single database. Companies must pull data, often in real time, from various sources and consolidate everything in a data lake, data warehouse, or other repository that typically resides in the cloud. Regardless of the exact approach, the goal is to drive business value across multiple BI, analytics, and AI use cases.

Ironically, the worldwide uptick in AI adoption doesn’t make the landscape any simpler. Data scientists can’t develop machine learning models without data, but a growing focus on data privacy and security introduces potential bottlenecks for putting all of that data to work, particularly for use cases that rely on confidential and sensitive data.

In addition, businesses are increasingly exploring ways to share and pool data among a group of organizations for specific use cases, making it possible to build a common AI model that benefits every participant. The result is demand for new AI methodologies that incorporate privacy and security by design.

A new era emerges – AI on protected data

The introduction of far more advanced algorithms and computing frameworks has changed business in profound ways, making it possible to gain insights that were once unimaginable. But with the opportunity comes challenges: as organizations adopt these new tools, conventional data protection approaches are often inadequate. We have reached a point where it’s necessary to ensure that every person feels safe with AI.

There’s a growing belief that AI should be used in ethical and trusted ways, and that AI technology should be transparent and drive improvements for the world. As a result, a business must ensure that its use of data adheres to strict ethical standards as well as regulatory requirements, and consumers must feel their personal data isn’t being misused or abused.

At first glance, building a strict framework for data privacy may seem like a way to heap red tape and bureaucracy onto AI and data science practitioners. Already overloaded security and IT teams may perceive that this creates more work. However, the opposite is actually true. The ability to secure data means that an organization can innovate and operate faster and better. It’s actually freed from many onerous tasks.

For example, when an organization has access to protected data it can skip costly and time-consuming verification processes that would otherwise be necessary. It can confidently move forward with projects that involve trade secrets and sensitive customer records. It’s possible to begin a project without detailed discussions about security and privacy concerns.

Putting protection into play

Ultimately, success involves more than simply ticking off boxes on data security and privacy checklists. It’s critical to develop a clear strategy, along with a business plan, for moving forward and implementing a Secure AI framework. The first task is to define which techniques are most useful for securing data and algorithms, and to map them to the different business use cases. This provides a base for embarking on valuable use cases while also streamlining them. A second step is to determine whether decentralized AI needs to be part of the equation. Cross-border AI within a single company or group is often a reason to adopt a decentralized AI model, also referred to as federated learning.
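The core idea behind federated learning can be sketched in a few lines: each organization trains on its own data locally and shares only model parameters, which a coordinator averages into a common model. The toy model, parties, and data below are purely illustrative assumptions, not any specific vendor's implementation.

```python
# Minimal federated averaging (FedAvg) sketch, assuming a 1-D linear
# model y = w * x. Raw records never leave each party; only the
# trained weight w is shared with the coordinator.

def local_update(weight, data, lr=0.1):
    """One local pass of gradient descent on a party's private data."""
    w = weight
    for x, y in data:
        grad = 2 * (w * x - y) * x  # derivative of squared error
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Coordinator step: average weights, weighted by dataset size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(local_weights, sizes)) / total

# Two hypothetical organizations whose data follows y = 2 * x.
party_a = [(1.0, 2.0), (2.0, 4.0)]
party_b = [(3.0, 6.0)]

global_w = 0.0
for _ in range(50):  # communication rounds
    w_a = local_update(global_w, party_a)
    w_b = local_update(global_w, party_b)
    global_w = federated_average([w_a, w_b],
                                 [len(party_a), len(party_b)])
```

Because both parties' data follows the same underlying relationship, the shared weight converges toward 2.0 without either party ever exposing its records; production systems add secure aggregation and differential privacy on top of this basic loop.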

Both components require a cross-functional partnership that involves the AI function—typically the chief AI or data officer—along with the chief security officer and privacy officer. This builds a foundation for Secure AI, including protecting data, using appropriate privacy techniques, and ensuring that models comply with regulations and align with internal ethics guidelines. It’s critical for an enterprise to understand what Secure AI means for its specific organization. There’s also a need to embed key AI metrics and controls into AI workflows and align them with privacy and security requirements.

This AI framework delivers full transparency around what data was used, what privacy level was applied to data, what techniques were used to apply ethical standards and privacy, and how to build AI systems that can be audited later—even when they reside outside an organization. Those lacking these components often wind up debating who can and who can’t access data. Worse, an organization must repeat this painful process for every project. This slows the organization down and increases the risk of a breach or failure. Ultimately, it translates into lost value.

Good values

In the end, a best-practice approach to Secure AI requires an organization to identify and define the end-to-end process for collecting data; building and deploying AI platforms that can work with protected, sensitive data; and developing an IT framework that ensures data in motion remains protected and anonymized when necessary. This need extends to websites, apps, devices, and other systems. Likewise, it’s vital to watch how various data sources and models change—and impact one another.

Finally, there’s a need to know that specific tools protect data across an ecosystem. This includes multi-cloud and hybrid-cloud environments (including containers and migrations that occur within clouds); AI protection solutions that anonymize, de-identify, or tokenize data and access; encryption methods such as homomorphic encryption that can hide the actual data even while it’s being analyzed; policy enforcement frameworks that support initiatives like GDPR and the California Consumer Privacy Act; and robust privacy reporting and auditing tools to ensure that systems are performing as expected.
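To make the tokenization idea above concrete, here is a minimal sketch of deterministic tokenization: sensitive fields are replaced with stable tokens before analytics, while a separately guarded "vault" retains the reversible mapping. The key, function names, and record are illustrative assumptions, not any vendor's actual API.

```python
# Deterministic tokenization sketch. Analytics code can join, count,
# and group on tokens without ever seeing the underlying values.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-via-a-real-kms"  # assumption: managed by a KMS

def tokenize(value: str) -> str:
    """Derive a stable, non-reversible token via HMAC-SHA256."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256)
    return "tok_" + digest.hexdigest()[:16]

vault = {}  # reversible token -> value map; access-controlled in practice

def protect(record: dict, fields: tuple) -> dict:
    """Return a copy of the record with sensitive fields tokenized."""
    out = dict(record)
    for field in fields:
        token = tokenize(out[field])
        vault[token] = out[field]  # only privileged code may read this
        out[field] = token
    return out

safe = protect({"email": "ada@example.com", "age": 36}, ("email",))
# safe["email"] is now "tok_..." while safe["age"] stays usable as-is.
```

Because the same input always yields the same token, joins across datasets still work, and only code with access to the vault (or the key, for re-derivation checks) can link a token back to a person.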

Many organizations are only beginning to grasp the possibilities of Secure AI and assemble teams to create a more secure framework. However, adding and adapting these tools, technologies, and processes to fit this rapidly evolving space is critical. Organizations that get things right construct a working framework equipped for today’s business environment—but flexible and agile enough to respond to rapidly changing requirements in the data analytics space. They’re positioned to unleash innovation and, in a best-case scenario, achieve disruption.

Eliano Marques, Executive Vice President of Data & AI, Protegrity