How to smoothly implement an AI project in your business and avoid AI purgatory

(Image credit: PHOTOCREO Michal Bednarek / Shutterstock)

Humans are creatures of habit. Our brain creates neural pathways from repetitive actions and thoughts to make us more effective. This, in turn, can result in us acting on auto-pilot. And it’s the reason why forming new habits, such as waking up earlier to exercise or eating less, so often fails: the brain resists rewiring established pathways.

And this translates into the workplace too. Most AI projects fail because forming new habits and implementing new processes is hard – and it gets even harder when the barriers to using and experimenting with the technology are too high to bother with. There are seven phases that any business needs to get through to escape what I’ve come to call AI purgatory.

This is something we learnt a lot about on our own journey. In 2006, we released our first product and discovered the barriers that prevent most organisations from using AI. Since then, we’ve condensed those lessons into the steps below to share with data scientists, so they are informed before embarking on an AI project.

The good news is that many of the steps below – the ones we saw companies struggle with most – can be avoided, with the right tools and platforms doing much of the heavy lifting.

The 7 levels of AI purgatory

Phase 1: Getting up to speed with AI

Firstly, AI should not be seen as a consideration for the IT team alone. It’s a fundamental change that will affect every part of the business. This means you need to get all the key decision makers on board. Without a deep business understanding of AI, companies can end up spending a lot of money solving the wrong problems.

Phase 2: Data acquisition and clean-up

Getting access to data and making sure you have a fresh supply of it is one of the most challenging elements of an AI project. Without good data – and lots of it – AI cannot succeed. After all, it’s only as good as the data you feed it. And unless you’re Google or Apple, sourcing that data is hard.
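To make the clean-up step concrete, here is a minimal sketch in plain Python – the field names and records are invented for the example – of the kind of scrubbing every dataset needs before it can feed a model: dropping exact duplicates and discarding rows that fail basic type checks.

```python
# Illustrative clean-up of raw records before they reach a model.
# The field names ("age", "income") are hypothetical.
raw = [
    {"age": "34", "income": "52000"},
    {"age": "34", "income": "52000"},   # exact duplicate
    {"age": "",   "income": "61000"},   # missing age
    {"age": "29", "income": "abc"},     # corrupt income
]

def clean(records):
    seen, out = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:                 # drop exact duplicates
            continue
        seen.add(key)
        try:                            # drop rows that fail type checks
            out.append({"age": int(r["age"]), "income": int(r["income"])})
        except ValueError:
            continue
    return out

cleaned = clean(raw)
# Only the one fully valid, unique row survives.
```

Real pipelines add normalisation, outlier handling and schema validation on top, but the shape is the same: most of the work happens before any model sees a single row.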

Phase 3: Understanding tools

AI tools are abundant. This is both a blessing and a curse: it can be overwhelming to stay on top of them all, never mind choose the best one for your project. Many are also not yet designed for commercial, industrial or operational use.

The AI software and hardware available change on a weekly, sometimes daily, basis. Finding a combination that works is tricky, and it can take a data scientist weeks of research to get lift-off with a tool.

And even once a decision has been made, you’ll find yourself revisiting this painful step over and over as changes are introduced in the software.

The crux of the issue is that most deep learning software is incredibly complex and was born in an academic environment, pandering to a highly intelligent, intellectual audience that speaks a particular language. To apply it to the business world is no job for the faint-hearted. 

Phase 4: Building models and teamwork

So far, the buildout of AI models has been a single-player game. The code that one data scientist or AI expert writes is unique and not easily compatible with what others are doing, which makes teamwork challenging.

Making models is an iterative process, too, so you may quickly find yourself in versioning hell with one user – and versioning apocalypse with multiple users. With changing software and hardware, the lack of tools for AI-configuration management will throw spanners in the works on a regular basis.
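One lightweight defence against versioning hell – a sketch of the general idea, not any particular product’s approach – is to derive a deterministic version id from the model configuration plus a fingerprint of the training data, so two collaborators can tell at a glance whether they are talking about the same model:

```python
import hashlib
import json

def model_version(config, data_fingerprint):
    """Derive a reproducible version id from the model config and a
    fingerprint of the training data. Identical inputs always yield
    the same id; any change to either yields a different one."""
    payload = json.dumps(
        {"config": config, "data": data_fingerprint},
        sort_keys=True,                 # deterministic serialisation
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

# Hypothetical configs: same inputs agree, a changed learning rate doesn't.
v1 = model_version({"layers": 4, "lr": 0.001}, "2019-03-01-snapshot")
v2 = model_version({"layers": 4, "lr": 0.001}, "2019-03-01-snapshot")
v3 = model_version({"layers": 4, "lr": 0.01},  "2019-03-01-snapshot")
```

It doesn’t replace proper configuration management, but even this much discipline stops two people from unknowingly comparing results from different models.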

Phase 5: Teaching your models

Companies will need to invest a lot of time and resources to train, build and rebuild models until they arrive at the best one. This is a logistical nightmare, exacerbated by the technical complexities of the specialised hardware used for deep learning, and much of the effort goes into building and rebuilding the tooling for training models and managing the trained ones.
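The rebuild loop itself is simple to describe, even though each iteration is expensive. A toy sketch, with a stand-in scoring function replacing what would really be hours of GPU time per run:

```python
import random

random.seed(0)  # make the toy runs reproducible

def train_and_score(lr):
    # Stand-in for an expensive training run: in reality this step
    # means hours on specialised hardware. Returns a validation score
    # that peaks near a hypothetical "best" learning rate of 0.01.
    return 1.0 - abs(lr - 0.01) + random.uniform(-0.001, 0.001)

results = {}
for lr in (0.001, 0.01, 0.1):       # every entry is a full retrain
    results[lr] = train_and_score(lr)

best_lr = max(results, key=results.get)  # keep only the best model
```

The loop is three lines; the pain is that every pass through it costs real hardware time, and every score you want to compare later has to be logged and tied back to the exact model that produced it.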

Phase 6: Putting your AI in production

At this stage, a data engineer takes over from the data scientists and AI experts and tries to rewrite the spaghetti code into production-grade code. This often fails: the engineers tend not to understand the maths behind the model and optimise away important functions. Frustrations build, and so does the massive software stack needed to support a complex live production environment.

Phase 7: Maintenance

As AI continues to evolve, data will need to be constantly added and changed, which drives costs even higher. It also means the same experts who created the model need to be kept on, as they are often the only ones who understand and can reproduce what they have built.

There’s never a clear exit point for an AI project when you fear your valiant efforts are failing. Once you’re in, you’re in. And often projects will be stuck in a never-ending loop between building models – and training and re-building models – costing a lot of money and never resulting in delivery. Projects require massive upfront investment with very little guarantee of results.

There are so many moving parts that you’ll constantly find yourself thinking, “Just one little tweak and this model will work!” But you never quite get there, because the reality is that building a production-grade AI system is beyond the reach of all but a few who can throw massive resources at projects.

If, by some miracle, your model goes into production, the cost of ownership and maintenance will perpetuate your life in purgatory.

The good news

The first two levels of AI purgatory are inescapable. If your organisation isn’t thinking about how to integrate AI into the core of its business model, you’ll never identify the areas where it could bring the most value – and there is no way around the work of acquiring and maintaining usable data.

But as deep learning becomes an increasingly important initiative for everyone from startups to large enterprises, AI platforms are now available that can handle steps 3 to 7. Using an operational AI platform can spare you the perplexing and laborious tool research, the coding of an AI model, the training of that model, the move into production, and the endless maintenance.

If the available tools are so complex that only AI experts can use them to full effect, AI’s potential to become operational is massively limited, and the barriers to entry become much higher for firms that don’t have AI experts on board. But it doesn’t have to be this way. Many data scientists and developers could start experimenting with how AI can make processes – and even entire supply chains – more efficient. They just need the right operational tools to help them on this journey.

Take Uber and Lyft using machine learning to predict the arrival of their cars, hospitals using the technology to improve the efficiency of cancer treatments, banks using AI to detect fraudulent checks, or airlines using it to fly planes – the original self-driving vehicles. Scientists have even suggested that AI could illuminate the mystery of dark matter. The AI world is a data scientist’s oyster; they just don’t know how easy it can be yet.

Luka Crnkovic-Friis, CEO and founder, Peltarion

Luka Crnkovic-Friis is the CEO and founder of Peltarion, a Stockholm-based company that makes AI and deep learning accessible, affordable, and reliable for everyone. Luka has more than 15 years of experience with neural networks and their industrial applications. His background is in engineering, specialising in artificial intelligence, primarily deep learning.