
AI and machine learning give new meaning to embedded analytics

(Image credit: Image Credit: Bluebay / Shutterstock)

Data democratization, the concept of empowering any employee to make data-driven decisions for their company regardless of skill set, was supposed to rival the Elysian Fields in its paradise-worthy promise of analytics.    

It’s easy to see why this concept made such a big splash in the enterprise. Since the early 2000s, companies have been amassing raw data, which has morphed into the $203 billion big data analytics market. But this data has always lacked transparency. With information housed in messy data architectures that created silos, companies struggled to gain a single big-picture view of what their data was telling them. IT departments were left to sort through data warehouse integrations and complex extract, transform, load (ETL) processes to try to create enough structure for any data analytics tool to offer a single view into solving business problems.

But then came a new problem: a 1,000-plus-person company couldn’t have every employee ask IT every time they had an analytics-worthy question to solve. This brought the emergence of self-service BI tools: dashboards that were supposed to present a clear picture of what was happening in the organization so each employee could be transformed into a citizen data scientist, capable of answering their own analytics-based questions and speeding time to insight without needing a tech-savvy middleman. However, while a single source of truth became the mantra, these static dashboards offered only a snapshot in time.

The concept of giving everyone in the enterprise access to mass amounts of data to drive decisions is flawed as long as advanced analytics requires a lot of handholding. It’s difficult for business users to keep up with the changes in self-service analytics tools and predictive monitoring while also keeping up with their traditional jobs. Employees don’t know how to interpret data from cumbersome dashboards and want automated, real-time decisions. Giving everyone in the enterprise access to as much data as possible became the equivalent of handing over a stockpile of ingredients with no recipe and expecting everyone to bake a delicious cake. Yes, they had all the pieces of information in their possession to accomplish their goals, but they didn’t have the training or knowledge to drive the best results. And it shows.

It turns out less than 20 percent of these citizen data scientists use advanced and predictive analytics tools consistently. Three-quarters of companies committed to big data analytics initiatives report that their revenue or cost improvements from the effort have been less than 1 percent. With self-service analytics and visualization tools turning into shelfware, what can save the state of big data analytics in the enterprise?

Industry momentum points to 2018 as the year that embedded analytics coupled with machine learning will begin to replace self-service analytics.

How AI Can Help  

Embedded analytics augmented by AI will drive down the manpower it takes in the enterprise to manage and analyze data. And over the past year, narrow AI (AI applications designed to perform a single task) has proven more than capable of offloading some of data scientists’ workloads.

Machine learning and its more advanced cousin, deep learning, are allowing companies not only to stay ahead of the massive amounts of data they have to analyze but also to analyze more complex data. Suddenly, photos, images, the sentiment of language on social media platforms and so forth are rich with potential insights, providing a much more holistic picture of the data a company can capture. AI software companies and Gartner predict that this ability, especially related to natural language processing, will push the evolution of self-service analytics tools, enabling companies to make more sense of their data than ever before.
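To make the idea of extracting sentiment from unstructured text concrete, here is a minimal sketch of a naive Bayes sentiment classifier in pure Python. The tiny training phrases and labels are invented purely for illustration; a real system would train a proper model on a large labeled corpus rather than hand-rolling one.

```python
import math
from collections import Counter

# Toy labeled examples (invented for illustration only).
TRAIN = [
    ("love this product great value", "pos"),
    ("excellent service very happy", "pos"),
    ("terrible support total waste", "neg"),
    ("awful experience very disappointed", "neg"),
]

def fit(examples):
    """Count word occurrences per sentiment label."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    return counts, vocab

def predict(text, counts, vocab):
    """Pick the label with the highest log-likelihood,
    using add-one (Laplace) smoothing for unseen words."""
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(
            math.log((c[w] + 1) / (total + len(vocab)))
            for w in text.split()
        )
    return max(scores, key=scores.get)

counts, vocab = fit(TRAIN)
print(predict("very happy with this great product", counts, vocab))
print(predict("awful waste of money", counts, vocab))
```

Even this crude word-counting approach separates positive from negative phrasing; deep learning models do the same job far more robustly at the scale of social media streams.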

In 2017, the Google AlphaGo team outplayed the world’s top player at a game that has 300 times the number of strategic possibilities of chess. But what’s even more exciting is that a few months after that win, AlphaZero, the next generation of AlphaGo, was able to beat the world’s best chess program without any person training the model. Instead, the team simply fed it the rules of chess, and it was able to dominate the field in only four hours.

That means that, unlike the past days of manually sifting through numbers to accurately forecast the best path to victory, AI is now rapidly learning on its own. This process sidesteps one of the last great tasks of data scientists, transfer learning, and replaces it with reinforcement learning, where the model trains itself. With new algorithms capable of understanding what to do without being told explicitly how, 2018 could be the year that we see fewer people analyzing data. This will directly impact self-service analytics platforms, as people will prefer to leverage ML instead of sifting through data, and most visualization tools will turn into shelfware.
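The idea that a model can learn from the rules alone, as AlphaZero did, can be sketched with tabular Q-learning on a toy problem. The corridor environment, rewards and hyperparameters below are invented for illustration: the agent is given only the rules of the "game" (how moves work and where the reward is) and discovers a winning policy by trial and error, with no human-labeled examples.

```python
import random

# Toy environment: a 5-cell corridor; the goal is the rightmost cell.
N_STATES = 5          # cells 0..4; cell 4 pays a reward
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # illustrative hyperparameters

def step(state, action):
    """The 'rules': move, stay inside the corridor, reward at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def train(episodes=500, seed=0):
    """Q-learning: improve action-value estimates from self-generated play."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            if rng.random() < EPSILON:                       # explore
                action = rng.choice(ACTIONS)
            else:                                            # exploit
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                           - q[(state, action)])
            state = nxt
    return q

q = train()
# The greedy policy the agent learned for each non-goal cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy steps right toward the goal from every cell, even though no one ever told the agent which moves were good. Systems like AlphaZero apply the same principle with deep networks and self-play at an enormously larger scale.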

Reinforcement learning could craft real-time advertisements and cater them to the format and messaging each individual desires, based on user behavior. It could be used to evaluate trading strategies on the stock market, coming up with out-of-the-box options by understanding only the parameters of trading instead of examining past behavior. It could completely personalize a user’s interface with a company’s website. The applications for the enterprise go on and on, and each requires less time and training from the employees who would benefit from AI-powered analytics.

The rapid pace of innovation in AI and machine learning is finally fulfilling much of the promise of big data. 2018 could be the year that ML provides the right data to the right people at the right time, allowing them to spend more time on their core responsibilities and less on overwhelming data-driven missions in self-service analytics. Machines are getting better than humans at understanding complex data, and once BI tools can take advantage of these advances in AI and machine learning, companies can get machine-accurate recommendations that finally deliver on data democratization and digital transformation.

Roman Stanek, Founder and CEO of GoodData


Roman Stanek is the CEO and Founder of GoodData. Roman is a passionate entrepreneur and industry thought leader with over 20 years of high-tech experience. His latest venture, GoodData, was founded in 2007 with the mission to disrupt the business intelligence space and monetize big data. Prior to GoodData, Roman was Founder and CEO of NetBeans, the leading Java development environment (acquired by Sun Microsystems in 1999) and Systinet, a leading SOA governance platform (acquired by Mercury Interactive, later Hewlett Packard, in 2006).