
Making decisions with data – Connecting people and data for better analytics results

When thinking about making decisions, it’s important to differentiate between those that you make by yourself and those that you make with others. For many IT teams, building collaboration into how their businesses operate means deploying social tools such as chat or supporting real-time editing of documents by groups. However, when it comes to data, it’s a different story.

Part of this comes down to how data and analytics have been used in the past. Business analysts or consultants might crunch data to support a critical decision. The CEO might want sales figures and a competitive-win analysis to present to the board; heads of departments might look at their business performance and budgets in relation to each other. Teams in different geographies may want to see how they perform against each other, but with data sets that are balanced based on their budgets and opportunity levels. While decisions based on these sets of data might be made in teams, the data used would be static and isolated by the time it was submitted to senior management for review and action.

The format for sharing data is partly to blame: static PDFs and spreadsheets only let users consume data, and many teams work towards their own goals rather than as part of wider business initiatives. Take the disconnect between sales and marketing: if marketing is targeted on the volume of new opportunities alone rather than their quality, sales will receive a flood of leads with no real qualification that the prospects are interested, or are even decent targets in the first place.

Another example would be in areas like procurement that are tasked with getting the best deal for the company from suppliers. This normally translates into using lower cost providers. However, this may not meet the needs of the rest of the business around having availability of stock for demand levels or to meet specific customer service level agreements.

Without this ability to join up data, the decentralised model around analytics has fallen short. Similarly, collaboration around data is not yet built to scale across a business. If you think about the term “business intelligence” and how the industry, as a whole, has evolved around data, you could argue that we have concentrated on the intelligence part but forgotten about the larger business.

So how can we build more collaboration into our processes around data? More importantly, how can we do this at scale and for everyone?

Stopping the pendulum swinging between central and local 

Collaboration around data and decision-making should help all employees to improve their results. However, much development around BI and analytics focuses on the needs of analysts, rather than everyday employees. Even desktop discovery tools that put visualisation and dashboard creation into the hands of business professionals are aimed more at data-savvy analysts.

In the past, central IT handled BI and analytics requests for the whole business. If you wanted a report, you had to get in line. Over the last few years, the shift to desktop tools that can make data more visually appealing has helped to spread analytics beyond the central IT function and get data out to more people within the business.

Through the proliferation of desktop discovery tools, organisations attempted to empower people to work with data on their own. However, individual and departmental users ended up creating millions of “data silos” that leave people working in a vacuum, completely detached from the rest of the organisation. Further, these tools still rely on business analysts being familiar with data preparation to get the most out of them.

Today, big data processing tools such as Hadoop can save huge volumes of unstructured data into “data lakes.” This represents a switch back to a more centralised approach to analytics, as data scientists have to carry out the work necessary to get information out of these big data sets. This is currently holding back companies from making more use of all this information, as they continue to rely on individuals with advanced data manipulation skillsets to deliver value.

What happens next? A swing to more solutions that promise to get value out of Hadoop in particular? More point tools aimed at freeing up expensive data scientist time from fishing expeditions into company data lakes? Alternatively, rather than looking at central versus local deployments, it’s worth stepping back to consider how we collaborate around data in the first place.

Central IT teams can support large applications that provide critical data that the whole business depends on. However, central IT does not – indeed, should not – run reporting against the data that those applications generate. Instead, each user should be able to make use of the information for his or her purposes. The challenge is how to do this at scale.

Rather than pushing data to people and letting them work with it themselves, it’s better to bring people into a networked analytical environment – an analytic “fabric,” if you will – where data can be made available to them and they can extend the fabric with their own data. Central control and governance over the data can be maintained while each user can generate their own insights.
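To make the idea concrete, here is a minimal sketch of what such a fabric might look like in code. All names here (`Fabric`, `Workspace`, `publish`, `extend`) are hypothetical illustrations, not any vendor's API: the point is simply that central data stays read-only and governed, while each user layers their own data on top in a personal workspace.

```python
class Fabric:
    """Central store: governed data sets, read-only to consumers."""
    def __init__(self):
        self._datasets = {}

    def publish(self, name, rows):
        # Stored as a tuple so consumers cannot mutate the central copy.
        self._datasets[name] = tuple(rows)

    def read(self, name):
        return self._datasets[name]

    def workspace(self, user):
        return Workspace(user, self)


class Workspace:
    """Per-user view: central data plus the user's own additions."""
    def __init__(self, user, fabric):
        self.user = user
        self._fabric = fabric
        self._local = {}

    def extend(self, name, rows):
        # The user's own data lives only in their workspace.
        self._local.setdefault(name, []).extend(rows)

    def read(self, name):
        # Central rows first, then the user's additions layered on top.
        return list(self._fabric.read(name)) + self._local.get(name, [])


fabric = Fabric()
fabric.publish("sales", [("Q1", 100), ("Q2", 120)])

alice = fabric.workspace("alice")
alice.extend("sales", [("partner_est_Q3", 90)])

print(alice.read("sales"))   # central data plus Alice's rows
print(fabric.read("sales"))  # central data, unchanged
```

Everyone starts from the same governed source; extensions are additive and private until shared, which is the "same starting place, own requirements" property described above.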

If individuals need other sources of information – say, external data from a partner or social networking reports – then they can add these to the networked fabric, as well. The important point is that each individual is starting from the same place, can use data where they need it, and can serve their own requirements.

Getting collaboration to work

Understanding this balance between central governance and local flexibility is critical to the long-term success of BI projects. Most importantly, it should not be seen as an “either/or” discussion where governance has to be sacrificed for speed and agility. Instead, it’s worth looking at how data sets can be networked together to make the most of them.

This approach can support collaboration, as users can be provided with their own spaces to work with data, rather than relying on reports or visualisations provided to them by others. If you want to share a visualisation, it’s far more interesting to bring someone into the model that you have made, one based on a broader view of the business’s data, rather than relying on a disconnected data silo.

This model for sharing data analytics relies on full security and governance over access. For instance, while users can consume live data, this should not mean that they can change or edit those sources for everyone. Using a multi-tenant cloud architecture can help companies meet this challenge. In the cloud, multi-tenancy refers to running multiple distinct organisations on the same infrastructure at the same time. These organisations are separated from each other, and their data and applications don’t mix. In analytics, this approach can be used so that different users can manipulate the same data without impacting each other. This is particularly useful for data held within Hadoop implementations, where multi-tenant access is difficult or has an impact on performance.
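The isolation property described here can be sketched in a few lines. This is a simplified illustration, not how any particular cloud platform implements tenancy: each hypothetical `Tenant` gets its own deep copy of a shared source table, so changes made by one tenant never leak into another's view or back into the source.

```python
import copy

# A shared source table, as the central platform might hold it.
shared_source = {"orders": [{"id": 1, "qty": 5}, {"id": 2, "qty": 3}]}


class Tenant:
    """Illustrative tenant: a private, isolated version of the shared data."""
    def __init__(self, name, source):
        self.name = name
        # Deep copy = each tenant's own version; edits stay local.
        self.data = copy.deepcopy(source)

    def update_qty(self, order_id, qty):
        for row in self.data["orders"]:
            if row["id"] == order_id:
                row["qty"] = qty


acme = Tenant("acme", shared_source)
globex = Tenant("globex", shared_source)

acme.update_qty(1, 50)  # only acme's copy changes

print(acme.data["orders"][0])      # acme sees qty 50
print(globex.data["orders"][0])    # globex still sees qty 5
print(shared_source["orders"][0])  # the shared source is untouched
```

Real platforms achieve the same effect with access controls and copy-on-write storage rather than wholesale duplication, but the contract is the same: shared infrastructure, isolated views.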

Multi-tenancy, as it applies to data and analytics, involves letting users have their own space with their own version of all the data they require. These spaces can be used for analytics and then shared with others. This can be useful for sharing data with external contacts, as well. For example, supply chain analytics can be useful for companies to examine their performance. However, not many companies own the entire set of operations across their supply chain. To improve performance, the results of analytics can be shared with external companies so that every organisation can improve their own role in the supply chain.

To deliver this, security and governance over the data has to be maintained, as you are sharing data with external companies. However, the results from supply chain analytics can help improve results for all the organisations involved. By collaborating with partners, companies can get better results for their supply chain, tackling the issues they jointly face while also addressing company-specific objectives.

As collaboration around data becomes more important to businesses, they will have to look at how data gets shared and used across their operations. By connecting people and data through a shared analytical environment, organisations can make analytics accessible to all users and not just data-savvy analysts.

In turn, this should greatly improve decision making. Individual business users will have the ability to use and extend this networked fabric, while IT can eliminate data silos and maintain governance and trust.

Pedro Arellano, senior director of product strategy, Birst

Image Credit: Shutterstock/Sergey Nivens

Pedro Arellano is vice president, product strategy at Birst, leading development around networked data and analytics. Prior to Birst, he led marketing at MicroStrategy and hosted the Stereo Gol radio show.