In contrast with pilots, or with the more time-consuming, project-led approach used in most standard business and IT initiatives, a strategic "fail fast" methodology can offer a less restrictive and more fruitful way of searching for value in the new data sources that typically sit at the core of big data initiatives.
Instead of focusing on a single, pre-defined objective, a fail fast methodology involves carrying out multiple, smaller experiments concurrently and building on the results, in many cases making it easier to maximise successes. From an IT and data science perspective, this can help to create the perfect discovery environment. Crucially, it can also enable a firm to learn quickly and identify a strategy for big data that's right for the organisation, its people and data.
Like a successful business start-up, a team taking a fail fast approach needs to establish which factors hold it back as well as which ones work. If the evidence shows a project is not progressing, it's important to have the freedom to recognise that, not as a 'failure' but as part of the experimental learning, and then to move on from it.
For this reason, those involved should document what happens in their experiments and design their analytics projects so that results can be measured. A good example is 'blind' A/B testing: a randomised experiment with two variants, A and B, which serve as the control and the treatment respectively.
The two versions are identical except for a single variation that might affect a user's behaviour. Version A might be the version currently in use (the control), while version B is modified in some respect (the treatment).
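To make the comparison measurable, the conversion rates of the two versions can be compared with a standard two-proportion z-test. The sketch below is illustrative only: the visitor and conversion figures are hypothetical, and the function name is our own, not from any particular analytics tool.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test comparing the conversion rates of
    control (A) and treatment (B) in an A/B experiment."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical figures: 10,000 visitors per variant;
# A converts 200 of them (2.0%), B converts 260 (2.6%).
z, p = two_proportion_z_test(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 2.83, p ≈ 0.005
```

With these illustrative numbers the uplift is statistically significant at the 1% level, so version B would be a candidate to roll out; a p-value above the chosen threshold would mark the idea as one of the many experiments that simply didn't pay off.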
Commercially, such experiments are commonly used in web development, marketing and advertising. In e-commerce, for instance, this could help to identify changes to web pages that increase or maximise an outcome of interest, or explore the path to purchase. The experimental approach means many ideas can be tested in a small, low-risk way. Many will have no beneficial impact and some will have a negative one, but a few can deliver a significant uplift in results.
Similarly, even a traditional 'bricks and mortar' retailer can use this methodology to test changes across various shops in a chain. Without such a benchmark, by contrast, it is almost impossible to ascertain whether the changes made are a success or a failure.
From a corporate perspective, fail fast makes good business sense: as requirements-gathering isn't always strong amongst IT teams, the chance to involve the business upfront before committing to a potentially costly, long-term IT project can present a distinct advantage. Crucially, this flexible approach can also help stakeholders to define the business logic more clearly, understand where the value lies and pre-define a more specific project goal or outcome, prior to the underlying technology being built and productionised.
Kevin Long is the business development director for Teradata UK.