
Developing the AI future

(Image credit: Sergey Nivens / Shutterstock)

Artificial Intelligence (AI) is starting to change how many businesses operate. The ability to accurately process and deliver data faster than any human could is already transforming how we do everything from studying diseases and understanding road traffic behaviour to managing finances and predicting weather patterns. 

For business leaders, AI’s potential could be fundamental for future growth. With so much on offer and at stake, the question is no longer simply what AI is capable of, but where AI can best be used to deliver immediate business benefits. 

According to Forrester, 70% of enterprises will be implementing AI in some way over the next year. Additionally, our recent Evolution report revealed that 40% of IT decision-makers in the UK plan to increase IT budget for AI and machine learning projects in the next financial year, and a further 39% plan to invest in AI skills and personnel over the same timeframe.

For those looking to implement AI or machine learning projects, the compute bottleneck that used to hold back such projects has largely been eliminated. The application of GPU technology from the likes of NVIDIA, which has successfully expanded from developing world-leading graphics cards to becoming a major force in supercomputing, has played a big part in this. As a result, the challenge for many projects is now delivering data fast enough to feed the data analysis pipelines central to AI.

It is critical that organisations also carefully consider the infrastructure needed to support their AI ambitions. To innovate and improve AI algorithms, storage has to deliver uncompromised performance across all manner of access patterns – from small to large files, random to sequential, low to high concurrency – with the ability to scale linearly and non-disruptively to grow capacity and performance.

For legacy storage systems, meeting these requirements is no mean feat. As a result, data can easily end up in infrastructure silos at each stage of the AI pipeline – ingest, clean and transform, explore, train – making projects more time-intensive, complex and inflexible.
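The four pipeline stages named above can be sketched in miniature. This is purely illustrative – the function names and the toy in-memory data are hypothetical stand-ins for real ingest sources and models, not any particular product's API:

```python
# Minimal sketch of the four AI pipeline stages: ingest, clean and
# transform, explore, train. All names and data here are illustrative.

def ingest():
    # Stage 1: pull raw records from a source (here, an in-memory list
    # standing in for files or a data feed).
    return [" 4.0", "2.5", None, "3.5 ", "bad", "1.0"]

def clean_and_transform(raw):
    # Stage 2: drop unusable records and convert the rest to floats.
    cleaned = []
    for item in raw:
        if item is None:
            continue
        try:
            cleaned.append(float(item.strip()))
        except ValueError:
            continue
    return cleaned

def explore(data):
    # Stage 3: compute simple summary statistics before training.
    return {"count": len(data), "mean": sum(data) / len(data)}

def train(data):
    # Stage 4: fit a trivial "model" (here, just the mean as a predictor).
    return sum(data) / len(data)

raw = ingest()
data = clean_and_transform(raw)
stats = explore(data)
model = train(data)
```

When each stage reads from and writes to a separate silo, every hand-off between these functions becomes a copy across systems; a shared data hub removes those hand-offs.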

Bringing data together into a single, centralised storage hub as part of a deep learning architecture enables far more efficient access to information, increasing the productivity of data scientists and making scaling and operations simpler and more agile for the data architect.

Modern all-flash data platforms are ideal candidates to act as that central data hub. Flash is the only storage technology capable of underpinning and releasing the full potential of projects operating in environments that demand high-performance compute, such as AI and deep learning.

UC Berkeley’s RISELab is an exemplar of this approach. The team has pioneered one of the fastest real-time analytics tools in the world, driven by flash to support ADAM, an open-source, high-performance, distributed library for genomic analysis. As a result, the team is making incredible leaps in genomic sequencing, enabling researchers and clinicians to apply the results of genetic sequencing to treat, cure and even prevent thousands of diseases. This highly personalised care and treatment, based on an individual’s specific genetic makeup, not only optimises treatment but also improves post-operative care and rehabilitation. What’s more, the team is doing it all faster, and more affordably, than traditional methods that do not employ genetic information.

Man AHL, a London-based pioneer in the field of systematic quantitative investing, is also leveraging flash storage to create and execute computer models that make investment decisions. Roughly 50 quantitative researchers and more than 60 technologists collaborate to formulate, develop and drive new investment models and strategies that can be executed by computer.

With its all-flash data platform, the investment specialist is able to deliver the massive storage throughput and scalability required to meet its most demanding simulation applications, greatly improving the usability and performance of its technology to run multiple simulations. As a result, the solution offers great potential to be a game-changer when it comes to creating a time-to-market advantage. 

Flash storage arrays are well suited to these AI projects because they offer a parallelism that mimics the human brain, enabling multiple queries or jobs to run simultaneously. Building this type of flash technology into the very foundation of AI projects vastly improves the rate at which AI and ML initiatives can develop. For years, slow, complex legacy storage systems have been unable to cope with modern data volume and velocity, acting as a roadblock to next-generation insights and progression. Purpose-built flash storage arrays eliminate that roadblock, removing storage infrastructure as a barrier to organisations fully leveraging data analytics and AI.
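The parallel access pattern described above – many independent queries or jobs running at once against shared data – can be sketched with a thread pool. The in-memory dictionary here is a hypothetical stand-in for the storage layer; it only illustrates the access pattern, not any vendor's interface:

```python
# Sketch: several independent "queries" issued concurrently against a
# shared data store. The dict stands in for shared storage; all names
# are illustrative.
from concurrent.futures import ThreadPoolExecutor

store = {f"key{i}": i * i for i in range(100)}  # stand-in for the data hub

def query(key):
    # Each job reads independently; no query waits on another's result.
    return store[key]

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(query, ["key2", "key5", "key9"]))
```

On a serial medium the three reads would queue behind one another; a storage layer built for concurrency serves them side by side, which is the property the paragraph above attributes to flash arrays.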

The automotive industry is one of the best examples of where running an AI project on legacy infrastructure would simply never be possible. AI is critical to making driverless cars a reality, and in doing so making our roads safer. Zenuity, a joint venture between Volvo Cars, the premium car maker, and Autoliv, the worldwide leader in automotive safety systems, is aiming to put the safest autonomous vehicles on the road by 2021. Each vehicle is equipped with sensors such as LIDAR and cameras to safely navigate its surroundings. Millions of frames collected from the cars are used to train the deep neural networks that power the software running Zenuity’s fleet of self-driving vehicles. It is flash that provides the scalability and performance needed for a machine learning project of this magnitude.

Whether or not AI is central to your company’s core competency, it is a tool all organisations should consider using to bring efficiency and accuracy to their data-heavy projects. Those who don’t could be leaving their business at a severe competitive disadvantage. However, project leaders must ensure they have the infrastructure in place to support the massive data ingest and rapid analytics inherent in AI deployment.

James Petter, VP EMEA at Pure Storage

