LzLabs – the world’s first Software Defined Mainframe

Following the launch of its flagship product, LzLabs Gotthard, on July 5th, Thilo Rockmann, Chairman of LzLabs, answers a few questions on the aspects and implications of what the company calls the world’s first Software Defined Mainframe.

When did you start thinking about a Software Defined Mainframe? Why?

The Software Defined Mainframe has been in development for five years. We wanted to deliver a solution that could liberate customers from COBOL applications that hamper IT innovation and cost them millions of dollars to sustain, without putting them through the long and arduous process of re-compiling these applications.

How long did it take you to develop a deployable solution?

It took us roughly three years to reach a deployable solution which, after some customer sampling, we elected to develop further before we took it to market.

Who did you target initially? What kind of feedback did you receive?

We initially focused on customers with batch workloads, as these are the most straightforward to segment and therefore to migrate. There was a lot of excitement about this solution, and it proved the fundamentals of the technology, but it was clear to us that there was further opportunity in database and online applications as well, which we quickly incorporated.

How expensive is managing such a solution? And how expensive is it as a user?

The cost of implementation and management will vary depending on the scale and complexity of the migration, but what we can say is that users of the Software Defined Mainframe will see an order-of-magnitude cost saving.

How does it work?

The Software Defined Mainframe works by liberating legacy customer applications from outdated application programming interfaces (APIs), enabling a seamless shift from the mainframe to x86 Linux platforms without changing a single line of application code. The migration process requires no recompilation and no changes to data formats or job control language, and it is capable of transferring all major legacy application standards, including online, batch and database. This container-based software solution offers the only faithful re-creation of the primary online, batch and database environments, which enables unrivalled compatibility and exceptional performance while dramatically reducing IT costs. We are able to solve the CIO dilemma of keeping the applications running whilst maintaining and improving their ability to innovate.

What are the key differences between software-based “de-mainframisation” offerings, which have been around for a while, and proper virtualisation?

Traditional software-based migration tools require COBOL code to be re-compiled so that the applications can run on other systems such as Linux, Unix and so on. The Software Defined Mainframe enables the applications to be moved without any recompilation of the code, virtually eliminating the risk involved in this process. Historically, risk is one of the reasons mainframe migrations have not achieved the success that was expected. There is a lot of risk involved in modifying applications to run on different platforms when the people who originally wrote those applications are no longer around to help. At the same time, if you can get the applications onto new platforms, you give organisations the opportunity to dramatically reduce their cost of ownership, and put themselves in a position where they can move the applications forward more efficiently and support emerging business IT initiatives such as Cloud and open source.

Is it better to migrate to a private datacenter or to the cloud? If the latter, what kind of cloud? Public or hybrid?

We are not in the business of advising organisations on their business and IT strategies. What matters most to us is that companies are able to choose where and how they wish to deploy their applications and data. We are enabling these kinds of strategic conversations to take place in organisations that have traditionally been shackled by mainframe technology.

Mainframe is historically deeply rooted in the banking industry. Will banks migrate their sensitive data to the cloud?

It is completely true that the mainframe has a deep-rooted legacy in the banking industry. After all, 70 per cent of the world’s commercial transactions are performed through mainframe computing. You could argue that the banking industry has long been dependent on the mainframe, which is why many large banks still fork out billions of euros to maintain and invest in new mainframe technology. Now that we have the ability to seamlessly migrate workloads away from mainframes and onto off-the-shelf hardware as well as cloud, banks have the choice to opt for much more cost-efficient IT infrastructure.

We’re seeing a strong shift in the perception of Cloud in every industry, including finance. This is interesting for us because Cloud embodies many of the RAS (Reliability, Availability, and Serviceability) features that have defined mainframe systems for so many years. We don’t require our customers to move to the Cloud, but it will absolutely be a viable alternative for them now as a result of the Software Defined Mainframe.

Because the Software Defined Mainframe requires no changes to the application code, the risk associated with recompiling code has virtually disappeared. Coupled with the security systems maintained by major cloud providers such as Microsoft Azure, banks can now decide for themselves whether or not the cloud is their best bet for hosting their workloads, a choice that, given the elimination of that risk, has never before been open to established banks. Banks might also opt to run their applications on open-source-based commodity hardware. It is about choice: the ability to break free from a legacy proprietary system. Given the vastly different costs of ownership between mainframe technology and cloud or commodity hardware, we expect a heated discussion amongst bank CIOs on exactly this issue, and it is a serious discussion that they are finally able to have.