Piece by piece: A low-risk process for controlled cloud migrations of business-critical software

The best way to minimize risk while moving complicated, business-critical applications from on-premises to the cloud is to follow a three-step process: 

  • In step one, IT should lift and shift applications to a cloud environment as-is, without changing any code.  
  • In step two, IT teams can make copies of the application in the cloud to improve software development and testing while refactoring the original to use cloud-native services piece by piece. Incremental automation for environment builds can also be introduced without using cloud-native services.  
  • Lastly, in step three, much of the application will exist in a cloud-native format with a connection back to on-prem. This process makes for a faster, more secure route to cloud innovation, while reducing the dependency on current on-prem resources at the same time. 

This process allows organizations to take advantage of cloud benefits in cost and flexibility while minimizing the risk of breaking critical applications or causing delays. 

Lift and shift 

First, IT should recreate on-prem applications in the cloud without changing any code. This should be a full clone of the original system without re-engineering any components into cloud-native equivalents. Make sure to create the same number of LPARs/VMs, same disk/CPU/memory allocations, same file system formats, same IP addresses, same hostnames, same network subnets, etc.
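The "clone everything exactly" rule above can be made concrete. The sketch below is a minimal, hypothetical illustration (the HostSpec fields and names are assumptions, not any vendor's API): a lift-and-shift spec carries every attribute of the on-prem host over unchanged.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class HostSpec:
    """Attributes that must match exactly for a like-for-like clone."""
    hostname: str
    ip_address: str
    subnet: str
    cpus: int
    memory_gb: int
    disk_gb: int
    filesystem: str

def clone_spec(source: HostSpec) -> HostSpec:
    # A lift-and-shift clone re-uses every attribute unchanged:
    # no re-IP, no resizing, no filesystem conversion.
    return HostSpec(**asdict(source))

onprem = HostSpec("erp-db-01", "10.20.30.11", "10.20.30.0/24",
                  cpus=8, memory_gb=64, disk_gb=500, filesystem="jfs2")
assert clone_spec(onprem) == onprem  # the cloud copy is an exact replica
```

The point of the frozen dataclass is that nothing gets "improved" during this step; re-engineering comes later, piece by piece.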

Remember that applications running in IBM i or AIX can’t be lifted and shifted directly to AWS, Azure or GCP without specialized solutions. The benefits of adding cloud flexibility to traditional applications generally outweigh the investment in such a solution.

Hand out copies 

Once an application is in the cloud, capabilities such as ephemeral (on-demand) lifecycles, rapid cloning, software-defined networking and API automation can be applied to it. Furthermore, once a group of LPARs/VMs depicting an “environment” is created in the cloud, that environment can be saved as a template and then used to clone other working environments. Clones are exact replicas of the template, all the way down to the disk allocations, hostname, IP address and subnet. Multiple environment clones may run in concert with each other without running the risk of colliding, though the effort involved in setting this up varies from one cloud provider to another. 

Designing ready-to-use environments from a template is the most important part of this approach. It lets IT create multiple clones for various dev/engineering/test groups, all of which can run simultaneously. Most cloud providers offer built-in access controls so users may only access what has been assigned to them (i.e., an ENG user cannot see an environment solely assigned to QA). Users can also have role assignments allowing them to manage LPARs/VMs defined in an environment assigned to a specific project. 
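A minimal sketch of the template-and-clone workflow, assuming a simplified model (the class names and the group-based access check are illustrative, not a specific provider's API):

```python
import copy

class EnvironmentTemplate:
    """A saved snapshot of the group of LPARs/VMs that make up an environment."""
    def __init__(self, name, hosts):
        self.name = name
        self.hosts = hosts  # each host: hostname, IP, subnet, disk layout...

    def clone(self, assigned_group):
        # An exact replica -- same hostnames, IPs and subnets --
        # visible only to the group it is assigned to.
        return {"template": self.name,
                "hosts": copy.deepcopy(self.hosts),
                "group": assigned_group}

def can_access(user_group, environment):
    # Built-in cloud access controls: users only see environments
    # assigned to their group (ENG cannot see a QA-only environment).
    return environment["group"] == user_group

template = EnvironmentTemplate(
    "erp-prod", [{"hostname": "erp-app-01", "ip": "10.20.30.10"}])
qa_env = template.clone("QA")
eng_env = template.clone("ENG")
assert can_access("QA", qa_env) and not can_access("QA", eng_env)
```

Because each clone is a deep copy, the QA and ENG teams can modify their environments independently without touching the template or each other.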

Use an EVR to duplicate address spaces  

To build multiple working environments that replicate the network structure of the final target system, some form of separation is needed to keep the cloned environments from colliding. For our purposes, “replicate” means re-using the hostnames, IP addresses and subnets living in each environment. At this point, each unique environment should be in its own software-defined networking space, invisible to other environments running at the same time, so that each environment becomes a virtual private data center. By allowing duplicate hostnames and IP addresses, individual hosts don’t have to go through a frustrating, time-consuming “re-IP” process.

There are various ways to achieve this. Here’s one example using an “environmental virtual router” (EVR). Duplicated environments communicate back to on-prem resources through the EVR, which cloaks the lower VMs containing the same hostnames and IP addresses and exposes a unique IP address to the larger on-prem network. Working in conjunction with a “jump-host,” the EVR can be configured to forward SSH requests (via SSH proxying, available as ProxyJump in OpenSSH 7.3 and higher), which lets users SSH into any host in an environment. From on-prem, users SSH to a host in the environment (e.g. ssh user@environment-1-host-2); the EVR exposes a unique IP address to on-prem, then delivers the connection to the individual environment’s VM. The result is a simple but sophisticated way for several cloned environments to exist in tandem without disturbing basic network structures.
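The EVR's address translation can be sketched in a few lines. This is a conceptual model only (the class and method names are assumptions): it shows how two environments can reuse the same inner IP while each presents a unique address to on-prem.

```python
class EnvironmentVirtualRouter:
    """Conceptual EVR: cloaks duplicate inner IPs and exposes one
    unique address per environment host to the on-prem network."""
    def __init__(self):
        self._exposed = {}  # exposed_ip -> (environment, inner_ip)

    def expose(self, environment, inner_ip, exposed_ip):
        # Many environments may reuse the same inner IP; each mapping
        # gets its own unique exposed address on the on-prem side.
        self._exposed[exposed_ip] = (environment, inner_ip)

    def route(self, exposed_ip):
        # Deliver an inbound on-prem connection to the right
        # environment's VM, even though inner IPs collide.
        return self._exposed[exposed_ip]

evr = EnvironmentVirtualRouter()
evr.expose("environment-1", "10.20.30.11", "192.168.50.11")
evr.expose("environment-2", "10.20.30.11", "192.168.50.12")  # same inner IP
assert evr.route("192.168.50.11") == ("environment-1", "10.20.30.11")
assert evr.route("192.168.50.12") == ("environment-2", "10.20.30.11")
```

In practice this mapping would be implemented with NAT and software-defined networking rather than a Python dictionary, but the routing logic is the same: the collision is resolved at the environment boundary, not by re-IPing hosts.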

Refactor or replatform piece by piece 

While various teams are working with their cloned applications, IT can begin to progressively refactor the original to use native cloud services. There are several established design patterns for this (the “Sidecar” and “Strangler” patterns, for example). This incremental strategy allows for progressive transformation as opposed to a ground-up rewrite. Refactoring is done piece by piece so the overall application can keep running, avoiding a net-new, application-wide development effort. Rewriting several applications from the top down as they move through the migration process is risky and flies in the face of the Agile principle to “limit work in progress.”
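The Strangler pattern mentioned above can be sketched as a routing facade. This is a minimal, hypothetical illustration (the handlers and paths are invented): each request goes either to the legacy application or to a piece that has already been cut over to a cloud-native service, so the whole keeps running throughout the migration.

```python
class StranglerRouter:
    """Facade in front of a legacy app: routes each request to the
    legacy handler unless that piece has been migrated to a
    cloud-native service."""
    def __init__(self, legacy_handler):
        self.legacy = legacy_handler
        self.migrated = {}  # path prefix -> cloud-native handler

    def migrate(self, prefix, handler):
        # Cut one piece of the application over to a cloud-native service.
        self.migrated[prefix] = handler

    def handle(self, path):
        for prefix, handler in self.migrated.items():
            if path.startswith(prefix):
                return handler(path)   # refactored piece
        return self.legacy(path)       # everything else stays legacy

router = StranglerRouter(lambda p: "legacy:" + p)
router.migrate("/billing", lambda p: "cloud:" + p)
assert router.handle("/billing/invoice") == "cloud:/billing/invoice"
assert router.handle("/orders/42") == "legacy:/orders/42"
```

As more prefixes are migrated, the legacy handler serves less and less traffic until it can be retired, which is exactly the incremental, keep-it-running approach the pattern is named for.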

This piece-by-piece strategy also speeds up the overall refactoring project and reduces risk. Because the original on-prem application has been recreated in the cloud, working versions of it can be shared with Agile teams carrying out short sprints. Dividing the whole application into smaller pieces also reduces the danger of the entire project failing and honors the Agile values of “working software” and “responding to change.”

Developers can gradually build automation into the application even if they aren’t using cloud-native technologies. Alternatively, applications can easily be re-hosted (check out Microsoft’s “The 5 Rs of Rationalization” article for more details) without majorly altering their original on-prem structures.

Move another application through the “cloud factory” line 

Once most of the application exists in cloud-native formats while maintaining a dedicated connection back to on-prem IT, it’s reached the end of this process. Think of it as a factory where various applications move through the assembly line at different speeds. When one application leaves the factory, other applications on the target list are just beginning. The more you work with cloud-native offerings and products, the more the “factory floor” can speed up, as solutions to the problems found earlier in the transformation process can quickly be applied without needing excessive R&D, re-work, and trial and error. 

Success through agility and security 

This plan lets businesses take advantage of cloud benefits and aligns with many Agile software development best practices. The approach speeds up dev/test, engineering and QA by letting disparate teams work simultaneously, while lowering risk by splitting work into small, manageable pieces instead of rearchitecting or re-platforming mid-migration. Not only is it the safest approach to migrating essential on-prem applications, but it’s also the most likely to succeed. 

Matthew Romero, Technical Product Evangelist, Skytap

Matthew Romero is the Technical Product Evangelist at Skytap, a cloud service to run IBM Power and x86 workloads natively in the public cloud. Matthew has extensive expertise supporting and creating technical content for cloud technologies, Microsoft Azure in particular. He spent nine years at 3Sharp and Indigo Slate managing corporate IT services and building technical demos, and before that spent four years at Microsoft as a program and lab manager in the Server and Tools Business unit.