From the cloud to the edge: Taming application supply chain complexity


To keep pace with the rapidly evolving customer demands of a digital world, IT teams must help their organisations remain several steps ahead (and within budget). For example, to improve agility and increase the speed of service delivery and innovation, they are moving an increasing number of workloads to the public cloud. In the spirit of increasing efficiency and accessibility, companies are also abstracting and decoupling applications from Infrastructure-as-a-Service (IaaS) models and re-architecting them for Platform-as-a-Service (PaaS) models. Finally, enterprise architects are busy rebuilding monolithic applications using microservices and containers.

While a nimbler IT landscape bodes well for application development and deployment, it introduces significant operational challenges. For instance, when applications are abstracted from IaaS models, they will only perform well if the infrastructure resources they require (CPU, memory, I/O and network) are available at the exact moment they are needed. Even more challenging is the scale at which this must happen. Instead of dealing with thousands of workloads, roughly one per virtual machine (VM), the new IT reality involves supporting hundreds of thousands of containers. That reality is beyond human scale to manage.

Why pinpointing proper abstraction matters

People can’t solve the IT scalability problem because they can’t see the forest for the trees. Rather than attempting to manage each environment through endless low-level, environment-specific rules, it is better to abstract environments into generic concepts and behaviours that are simple and scale easily. Abstraction hides the messy details of managing environments while exposing the critical factors needed to control them and keep them healthy.

Proper abstraction can provide four key benefits:

1. Scalability: By collecting and analysing only the required information, management platforms can scale to large environments without taking on the cost and complexity of gathering and storing big data.

2. Simplicity: Abstraction simplifies the management of heterogeneous environments by allowing users to manage them without needing to understand all of the underlying complexity. For example, users can manage Azure in the same way they manage AWS. Without abstraction, every distinct environmental resource needs its own rules to be defined and maintained, and the analytics built on those rules grow correspondingly complex. With abstraction, analytics only need to deal with one generic resource (such as disk I/O), rather than many incarnations of different device models.

3. Portability: After purchasing a new platform, most users immediately adopt a myriad of platform-specific, proprietary tools, making it much more difficult to migrate to another platform later. Abstraction alleviates that lock-in and allows users to move from VMware to OpenStack, for instance, with far less friction.

4. Universality: Proper abstraction provides a common basis for comparing very different resources (CPU, memory, IOPS, network and storage latency, response time, TPS, heap size, connection pool size and so on) and for making the trade-offs needed to assure application performance, as the sketch after this list illustrates.
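
To make universality concrete, here is a minimal Python sketch of the idea. The resource names, capacities and the bottleneck rule are illustrative assumptions, not any vendor's implementation; the point is only that once every resource is normalised to a common utilisation scale, very different things can be compared and traded off directly.

    def utilisation(used, capacity):
        # Fraction of capacity consumed, regardless of the resource's native unit.
        return used / capacity

    # Very different resources, each measured in its own unit (cores, GiB, IOPS, ms, connections)...
    workload = {
        "cpu":       utilisation(used=6.4,  capacity=8),      # cores
        "memory":    utilisation(used=28,   capacity=32),     # GiB
        "disk_iops": utilisation(used=4200, capacity=10000),  # IOPS
        "latency":   utilisation(used=35,   capacity=100),    # ms against a latency budget
        "conn_pool": utilisation(used=180,  capacity=200),    # connections
    }

    # ...yet once abstracted to a common scale, one rule finds the constraining resource,
    # which is what a trade-off decision (resize, move, scale out) should target.
    bottleneck = max(workload, key=workload.get)
    print(f"Most constrained resource: {bottleneck} at {workload[bottleneck]:.0%}")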

Streamlining application supply chains

Abstraction is particularly helpful when re-architecting applications to use microservices or edge computing. With a monolithic application, it is fairly simple to guarantee performance: determine the correct size of the VM, then decide on which host and storage to place it. With microservices, however, where an application is a collection of containers running in multiple VMs or on bare metal, many questions need continuous answers (a simple placement sketch follows the list), such as:

  • How many containers are required to satisfy the application demand?
  • What size should the containers be?
  • How many containers can fit in a node?
  • Should a container scale vertically or horizontally?
  • Should a node scale vertically or horizontally?
  • Where should a node be placed?
  • How close should containers be placed to each other?
  • How close should nodes be placed to each other?
  • How much underlying infrastructure is required?
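
One way to see how a common abstraction makes these questions tractable is a small Python sketch. The container sizes, node capacity and first-fit heuristic below are illustrative assumptions, not a prescribed algorithm; they simply show how, once containers and nodes are described by the same abstract resources (CPU and memory here), a placement loop can answer how many containers fit on a node and how much infrastructure is required.

    # Illustrative first-fit packing over an abstract (cpu, memory) resource model.
    # Container sizes and node capacity are made-up numbers, not recommendations.
    containers = [{"cpu": 0.5, "mem": 1.0}] * 12 + [{"cpu": 2.0, "mem": 4.0}] * 3   # cores, GiB
    node_capacity = {"cpu": 8.0, "mem": 32.0}

    nodes = []  # each entry tracks how much capacity remains on a provisioned node

    def fits(node, c):
        return node["cpu"] >= c["cpu"] and node["mem"] >= c["mem"]

    for c in containers:
        # place on the first node with room; otherwise provision a new node
        target = next((n for n in nodes if fits(n, c)), None)
        if target is None:
            target = dict(node_capacity)
            nodes.append(target)
        target["cpu"] -= c["cpu"]
        target["mem"] -= c["mem"]

    print(f"{len(containers)} containers require {len(nodes)} nodes of this size")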

Unfortunately, answering questions like these continuously and simultaneously is virtually impossible without proper abstraction. Now consider edge computing, which involves processing many terabytes of data generated by millions of IoT sensors, in real time. This raises even more difficult questions that require definitive answers (a small data-locality sketch follows the list), such as:

  • How many processes are required to process the data?
  • How close should the processes be to the data?
  • How close to each other should processes be placed?
  • How many processes run on a node?
  • How much data can be stored on a node?
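
A shared abstraction helps here too. The Python sketch below is hypothetical (node capacities, data layout and the scoring rule are assumptions for illustration); it shows the shape of a data-locality decision: place each processing task on the node that already holds most of its input data, subject to the node's remaining capacity.

    # Illustrative data-locality placement for edge processing.
    # Node capacities, data layout and the scoring rule are assumptions, not a real system.
    nodes = {
        "edge-a": {"free_cpu": 4.0, "local_gb": {"sensor-1": 120, "sensor-2": 10}},
        "edge-b": {"free_cpu": 2.0, "local_gb": {"sensor-2": 200}},
    }

    tasks = [
        {"name": "aggregate-1", "cpu": 1.0, "reads": "sensor-1"},
        {"name": "aggregate-2", "cpu": 1.5, "reads": "sensor-2"},
    ]

    for task in tasks:
        # candidates with enough spare capacity, ranked by how much input data is already local
        candidates = [n for n, v in nodes.items() if v["free_cpu"] >= task["cpu"]]
        best = max(candidates, key=lambda n: nodes[n]["local_gb"].get(task["reads"], 0))
        nodes[best]["free_cpu"] -= task["cpu"]
        print(f"{task['name']} placed on {best}")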

A unified, autonomic platform with common abstraction and generic analytics can enable scalability when organisations are transforming monolithic applications to microservices and/or edge computing. By providing semantically integrated control of all technology silos and the required management functions, autonomic technology helps IT teams understand who consumes what, from whom, and how every dimension of the environment affects the quality of service of running workloads. Unified autonomic platforms can also self-manage and maintain entire environments in a desired state, enabling greater IT efficiency.

Ushering in a new generation of IT: Powered by autonomic technology

While many IT organisations have modernised their infrastructure assets, many have also stumbled when trying to deliver agility, elasticity and scalability. To effectively monitor, control and optimise today’s complex environments, the new generation of IT must re-orient its thinking and strategy about how it should operate.

More specifically, this re-orientation requires adopting a workload automation platform that relies on abstraction to organise otherwise limitless detail and that automates both decisions and control across the IT estate. By applying intelligent analysis to the knowledge that abstraction captures, and making continuous, real-time decisions, IT can assure application performance while lowering cost and maintaining compliance with business policies.

IT faces a defining moment in driving and enabling digital business agendas, while confronting exponential complexity that is beyond human scale to manage. Pinpointing “proper” abstraction plays a central role today and in the future. IT teams that embrace autonomic technology, powered by real-time analytics, can control any type of workload, on any infrastructure, at any time – continuously. The outcome? IT can seamlessly and securely manage new workloads at scale, while also planning for future infrastructure changes and trends...in the cloud, and beyond.

Shmuel Kliger, Founder and President, Turbonomic