Serverless computing for enterprises: Start small and think about data

(Image credit: Shutterstock/Carlos Amarillo)

Serverless computing is an incredibly popular topic right now, but it can get confusing pretty quickly. Even the term “serverless” is loaded with subtext. Some people are quick to point out that there are still servers running underneath that code. Others note that serverless can mean more than just the types of function-based computing popularised by services like AWS Lambda.

Both are true, but for the sake of simplicity, I’m going to use functions and serverless interchangeably. However, the confusion stems from far more than just a nebulous label.

While some large enterprises are still just finalising their cloud strategies and adopting IaaS, they’ve also spent the past few years being clobbered over the head with messages about containers and Kubernetes (aka CaaS). Before that, and still today, developers were excited about platform as a service, or PaaS, for launching applications without worrying about servers. More recently, the talk about what’s next has shifted to serverless and function-based services (aka FaaS). 

In my mind, there are two driving factors behind the rush to FaaS. One is simply that some developers will always be attracted to the newest, shiniest thing—especially if it makes it easier to build applications. Although it can be tempting to pass this off as “resume-driven development,” it’s also true that companies need developers who are on the lookout for better, faster methods of shipping applications. That’s their job: if you’re not supportive of innovation and exploration, you’re settling on the status quo as good enough. 

For IT leaders, the goal must be helping developers focus their excitement over serverless on the right applications. Which brings us to the other driving factor behind its popularity: The many legitimate and exciting use cases for serverless computing, ranging from chatbots to data transformation.

Putting serverless in its place

I’ll get to those use cases shortly, but first I want to briefly lay out how I think about FaaS in relation to some of the terms above. While it’s natural to compare FaaS with CaaS because they both hit around the same time and are intrinsically tied to the microservices movement, FaaS actually has more in common with PaaS, which has been around for about a decade. This is because, for all its game-changing benefits, Kubernetes—the foundation of most CaaS platforms—targets operators as well as developers. That means users still need to think about things like operating systems, networking and resource allocation.

PaaS and FaaS, on the other hand, are designed to let developers focus on their code. However, the two have some big differences that make each one better suited to certain types of workloads: PaaS is built for long-running applications and services, while FaaS targets short-lived, event-driven functions that can be scaled down to zero between invocations.

Another thing to consider is that FaaS is not a public-cloud or managed-service-only paradigm, despite a bevy of blog posts suggesting otherwise. In fact, there are plenty of open source projects underway to ensure that companies can do serverless development on the cloud of their choice, or inside their own data centres. The most recent example is Knative, which Google announced in July; others include OpenWhisk, OpenFaaS and Kubeless.

When to use FaaS

Today, FaaS makes the most sense for stateless, short-lived, event-driven workloads that can effectively be scaled down to zero when they’re not running. We’re seeing from customers that these don’t have to be net-new or greenfield applications, but instead can be tasks currently handled by mainframes, ETL pipelines or other systems. Often, these are data-centric workloads.

For example, a bank might currently run certain end-of-day processes on a mainframe, or maintain dozens of web applications for handling certain customer interactions. In a more modern architecture, that bank can use Kafka to ingest data in real time and distribute it to various systems that process the data on their own timelines. Among them might be a serverless function that kicks off whenever a certain event happens (e.g., to send a text message to a customer whenever an online transaction posts), or a longer-term function that runs at the top of every hour (e.g., transforming scanned documents into JSON and feeding them into a document database).
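To make the event-triggered pattern concrete, here is a minimal sketch of the “text a customer when a transaction posts” function. The handler signature, event field names and the `notify` stub are all assumptions for illustration; a real deployment would receive the event from the FaaS platform’s Kafka trigger and call an actual SMS gateway.

```python
# Hypothetical event-driven function: notify a customer when an
# online transaction posts. Field names here are invented, not any
# real bank's schema.

def notify(phone: str, message: str) -> None:
    # Stand-in for an SMS gateway call; here it just prints.
    print(f"SMS to {phone}: {message}")

def handle_transaction_posted(event: dict) -> str:
    """Entry point the FaaS platform would invoke once per event."""
    amount = event["amount"]
    phone = event["customer_phone"]
    message = f"A transaction of £{amount:.2f} has posted to your account."
    notify(phone, message)
    return message

# Example invocation with a sample event payload:
# handle_transaction_posted({"amount": 42.5, "customer_phone": "+44 7700 900000"})
```

The point of the pattern is that this code holds no state and exits as soon as the event is handled, so the platform can scale it to zero between transactions.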

An example of a greenfield serverless application we’ve seen interest in is customer service chatbots. In this scenario, the application might be architected so that the stateless chat-agent UI and natural language processing (NLP) algorithms run as functions when a visitor clicks a “Help” button. Meanwhile, log data is sent to Splunk, and the entire conversation is shipped to HDFS to later help train the NLP model that powers the customer-facing NLP function.

Another up-and-coming application is the Internet of Things, where a company wants to predict whether particular pieces of machinery will fail in the near future. A developer could write a function to process sensor data on a set schedule (say, every 2 hours) and send alerts if certain patterns are present. Alternatively, a function could be programmed to send an alert immediately when events correlated with imminent failure are detected. Just like in the chatbot example, though, the functions are part of a broader strategy that also includes large-scale data storage, machine learning and more on the backend.
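The scheduled variant of that IoT function might boil down to a pattern check like the one below. The vibration threshold and window size are invented values for illustration; in practice the pattern would come from the machine-learning models trained on the backend.

```python
# Hypothetical scheduled function: scan recent sensor readings for a
# run of high-vibration values correlated with imminent failure.
# Threshold and window are illustrative assumptions, not real tuning.

FAILURE_VIBRATION_THRESHOLD = 8.0  # arbitrary units
WINDOW = 3  # consecutive readings that must exceed the threshold

def check_sensor_readings(readings: list[float]) -> bool:
    """Return True if an alert should be raised for this batch."""
    consecutive_over = 0
    for value in readings:
        consecutive_over = consecutive_over + 1 if value > FAILURE_VIBRATION_THRESHOLD else 0
        if consecutive_over >= WINDOW:
            return True
    return False
```

A scheduler (or an event trigger, in the immediate-alert variant) would invoke this every couple of hours against the latest batch of readings and fire a notification when it returns `True`.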

Do your homework and run your numbers

There are also a few gotchas to look out for when considering going down the FaaS route today. Economics (e.g., the cost of long-running functions versus operating a container or cloud server) and complexity (e.g., the challenge of monitoring interactions among large collections of functions) are probably the most notable, just because of how new the space is.

Another big caveat with FaaS today is that it should be reserved for tasks that can effectively be scaled down to zero when they’re not running. A container instance still needs to spin up to run each function, so FaaS isn’t ideal for workloads that require sub-second latency.

However, like all things involving cloud computing and open source, these are moving targets, and what’s true today likely won’t be true (or entirely true) 6 months from now. Vendors will continue to tweak pricing and offerings, and the community is actively working on solutions for issues like FaaS monitoring, networking and latency.

The best path to serverless is a good plan

As with any sort of effort to introduce new technologies, the key to success with serverless is probably putting some real thought into why you want to adopt it and where it makes sense to do so. Maybe identify some low-hanging fruit that can provide easy wins and relatively risk-free learning experiences. And make sure your people and processes can handle yet another evolution in the application lifecycle: Throttling FaaS behind legacy IT practices and mindsets kind of defeats the purpose.

Like it or not, the future arrives faster with every turn of the wheel. Agile development, cloud, containers, microservices, Kubernetes, artificial intelligence: they all rose quickly to become corporate imperatives not just for transforming IT, but for transforming how companies do business. Serverless computing and FaaS are next in line, so the time to start strategising about them is now.

Derrick Harris, Principal Product Marketing Manager, Pivotal

Derrick Harris is a product marketing manager at Pivotal. He publishes the Architecht newsletter and podcast and previously covered cloud computing, big data, AI and more at Gigaom.