
Three steps to a successful API initiative


The Web has matured well beyond a digital destination for cat videos, recipes, and diatribes. It has become a powerful business enabler, connecting organisations around the world through electronic highways of extensible software made possible by a simple three-letter acronym: API (application programming interface). Leveraging standardised approaches for authentication, data requests, and data packaging, APIs give businesses a way to share information and connect disparate systems. Sometimes, APIs can even generate new sources of revenue. In a 2015 article from the Harvard Business Review titled "The Strategic Value of APIs," the authors claim that Salesforce generates 50 per cent of its revenues from API access, Expedia 90 per cent, and eBay 60 per cent. Although many people may not know it, APIs have become the vascular system of digital enterprise transactions, pumping data from one system to another in a constant flow of 1s and 0s.

But not all APIs are created equal. In many cases, businesses or software developers enable API access as an afterthought, assuming it is no different to serving website traffic. They are both transported over HTTP, after all, so how hard can it be?

This casual approach, which neglects architecture, security, robustness, and the like, can create real problems for connected systems and, ultimately, for the users and customers that depend upon API access. What happens when data output from an API is malformed? What happens when API calls and functionality are poorly documented? And what happens when there's a breach because one of the API's dozens of available calls was transmitted in the clear? All of this can lead to system breakdown: data doesn't flow, revenue generation comes to a halt and, even worse, the potential for intrusion grows exponentially. This situation becomes even more problematic when there is a cost associated with using the API. In these cases, the API often has specific SLAs which, when violated, can result in lost revenue. It's hard to guarantee high levels of API operational performance, such as specific rate limits or transactional round-trip times, when the design behind the API is poorly conceived. And that design isn't just about the API itself; it covers all the underlying microservices and systems, such as a database, that the API interacts with. If the limitations of these other systems aren't accounted for, even the best-designed and best-programmed API can fail to perform to expectations.

Necessary steps

So how can you ensure that your API doesn't fall victim to lazy design? Recognise first that API traffic is different to traditional web traffic. Every API request can be a read or a write. API clients use different HTTP methods and treat URIs differently. Each client API call is independent, so API traffic doesn't benefit from established techniques for accelerating web traffic, such as HTTP keepalives, persistent SSL, or authentication sessions. Understanding and respecting these differences will help you appreciate the distinct effects that APIs can have on your existing application environment and infrastructure. Once you understand the inherent difference between APIs and the rest of your applications (even though they may be written in the same language), you need to follow three steps to ensure that your API initiative will be successful.
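One practical consequence of the read/write distinction is that decisions a web stack makes automatically, such as whether a response can be cached or a failed request retried, must be made per request for an API. A minimal sketch (the function and method sets here are illustrative, not from any particular gateway):

```python
# Classifying API requests before making caching/retry decisions.
# Safe (read) methods can be cached and retried; writes can be neither by default.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}
WRITE_METHODS = {"POST", "PUT", "PATCH", "DELETE"}

def is_cacheable(method: str, headers: dict) -> bool:
    """Only safe reads without per-client credentials can come from a shared cache."""
    return method.upper() in SAFE_METHODS and "Authorization" not in headers

def is_retry_safe(method: str) -> bool:
    """Reads can be retried blindly; retrying a write risks duplicate side effects."""
    return method.upper() in SAFE_METHODS
```

A browser fetching a page is almost always issuing cacheable GETs; an API client interleaves reads and writes, each carrying its own credentials, which is why the shared-cache and keepalive optimisations of web traffic rarely apply.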

The first step is assessing impact. Before you write any code, your design documents need to assess how the API will interact with related systems and software. For example, if the API carries out a computational task after retrieving data from multiple data sources, what impact will the projected concurrent user requests have on those systems? Can the API be exploited in a computationally expensive way? An airline flight-routing API, for instance, might be taken down by an attacker searching for the cheapest route from A to Z with 24 stopovers in between. This is especially important if those systems, like a database, are also used by other applications. And what about security? Will API authentication need to be managed by existing security systems (like an LDAP server) or require something new to be deployed (like an OAuth server)? All of these impacts should be clearly documented, along with failover contingencies. With everything documented, your programmers will have a clear picture of how to architect the API, and how to use load balancers, caches, API gateways and application code most effectively to meet each requirement.
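The flight-routing example above suggests one concrete mitigation: validate the computational cost of a request before it reaches the expensive back-end systems. This is a hedged sketch; the limit value and parameter names are hypothetical, not from any real airline API.

```python
MAX_STOPOVERS = 3  # assumed business limit chosen during the impact assessment

def validate_route_query(origin: str, destination: str, stopovers: int) -> None:
    """Reject route searches whose worst-case search space explodes.

    Raising here keeps a single abusive query from monopolising the routing
    engine and the databases it shares with other applications.
    """
    if not origin or not destination:
        raise ValueError("origin and destination are required")
    if stopovers < 0 or stopovers > MAX_STOPOVERS:
        raise ValueError(
            f"{stopovers} stopovers exceeds the supported limit of {MAX_STOPOVERS}"
        )
```

The point is not the specific limit but that it exists and is documented: the impact assessment is where such bounds get decided, before an attacker decides them for you.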

The second step is to ascertain the organisational impact of building and launching the API. Which team will architect it? Program it? Deploy, maintain, and support it? For example, your organisation may have a corporate policy requiring a multi-tenant load balancer that handles all traffic, but your API team needs control over the rules that govern its software so it can make configuration changes as needed. There may be considerable delay or disruption to API operation if the team has to wait for a centralised function to make rule changes. Deciding to deploy APIs may therefore require a new infrastructure approach. Rather than a single application gateway, your organisation may require tiers of devices: a first tier that is very simple, with just basic rule sets, and a second tier, to which API development teams have direct access, with more sophisticated policies for their applications.
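The tiered split can be sketched in a few lines: tier 1 holds only coarse, centrally managed rules, while tier 2 holds per-team policies that each API team edits itself. The path prefixes and policy values below are illustrative assumptions.

```python
def tier1_admit(method: str, path: str) -> bool:
    """Central tier: coarse rules only, changed rarely and by one central team."""
    return method in {"GET", "POST", "PUT", "DELETE"} and path.startswith("/api/")

# Tier 2: policy table owned and edited directly by each API team,
# so a rate-limit change doesn't wait on the central infrastructure queue.
TEAM_POLICIES = {
    "/api/flights/": {"rate_limit_per_min": 60},
    "/api/bookings/": {"rate_limit_per_min": 10},
}

def tier2_policy(path: str) -> dict:
    for prefix, policy in TEAM_POLICIES.items():
        if path.startswith(prefix):
            return policy
    return {"rate_limit_per_min": 5}  # conservative default for unknown paths
```

The design win is organisational, not technical: the blast radius of a tier 2 change is one team's API, so teams can iterate without a centralised change-control cycle.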

The final step is developing a clear long-term maintenance plan. In many cases, when APIs are deployed reactively (such as when a partner or customer demands programmatic access to back-end systems), there isn't any sense of how to care for them long-term. Supporting systems, such as databases and operating environments, may change or be upgraded over time. But will such updates break API functionality? Or, perhaps even more importantly, will new system versions provide opportunities for improved API performance or features? The only way to realise those new benefits is for a team to take ownership of the API's lifecycle. When that happens, teams can plan to assess each API on a regular basis. It's rather like taking a car in for service: a look under the hood, a check of tyre pressure, an analysis of all the electronic systems, all to ensure that the car continues running smoothly. Owning the API lifecycle also means taking deprecation into consideration. Sure, no one wants to talk about programming something and, in the same breath, about how to turn it off forever, but it may be necessary to shelve an API at some point. Understanding what's required to do that gracefully will be critical to migrating API users to some other means of accomplishing what they need.
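One concrete deprecation practice is to advertise the shutdown in responses well before it happens, so clients can migrate on their own schedule. The Sunset header is standardised in RFC 8594; the successor path in the Link header below is a hypothetical example.

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def deprecation_headers(sunset_at: datetime) -> dict:
    """Response headers announcing that an API version is on its way out."""
    return {
        "Deprecation": "true",
        # RFC 8594 Sunset header: the date after which the API stops responding.
        "Sunset": format_datetime(sunset_at.astimezone(timezone.utc), usegmt=True),
        # Hypothetical pointer to the replacement version for migrating clients.
        "Link": '</v2/flights>; rel="successor-version"',
    }
```

Emitting these headers from the gateway, months ahead of the sunset date, turns a hard cutover into the graceful migration the maintenance plan should aim for.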

Even if you follow these three steps, though, it's important to understand that there is no one-size-fits-all approach to API development. Some organisations will develop their APIs with a centralised team, while others will take a distributed approach, with individual application teams responsible for their own APIs. Some infrastructure teams may want a single, monolithic application gateway to manage API traffic, while others may deploy micro-gateways, each responsible for an individual, compartmentalised API. However you decide to approach your APIs, just remember that proper design, understanding the organisational impacts, and ownership of the lifecycle will ensure that your APIs are not only operational but also flexible enough for future iterations.

Owen Garrett, product and go-to-market strategy, NGINX

Owen Garrett leads the product and go-to-market strategy for NGINX’s web acceleration and delivery technologies. Owen uses his technical and management expertise to optimise NGINX products and customer satisfaction.