Why website capacity should be an IT, not a Procurement, decision


When websites go down, the blame often lands at the door of the IT department. Yet the cause of the crash may be that Marketing hasn’t properly informed IT of a promotional campaign, one that will likely result in a huge spike in web traffic. The angry looks will still be directed at the ‘techies’ – and that hardly seems fair.

As much as it is technically possible to prevent websites from crashing in almost all scenarios, the reality is that things are never that simple. There’s a constant risk vs cost analysis taking place around an ecommerce site’s capacity limits – and quite often these conversations are taking place outside the IT department, within the finance or procurement teams.

There are, of course, times when a business may deem it economically prudent to allow a website to crash, but on most occasions a crash would be preventable – especially if IT were better informed of what Marketing were up to and had ultimate decision-making responsibility over a website’s capacity.

The power to keep the lights on

Let’s make a comparison with the standard office environment that we can all relate to. No-one would want to work in an office which had a prepaid electricity meter that constantly ran out. It would result in rooms regularly going dark as lights went out, and your computer going off when you were in the middle of a job. How valued would you feel as an employee if this was the case?

Yet, this is what some ecommerce companies subject their customers to. Okay, their websites might not always crash entirely, but they are often allowed to reach the brink. And this is almost as bad, as response times slow to the point where the site is barely usable.

A carefully managed scalable cloud infrastructure is the obvious solution for ecommerce sites. But many companies continue to restrict that scalability, justifying it as a cost-control decision. If senior managers in Finance and Procurement viewed their cloud infrastructure as a vital utility – something that should scale in line with the demands placed upon it – this wouldn’t be the case.

No-one is advocating that websites maintain optimal response times 100 per cent of the time. If you scaled up every time there was a small blip in traffic, it would prove extremely costly. But if IT can anticipate when spikes are likely, they can adjust the auto-scaling thresholds to compensate and ensure that the customer experience isn’t impacted by additional demands on the infrastructure. This can, of course, be dialled back down when it’s no longer needed.
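By way of illustration, here is a minimal sketch of that kind of temporary adjustment, assuming an AWS Auto Scaling group managed with boto3. The group name, instance counts and region are hypothetical; the point is simply that the scaling envelope can be lifted ahead of a known campaign and dialled back afterwards.

```python
# Minimal sketch: temporarily raising the floor of a (hypothetical) AWS
# Auto Scaling group ahead of a known marketing campaign, then restoring it.
# Group name, sizes and region are illustrative assumptions, not real values.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

GROUP = "ecommerce-web-asg"  # hypothetical Auto Scaling group name


def raise_capacity_for_campaign(min_size: int = 6, max_size: int = 20) -> None:
    """Lift the minimum and maximum instance counts before a traffic spike."""
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=GROUP,
        MinSize=min_size,
        MaxSize=max_size,
    )


def dial_back_down(min_size: int = 2, max_size: int = 8) -> None:
    """Return to the everyday scaling envelope once the campaign is over."""
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=GROUP,
        MinSize=min_size,
        MaxSize=max_size,
    )


if __name__ == "__main__":
    raise_capacity_for_campaign()  # run ahead of the promotion going live
```

The raised floor means capacity is already in place when the campaign lands, rather than waiting for auto-scaling to react after response times have already suffered.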

Alternative ways to protect websites

If IT teams are going to gain control over this, however, a shift in mindset is needed within the Finance and Procurement departments, with responsibility handed over to the IT team. For that to happen, there needs to be trust that IT will only dial up capacity when needed and won’t rack up costs unnecessarily.

We are not fully there yet. There are signs that things are moving in this direction, but while we wait for that shift, IT will need to look at alternative ways to protect websites when they are hit with unexpected traffic. There are several ways this can be done.

Lighten the load - Deploying a content distribution network (CDN) is an alternative to auto-scaling. A CDN takes the strain off the core infrastructure by caching content and serving requests locally, meaning workloads can be dealt with at a regional hub closest to the end user without troubling the website’s infrastructure at all. In one stroke, the potential strain placed on the website during a spike in traffic is significantly reduced.
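As a rough illustration of what letting the CDN take the strain means in practice, the sketch below marks an origin response as cacheable by shared caches. The framework (Flask), route and header values are assumptions for the example; any CDN that respects standard Cache-Control headers could then serve the response from an edge location instead of hitting origin.

```python
# Minimal sketch: marking a response as cacheable so a CDN can serve it
# from the edge. The Flask app, route and cache lifetimes are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/products/<int:product_id>")
def product(product_id: int):
    response = jsonify({"id": product_id, "name": "example"})
    # Allow shared caches (the CDN) to hold this for 5 minutes, and to keep
    # serving a stale copy briefly while revalidating with the origin.
    response.headers["Cache-Control"] = (
        "public, s-maxage=300, stale-while-revalidate=60"
    )
    return response


if __name__ == "__main__":
    app.run()
```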

Timely feature releases – In an era of DevOps, the release of new apps and website features is constant. By using Application Performance Management (APM) tools alongside a DevOps strategy, IT professionals can continuously monitor the impact that new applications are having on response times. If IT teams have a good idea of when a peak period is expected, they can schedule releases so they don’t conflict. This will ensure all features are working and that no new apps create unpredictability at busy times.
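One simple way to encode that scheduling discipline is a release gate that refuses to deploy inside agreed peak windows. The sketch below is hypothetical: the windows, and the deploy hook it wraps, would come from your own release pipeline, and in practice the decision would also draw on APM response-time data rather than the clock alone.

```python
# Minimal sketch: a release gate that blocks deployments during configured
# peak-trading windows. Windows and the deploy() hook are hypothetical.
from datetime import datetime, time
from typing import Callable, Optional

# Hypothetical peak windows (local time) agreed with Marketing.
PEAK_WINDOWS = [
    (time(11, 0), time(14, 0)),  # lunchtime trading peak
    (time(19, 0), time(22, 0)),  # evening campaign peak
]


def in_peak_window(now: Optional[datetime] = None) -> bool:
    """Return True if the current time falls inside a known peak window."""
    current = (now or datetime.now()).time()
    return any(start <= current <= end for start, end in PEAK_WINDOWS)


def release(deploy: Callable[[], None]) -> None:
    """Run the supplied deploy callable only outside peak windows."""
    if in_peak_window():
        print("Release blocked: inside a peak-trading window, try again later.")
        return
    deploy()


if __name__ == "__main__":
    release(lambda: print("Deploying new feature..."))
```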

Security – Distributed denial of service (DDoS) attacks, an ever-present threat, can easily overwhelm an infrastructure with requests. If we were to use auto-scaling as a tactic to meet the demands that such an attack could place on the infrastructure, it could end up proving to be an expensive business – especially if the attack persists over a significant period.

This is when cost analysis may well be needed and fully justified as, for some businesses, it may not be worth staying online during an attack of this nature. It’s a decision that needs to be made in full knowledge of the facts and in conjunction with IT. However, a CDN can once again prove a cost-effective solution here – allowing businesses to avoid having to ramp up capacity themselves. In effect, the cloud service provider would scale up the CDN on your behalf.

The ideal scenario – with the possible exception of a DDoS attack – would be for IT to assume responsibility for managing the capacity of the infrastructure, especially as this is the department most people will look to, and point the finger at, as soon as there’s a problem. When this is the case, ensuring that a clear communication channel with the marketing department is in place will be crucial to understanding levels of traffic. With that in place, there should be no reason why Finance and Procurement shouldn’t trust IT to ‘keep the lights on’ when it comes to meeting the ever-changing demands being placed on an ecommerce website.

Rob Greenwood, technical director, Steamhaus
Image Credit: Atm2003 / Shutterstock