DevOps done right: Five tips for implementing database infrastructures

DevOps couldn’t be hotter. To cope with modern customer demands, applications need to be developed, tested and put into production swiftly. Industry experts have been preaching DevOps as the route to faster, more reliable software development, and Gartner expects the approach to go mainstream by the end of 2016.

But while DevOps can deliver improvements throughout the development lifecycle, it is imperative to avoid some common pitfalls. A well-tuned database is core to any DevOps strategy, because slow data means slow results - exactly what the methodology is trying to eradicate. If your organisation is looking to embrace DevOps, here are five tips on how to optimise your databases to meet your organisational goals.

Stay in tune

To make the most of this collaborative approach to development and operations, and to reap the rewards of a more agile working structure, it’s important that the changes you implement take into account the real needs of your organisation. Tuning your database instances to your workload and resources is a good example. That way, DevOps runs just right for your operations and delivers its full benefits.

Standard MySQL and InnoDB optimisations can be applied to buffers, caches, threads, queries and indexing, and benchmarks can be set up to determine whether the changes are beneficial. When a standalone instance can no longer keep up, consider replication or a multi-master cluster setup.
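
As a minimal sketch of how such a check might be scripted - assuming a reachable MySQL instance and the pymysql client library, with the host, credentials and the 99 per cent hit-ratio threshold purely illustrative - the following compares the InnoDB buffer pool against the observed workload:

    import pymysql

    # Connection details are illustrative assumptions.
    conn = pymysql.connect(host="127.0.0.1", user="dba", password="secret")
    with conn.cursor() as cur:
        # Current buffer pool size, in bytes.
        cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size'")
        pool_size = int(cur.fetchone()[1])

        # How often logical reads are served from memory rather than disk.
        cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'")
        status = {name: int(value) for name, value in cur.fetchall()}
        hit_ratio = 1 - (status["Innodb_buffer_pool_reads"]
                         / status["Innodb_buffer_pool_read_requests"])

        print(f"buffer pool: {pool_size / 2**30:.1f} GiB, "
              f"hit ratio: {hit_ratio:.4f}")
        if hit_ratio < 0.99:  # assumed threshold; tune to your workload
            print("Consider growing innodb_buffer_pool_size.")
    conn.close()

Run it before and after a tuning change as a crude benchmark of whether the change actually helped.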

Recover quickly

Downtime is detrimental to any organisation, but there are ways to minimise both its likelihood and its impact. Getting yourself out of danger is all about knowing your risks and being prepared. How does database downtime affect the business? High availability may be the answer - but at what cost?

Once your high-availability setup is in place, make sure to simulate failure scenarios - from servers running out of disk space and crashing, to network or power failures. Do your systems recover automatically, or do you have to apply manual procedures? Document your findings for future reference so you’ll be ready for the real thing.
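
Such a drill can even be scripted. Here is a minimal sketch, assuming SSH access to a test replica; the hostname and the kill-based crash simulation are illustrative assumptions, and this belongs nowhere near production:

    import socket
    import subprocess
    import time

    NODE = "db-replica-1.example.com"  # hypothetical node under test

    def mysql_reachable(host, port=3306):
        """True if something accepts TCP connections on the MySQL port."""
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            return False

    # Simulate a crash by killing mysqld on the node (drills only!).
    subprocess.run(["ssh", NODE, "sudo", "killall", "-9", "mysqld"],
                   check=True)

    start = time.time()
    while not mysql_reachable(NODE):
        time.sleep(1)
    print(f"Node back after {time.time() - start:.0f}s - document this figure.")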

Not all downtime is unplanned. Rehearse planned events that could disrupt your operations - software and hardware upgrades, configuration changes or schema changes - at least once before you go live to minimise unexpected complications.

Automate an eagle-eye view

Monitoring databases, identifying performance problems and understanding the symptoms of insufficient or unstable resources are all important tasks for a database administrator (DBA). With the right kit, however, a DevOps professional can handle them just as well, freeing up valuable DBA time and saving operational costs for your business. With the right technology you can automate the monitoring of host, network and memory performance, processes, MySQL variables, queries, uptime, flow control, backups and log files.
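
As a bare-bones illustration of what that automation boils down to, here is a sketch of a polling loop over MySQL’s status counters, again using the pymysql library; the thresholds, host and credentials are assumptions to adapt to your environment:

    import time
    import pymysql

    THRESHOLDS = {"Threads_connected": 500, "Slow_queries": 100}  # assumed

    while True:
        # Reconnect each cycle so a dropped connection cannot wedge the loop.
        conn = pymysql.connect(host="127.0.0.1", user="monitor",
                               password="secret")
        with conn.cursor() as cur:
            cur.execute("SHOW GLOBAL STATUS")
            status = dict(cur.fetchall())
        conn.close()

        for counter, limit in THRESHOLDS.items():
            if int(status[counter]) > limit:
                # In practice, push this to your alerting system instead.
                print(f"ALERT: {counter}={status[counter]} exceeds {limit}")
        print(f"uptime={status['Uptime']}s "
              f"threads={status['Threads_connected']}")
        time.sleep(60)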

Monitoring data is also useful for capacity planning. Get to know your peak hours so you can schedule maintenance windows when they cost the least, and predict when you will need to scale your database cluster.

Never lose a byte

With a secure backup system in place, you will never have to worry about losing information should your system experience a glitch. Using logical and binary backups, binary logs and storage snapshots, you can back up every kind of data, including your database’s schema structure.

Consider performing backups on the servers with the least load; some users dedicate a node to ad-hoc reporting and backups. Enable binary logging if you want to do point-in-time recovery. Then test the backups by restoring them to a test system and measuring the recovery times. Do this at least once per calendar quarter, and you’ll be able to rest assured your bytes can survive an emergency.
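
A minimal sketch of that backup-and-verify cycle, driving the stock mysqldump and mysql clients from Python’s subprocess module - the hostnames are illustrative, and credentials are assumed to come from a client option file:

    import subprocess
    import time

    DUMP = "/backups/nightly.sql"

    # Logical backup from a lightly loaded node; --single-transaction takes
    # a consistent InnoDB snapshot without blocking writers.
    subprocess.run(["mysqldump", "--host=db-reporting-1",
                    "--single-transaction", "--routines", "--events",
                    "--all-databases", f"--result-file={DUMP}"],
                   check=True)

    # Verify by restoring to a scratch server and timing the recovery.
    start = time.time()
    with open(DUMP) as dump:
        subprocess.run(["mysql", "--host=db-test-1"], stdin=dump, check=True)
    print(f"Restore completed in {time.time() - start:.0f}s")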

In terms of storage requirements, it is recommended you allocate enough space to keep daily backups in the local datacentre for four weeks, as well as monthly backups for one year. Encrypt your backups and archive them off site, to a remote datacentre or a cloud provider.

Automate what you manage

DevOps is all about automation, and this applies to your databases too if you want them to work more efficiently and free up valuable time. It all starts with the system design. Distributed cluster configurations tend to be complex, but once designed they can be duplicated with minimal variation, and automation can then be applied to provisioning, upgrading, patching and scaling. For instance, changing a non-dynamic variable in a database cluster means repeating the same tasks on every node, performing rolling restarts and verifying that the new configuration has loaded correctly.
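
As a sketch of what such automation might look like - assuming SSH access and a systemd-managed MySQL service on each node, with hostnames, paths and the unit name purely illustrative - a rolling configuration change reduces to one loop:

    import socket
    import subprocess
    import time

    NODES = ["db1.example.com", "db2.example.com", "db3.example.com"]

    def wait_for_mysql(host, port=3306):
        """Block until the node accepts connections on the MySQL port."""
        while True:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return
            except OSError:
                time.sleep(2)

    for node in NODES:
        # Ship the updated my.cnf (scp here; config management works too).
        subprocess.run(["scp", "my.cnf", f"{node}:/etc/mysql/my.cnf"],
                       check=True)
        # Restart this node only (the unit name varies by distribution),
        # then verify before touching the next one.
        subprocess.run(["ssh", node, "sudo", "systemctl", "restart", "mysql"],
                       check=True)
        wait_for_mysql(node)
        print(f"{node} restarted with the new configuration.")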

As an infrastructure grows, you might need to provision new clusters for upgrade testing, benchmarking and troubleshooting. Automating this allows DBAs to focus on more critical tasks such as performance tuning, query design, data modelling or providing architectural advice to application developers, further elevating your DevOps performance.

Vinay Joosery, CEO, Severalnines