Designing for performance is essential, but...

Developers face dozens of “impedance mismatches” every day. The most fundamental is perhaps the reduction of the non-sequential (non-procedural program design) to the sequential (execution). Most software isn't written in machine code, because most truly step-by-step descriptions of interesting systems are unusably inefficient to write and reason about by hand.

This is the magic of Church-Turing: the dimensional reduction from human-intelligible symbols to machine-usable symbols effectively loses nothing, because any computable function can be computed by a bunch of steps executed in linear order.

But the conceptual mismatch puts the burden of optimal structure mapping squarely on the brain of the developer. In my head, I'm figuring out what InterfaceConsumerInterceptor is like and what it can do. But javac and the JRE, or csc and .NET, are doing...well, who knows what. The operating system adds another layer of ‘well, that's not obvious’, and so does the system architecture, and the NIC, and then every packet-forwarder along the way, until what seemed like truly beautiful code when you wrote it has become...

The epicycles don't end at these higher levels. Modern processors execute plenty of instructions out of order, even on a single core. Techniques as simple in principle (or is that disconcerting?) as branch prediction further fuzzify, in practice, the developer's sense of what the computer will actually do with their code. Low-level caches, pipelining, and other optimisations also make assumptions about probable execution dependencies, making even machine code less than fully deterministic. And then there's the abstraction of the virtual machine...
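To make that concrete, here is a minimal sketch (in Java, since that is the runtime most of this guide's data concerns) of how branch prediction alone can change what "the same code" costs. The loop body is identical in both passes; only the ordering of the data changes. The exact numbers will vary with CPU, JIT warmup, and compiler heroics, so treat this as an illustration rather than a benchmark.

import java.util.Arrays;
import java.util.Random;

// Illustration of branch prediction effects: the same branch-heavy loop,
// run over unsorted and then sorted data. Results depend on hardware and JIT.
public class BranchPredictionDemo {
    public static void main(String[] args) {
        int[] data = new Random(42).ints(10_000_000, 0, 256).toArray();

        int[] unsorted = data.clone();
        int[] sorted = data.clone();
        Arrays.sort(sorted);

        // On typical hardware the sorted pass is noticeably faster, because the
        // (value >= 128) branch becomes highly predictable once the data is ordered.
        System.out.println("unsorted: " + time(unsorted) + " ms");
        System.out.println("sorted:   " + time(sorted) + " ms");
    }

    private static long time(int[] values) {
        long start = System.nanoTime();
        long sum = 0;
        for (int v : values) {
            if (v >= 128) {   // this is the branch the predictor gambles on
                sum += v;
            }
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (sum == 42) System.out.println();  // keep the JIT from eliminating the loop
        return elapsedMs;
    }
}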

Guide to Performance and Monitoring

In short: of course designing for performance is absolutely essential; but runtime is so crazy a variable that we can reasonably blame too-early optimisation for a non-negligible chunk of lousy UX and unmaintainable code.

So our latest Guide to Performance and Monitoring covers both the static and dynamic, the verifiable and the unknowable sides of building and maintaining performant applications.

As Tony Hoare notoriously observed (and Donald Knuth famously repeated), "Premature optimisation is the root of all evil"; that is, the benefits of absolutely maximal optimisation are usually much lower than the increased cost of maintenance and debugging that results from the brittleness such optimisation introduces. On the other hand, the natural tendency of OOP to prioritise form over performance can generate a codebase that is highly readable but partitioned in ways that make performance-oriented refactoring extremely difficult.

To help you steer between the Scylla of overeager optimisation and the Charybdis of runtime-indifferent code structure, we've split this publication between ways to design performant systems and ways to monitor performance in the real world. To shed light on how developers are approaching application performance, and on what performance problems they encounter (and where, and how often), we summarise the most important takeaways of our research below.

- Application code is most likely to cause performance problems frequently; database performance problems are the most challenging to fix:

DATA: Frequent performance issues appear most commonly in application code (43 per cent of respondents) and second most commonly in databases (27 per cent). Challenging performance issues are most likely to appear in the database (51 per cent) and second most likely in application code (47 per cent).

IMPLICATIONS: Enterprise application performance is most likely to suffer from higher-level, relatively shallow suboptimalities. Deep understanding of system architecture, network topology, and even pure algorithm design is not required to address most performance issues.

RECOMMENDATIONS: Optimise application code first and databases second (all other things being equal). On a first optimisation pass, assume that performance problems can be addressed without investing in superior infrastructure; the sketch below shows the sort of shallow, code-level fix this usually means.
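By way of illustration, here is a minimal sketch of one such shallow application-code suboptimality: quadratic string concatenation in a loop. The ReportBuilder class and its method names are invented for the example; the point is that the fix touches only application code, not infrastructure.

import java.util.List;

// Hypothetical example of a shallow, code-level performance fix.
public class ReportBuilder {

    // Before: each += copies the whole accumulated string, so the loop is O(n^2).
    static String buildReportSlow(List<String> lines) {
        String report = "";
        for (String line : lines) {
            report += line + "\n";
        }
        return report;
    }

    // After: StringBuilder appends in amortised O(1), so the loop is O(n).
    static String buildReportFast(List<String> lines) {
        StringBuilder report = new StringBuilder();
        for (String line : lines) {
            report.append(line).append('\n');
        }
        return report.toString();
    }
}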

- Parallelisation is regularly built into program design by a large minority (but still a minority) of enterprise developers:

DATA: 43 per cent of developers regularly design programs for parallel execution. Java 8 Parallel Streams are often used (18 per cent), slightly more frequently than ForkJoin (16 per cent). ExecutorService was most popular by far, with 47 per cent using it often. Race conditions and thread locks are encountered monthly by roughly one fifth of developers (21 per cent and 19 per cent respectively). Of major parallel programming models, only multithreading is often used by more than 30 per cent of developers (81 per cent).

IMPLICATIONS: Enterprise developers do not manage parallelisation aggressively. Simple thread pool management (ExecutorService) is much more commonly used for concurrency than upfront work splitting (ForkJoin), which suggests that optimisation for multicore processors can be improved.

RECOMMENDATIONS: More deliberately model task and data parallelisation, and consider hardware threading more explicitly (and without relying excessively on synchronisation wrappers) when designing for concurrency; a minimal sketch of explicit work splitting follows.
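The sketch below shows one way of deliberately modelling data parallelism, assuming a CPU-bound, splittable workload (summing squares is a stand-in for any such computation): the work is split into chunks sized to the available cores and submitted to an ExecutorService, with the equivalent Java 8 parallel stream shown for comparison.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;
import java.util.stream.IntStream;

// Sketch: explicit, hardware-sized work splitting vs. a parallel stream.
public class DeliberateParallelism {
    public static void main(String[] args) throws Exception {
        int[] data = IntStream.range(0, 10_000_000).toArray();

        // Option 1: explicit splitting sized to the hardware, via ExecutorService.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        try {
            int chunk = (data.length + cores - 1) / cores;
            List<Future<Long>> parts = new ArrayList<>();
            for (int i = 0; i < data.length; i += chunk) {
                final int from = i;
                final int to = Math.min(i + chunk, data.length);
                parts.add(pool.submit(() -> {
                    long sum = 0;
                    for (int j = from; j < to; j++) sum += (long) data[j] * data[j];
                    return sum;
                }));
            }
            long total = 0;
            for (Future<Long> part : parts) total += part.get();
            System.out.println("executor total: " + total);
        } finally {
            pool.shutdown();
        }

        // Option 2: the same computation as a Java 8 parallel stream, which
        // delegates the splitting to the common ForkJoin pool.
        long total = IntStream.range(0, data.length)
                              .parallel()
                              .mapToLong(i -> (long) data[i] * data[i])
                              .sum();
        System.out.println("parallel stream total: " + total);
    }
}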

- Performance is still a second-stage design consideration, but not by much:

DATA: 56 per cent of developers build application functionality first, then worry about performance.

IMPLICATIONS: Extremely premature optimisation is generally recognised as poor design, but performance considerations are serious enough that almost half of developers do think about performance while building functionality.

RECOMMENDATIONS: Distinguish architectural from code-level performance optimisations. Set clear performance targets (preferably cascading from UX tolerance levels) and meet them, as in the sketch below. Optimise for user value, not for the sake of optimisation.
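Here is a minimal sketch of turning a performance target into something enforceable. The searchProducts operation and the 200 ms budget are assumptions for illustration; in practice the budget would cascade from a UX tolerance level, and the check would live in a test or CI gate and average over many samples rather than a single call.

import java.time.Duration;

// Sketch: a performance budget expressed in code rather than folklore.
public class SearchLatencyCheck {

    static final Duration BUDGET = Duration.ofMillis(200);  // illustrative UX-derived budget

    public static void main(String[] args) {
        long start = System.nanoTime();
        searchProducts("wireless headphones");   // hypothetical operation under test
        Duration elapsed = Duration.ofNanos(System.nanoTime() - start);

        if (elapsed.compareTo(BUDGET) > 0) {
            throw new AssertionError(
                "search took " + elapsed.toMillis() + " ms, budget is " + BUDGET.toMillis() + " ms");
        }
        System.out.println("within budget: " + elapsed.toMillis() + " ms");
    }

    // Stand-in for the real call; replace with the code path under test.
    static void searchProducts(String query) {
        try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}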

- Manual firefighting, lack of actionable insights, and heterogeneous IT environments are the top three monitoring challenges:

DATA: 58 per cent of respondents count firefighting and manual processes among their top three performance management challenges; 49 per cent count a lack of actionable insights to proactively solve issues; 47 per cent count the rising cost and complexity of managing heterogeneous IT environments.

IMPLICATIONS: Performance management is far from a solved problem. Monitoring tools and response methods are not delivering insights and solutions effectively, whether because they are not used adequately or because they need feature refinement.

RECOMMENDATIONS: Measure problem location, frequency, and cost, and compare these with the cost (both monetary and performance overhead) of an additional management layer. Consider tuning existing monitoring systems or adopting new ones (e.g. something more proactive than logs); a minimal in-process sketch follows.
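As one illustration of "more proactive than logs", the plain-JDK sketch below records call latencies as they happen and flags threshold breaches immediately, rather than waiting for someone to grep a log file afterwards. The 500 ms threshold and the operation names are assumptions; a production system would more likely export these measurements through a metrics library or an APM agent.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Supplier;

// Sketch: record latencies in-process and react to breaches as they occur.
public class LatencyMonitor {

    private static final long SLOW_THRESHOLD_MS = 500;  // illustrative threshold
    private final ConcurrentHashMap<String, LongAdder> slowCalls = new ConcurrentHashMap<>();

    public <T> T timed(String operation, Supplier<T> body) {
        long start = System.nanoTime();
        try {
            return body.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMs > SLOW_THRESHOLD_MS) {
                slowCalls.computeIfAbsent(operation, k -> new LongAdder()).increment();
                // Hook point: raise an alert, increment a gauge, open a ticket, etc.
                System.err.println("SLOW " + operation + ": " + elapsedMs + " ms");
            }
        }
    }

    public long slowCount(String operation) {
        LongAdder adder = slowCalls.get(operation);
        return adder == null ? 0 : adder.sum();
    }
}

A caller would wrap the suspect path, e.g. monitor.timed("db.findOrders", () -> repository.findOrders(userId)), where repository and findOrders stand in for real application code.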

For more key findings and tons of data on app performance and monitoring, please download the full research guide here.

John Esposito