Networks are increasingly becoming software defined. IDC estimates that the software-defined networking (SDN) market grew from a $406 million industry in 2013 to more than $6.6 billion in 2017, with further growth predicted in the coming years.
It’s easy to see why, given the benefits such a change can bring: network programmability, network segmentation, improved traffic handling and improved security, to name just a few. SDNs are becoming a requirement for delivering the agility and flexibility needed to keep up with the increasing use of public cloud computing.
With great flexibility comes great complexity
The complexity of the network stack, though, is higher than ever. A growing number of protocols makes for a more complex architecture, which in turn severely impacts the operational efficiency of networks. Combine all this with the upcoming 5G rollouts, which will put even more pressure on network infrastructure, and software failures start to become a real threat.
Security is of the utmost concern to most of the networking industry, and a software failure can lead directly to a breach, with potentially disastrous outcomes. Ensuring higher software quality must therefore be treated as a serious priority by network solution providers.
If networking equipment manufacturers don’t detect and address software defects in the QA phase, before production, the result can be loss of connectivity, service degradation and recurrent network downtime. This disrupts both customers and the engineering teams themselves, who end up spending weeks, if not months, on patch management to stabilise their systems.
When networking companies ship equipment containing critical bugs, remediating them once they are discovered can be almost impossible. Engineers back at base often lack the data they need to reproduce the issue, because it is usually held on-premises by clients. An inability to cure a product defect could result in the failure of a product line, temporary or permanent withdrawal of a product, lost customers, reputational damage, and product re-engineering expenses, any of which could have a material impact on revenue, margins and net income.
As traditional networking equipment manufacturers transition to become software vendors in addition to appliance vendors, their software is going to have to run in all kinds of environments - resulting in the emergence of many more corner cases. These cases tend to produce a growing backlog of undiagnosed failing tests, some of which never get addressed in development because their sporadic nature makes them all but impossible to reproduce.
Telecommunications providers have enough issues on their plate; the last thing they need is switches and routers that are unstable and unreliable. And critically, they are no longer short of alternatives: SDN vendors and white box manufacturers are threatening to steal business away from traditional equipment manufacturers.
Software, fit for the new age
The networking industry needs to rethink how it handles software quality and may have to consider innovative solutions to debugging. Reinventing networking by developing software the same way it’s been developed over the last ten years will slow things down. And speed is of the essence in this market.
Record & replay technology might just provide the answer they’re looking for. It can go a long way towards helping engineering teams diagnose and fix serious software defects before they wreak havoc in production - thereby reducing the risk of severe software failures turning into serious customer complaints.
Record & replay technology gives full visibility into a program’s execution. It captures bugs ‘in the act’ through its recording functionality, turning sporadic failures into 100% reproducible events; recordings can then be used to step backwards and forwards through the code to quickly diagnose the root cause of the problem.
Traditional debugging techniques (e.g. printf, logging, core dump analysis) are general purpose and provide limited information, while static and dynamic analysis tools go deep but can only catch specific classes of bugs. Record & replay technology can capture failure instances across the whole spectrum, plugging the gaps where other debugging methodologies fall short. By sharing recordings, software engineers can also analyse an identical copy of the original failure while collaborating on a fix.
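To make the workflow concrete, here is a sketch of a typical record & replay debugging session. The commands shown are those of the open-source rr debugger and gdb, used purely as an illustration; vendor tools differ in their exact commands, and the binary and symbol names here are hypothetical:

```shell
# Record a run of a flaky test. The recorder captures the program's
# inputs (system calls, signals, thread scheduling), so once the
# failure is caught it replays deterministically every time.
rr record ./run-flaky-test

# Replay the exact same execution under the debugger. The program
# behaves identically on every replay - same memory, same timing.
rr replay

# Inside the replay you can travel in both directions through time:
(gdb) watch -l packet_buffer[idx]   # watchpoint on the corrupted data
(gdb) continue                      # run forward to the symptom
(gdb) reverse-continue              # run backwards to the moment
                                    # the bad value was written
```

Running backwards from the symptom to its cause is what turns a weeks-long hunt for a sporadic failure into a direct walk to the offending line of code.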
SAP, a leader in enterprise application software, have already introduced record & replay technology into their testing suite for SAP HANA. In doing so they’re able to keep control of their complex code base, capturing and fixing non-deterministic bugs before they make it into production. A new approach to debugging has enabled them to remain a market leader for in-memory database management systems.
Letting software engineers directly inspect the behaviour of software during intermittently failing tests, and debug while it is still under development, maximises the value of existing test suites. That is gold dust. This new approach to debugging could help leading network solution providers safeguard their customer relationships and protect their switches and routers business, whilst reinforcing their dominant position in a rapidly changing market.
No-one ever said changing your entire business model was going to be easy. But with cutting-edge software execution record & replay technology, network solution vendors can be better equipped than ever before to tackle disruption in the marketplace and deliver the world class services and solutions that their reputations were built upon.
Dr Greg Law, Co-Founder and CEO/CTO of Undo
Image Credit: Jarmoluk / Pixabay