Many organisations believe that simply choosing the right testing tools and downloading them for free automatically puts a competent test regime in place – we have, unfortunately, seen this mistake first hand on client visits.
There are many hidden pitfalls to this simplistic product-led approach, most importantly around establishing the correct processes for the tools themselves. As efficiency requirements increase, and the growing interest in DevOps (with even GitLab launching an Auto-DevOps function in the latest 11.0 version) brings teams closer together, there is a risk that the fundamentals of competent testing become diluted. The resulting pitfalls can prove costly for the unwary.
1. Get the testing process right, then hunt the tools
One of the big temptations in OSS testing is to assemble an all-star cast of OSS testing tools and then design your testing regime to suit, which is of course the wrong way round. The real time and effort should be focused on designing a robust yet practical testing regime, then translating this into a wishlist for toolsets to deliver. It is essential to remain clear about the pros and cons of both Open Source tools and proprietary offerings, and to conduct a robust due diligence process to select the best fit, rather than letting broad ideology influence you. Always look at tools with a critical eye – what benefits will they actually deliver to you and your specific use case?
2. Don’t misuse the Open Source Software community
There’s often a misconception that OSS is automatically the cost-effective choice because it is ‘free’. Although a well-selected OSS testing tool or platform may well be the best choice, there are strings attached. The most important question is whether your enterprise already has the in-house skills to use and operate the tool, and if not, whether investing in training or recruitment is an option. We regularly encounter clients who have either not fully thought through the implications of adopting a particular tool, or who have incurred significant training costs that were not initially budgeted for. This is especially true of tools that require very specific technical skills to use and operate; these skills can be costly to acquire and are sometimes non-transferable – it is important to check the technical requirements first!
There are other potential costs to OSS too: although you may get access to millions of lines of peer-reviewed code for no fee, tasks such as integration, regression testing, and development of feature enhancements still have to be performed. In addition, long-term use of OSS products without contributing development effort back to the project is frowned upon, so it is important to consider where you can contribute most effectively – during development, at service launch, or after launch.
3. Great OSS tools do not do the work for you...
Delivering efficient and effective testing relies on many components, and the most important ones are – unfortunately for us all – not technology based. For example, having well-written test cases is completely vital, and requires time, effort and expertise. Taking the time to fully understand the requirements, and asking questions to clarify any misleading or ambiguous statements in the documentation, is time well spent. Equally, using an agreed template or format – including a descriptive test case ID, a description of the purpose or aim of the test, the test data required, prerequisites/pre-conditions, the test steps to be performed, and the expected results – is a real boon to rapid, repeatable testing. It is also key to take a company-wide approach where possible, standardising documentation, test cases and tools throughout. This standardisation will remove many roadblocks in the future, and also help to prevent creating silos of data that cannot be easily interrogated down the track due to interoperability challenges.
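As an illustration, the template fields described above can be captured in a structured, machine-checkable form. This is a minimal sketch only – the field names and the example case are our own invention, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One test case following an agreed, company-wide template."""
    case_id: str                 # descriptive test case ID
    purpose: str                 # description / aim of the test
    test_data: list[str]         # test data required
    preconditions: list[str]     # prerequisites / pre-conditions
    steps: list[str]             # the test steps to be performed
    expected_results: list[str]  # one expected result per step

    def is_complete(self) -> bool:
        # A repeatable test case needs an ID, a purpose, at least one
        # step, and an expected result for every step.
        return bool(self.case_id and self.purpose and self.steps
                    and len(self.expected_results) == len(self.steps))

# Hypothetical example case for illustration only
login_case = TestCase(
    case_id="TC-LOGIN-001",
    purpose="Verify a registered user can log in with valid credentials",
    test_data=["user: demo@example.com", "password: <valid password>"],
    preconditions=["User account exists and is active"],
    steps=["Open the login page", "Enter valid credentials", "Submit the form"],
    expected_results=["Login form is shown", "Fields accept input",
                      "User lands on the dashboard"],
)
print(login_case.is_complete())  # True
```

A completeness check like this can be run automatically over a whole suite, catching half-written cases before they slow a test cycle down.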
4. Don’t forget to design and build a brilliant Test Environment
A well-built Test Environment is critical to testing, as it is relied upon to simulate the conditions that any system-under-test (SUT) will experience.
However, if it isn’t compatible with the systems it must work with, it has fallen at the first hurdle. Whether conducting SIT, OAT or UAT, each test environment needs to be understood in terms of integrations and outputs. Designing a fully-featured environment takes a little time and effort but will repay the work in spades.
It is also worth remembering that not every variable can necessarily be tested in a Test Environment, so managing expectations and being very explicit about what can be tested – and what cannot – is essential.
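One simple way to be explicit about coverage is to record it alongside the environment definition itself, so expectations are set before testing starts. A minimal sketch – the variable names and coverage levels here are hypothetical:

```python
# Hypothetical coverage record for a test environment: which variables
# the environment can exercise, and at what fidelity.
ENVIRONMENT_COVERAGE = {
    "payment_gateway":   "stubbed",     # simulated by a mock service
    "reporting_feed":    "full",        # real integration available
    "live_sms_provider": "untestable",  # only verifiable in production OAT
}

def can_test(variable: str) -> bool:
    """True if the environment simulates this variable at all."""
    return ENVIRONMENT_COVERAGE.get(variable, "untestable") != "untestable"

# Make the gaps explicit up front, rather than discovering them mid-cycle.
untestable = sorted(v for v, cov in ENVIRONMENT_COVERAGE.items()
                    if cov == "untestable")
print(untestable)  # ['live_sms_provider']
```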
5. Leaving it late is always a bad idea!
We often encounter client situations where testing activities have been left very late in the day. Sometimes UAT is used as a catch-all to compensate for shallow Unit and System Testing, for example.
Reliance on late-phase testing brings a host of issues that can have serious consequences, including additional costs, additional change control, the original objectives of the system under test being compromised, and potentially the risk of the release being tested in a production environment – a classic route to even deeper difficulties. The rule of x10 is certainly as applicable today as it was when it was coined: an ambiguous statement in documentation, if missed at that point, rapidly snowballs in cost, becoming exponentially more expensive to rectify at each later phase.
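The rule of x10 can be sketched with a quick calculation; the base cost and the phase names here are illustrative assumptions only:

```python
# Illustrative sketch of the "rule of x10": a defect costs roughly ten
# times more to fix at each later phase it survives into. Base cost and
# phases are assumed values, not measured data.
BASE_COST = 100  # cost units to fix at the requirements stage
PHASES = ["requirements", "design", "coding", "system test", "production"]

cost_by_phase = {phase: BASE_COST * 10 ** i for i, phase in enumerate(PHASES)}

for phase, cost in cost_by_phase.items():
    print(f"{phase}: {cost}")
```

On these assumptions, a 100-unit fix at requirements review becomes a 1,000,000-unit fix once the defect reaches production – which is why shallow early phases are a false economy.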
Whether OSS-based or not, testing should start as early in the process as possible – ideally right at the point of requirements definition and review – and it is important that each test phase fulfils its specific role in the overall testing lifecycle. Sticking to early testing plans will avoid many pitfalls, as well as many late nights for all concerned.
Iain Finlayson, Senior Technical Test Engineer at Edge Testing Solutions