It’s not often that software testing makes the news, but when it does, it’s usually because something has gone spectacularly wrong. At 8:05 AM on January 13, 2018, the Hawaii Emergency Management Agency sent out a false missile alert. While much is still unclear regarding exactly what happened, most reports agree that an internal test was treated as a real event. According to The Washington Post, part of the problem may have been caused by a human operator who chose “Missile alert” instead of “Test missile alert” from an application menu.
This event revealed a number of design flaws within the emergency alert software. On top of a bad user interface design, the system had no fail-safe mechanisms or human overrides in place. So, once the alert was sent, it couldn’t be retracted. As a result, thousands of people panicked because they believed that a North Korean ICBM, possibly armed with a nuclear warhead, was headed their way. Nearly 40 minutes elapsed between the time the message was sent and the time the initial cause was located. During that time, no message was sent to the public to indicate that this was a false alarm.
To be clear, this event was the result of a system being tested, not software being tested. However, had the system been subjected to rigorous software testing, the flaws that caused the incident could have been found and fixed. Using current technology, much of this testing process could have been handled by automated testing software.
Any organization that relies on computer software must perform the appropriate testing. This post describes the role of test automation in the software development process, and why everyone in your organization should actively participate in testing. I’ll provide an introduction to software testing and its importance, the rationale behind automating testing processes, best practices, and recommendations on how to proceed.
Who Is Responsible for Software Testing?
In addition to the standard questions of what to test, when to test, and how long to test is the question of who is responsible for testing the software. Does this job belong to a specific person, role, group, or department? Shouldn’t everybody with a stake in a product, across the organization, be involved in ensuring quality? The answer to all of these questions is “yes”—the role of assuring software quality is the responsibility of developers, testers, managers, stakeholders, and even users.
Much of today’s development and testing practice has been highly influenced by Agile development. As codified in the first of the Agile Manifesto’s twelve principles, Agile development prioritizes satisfying customers through early and continuous delivery of valuable software. In Agile projects, quality and testing are part of a wider conversation between creators and consumers. Broadly speaking, the responsibility for testing is shared between a project team and its stakeholders. In addition, tests are written as early as possible, and they run continuously before, during, and after the development process.
However, recent changes to the ways in which we access and use applications have caused a move in the opposite direction, concentrating the responsibility of some testing tasks to a much smaller group of individuals. Due to the widespread availability of broadband internet and the rapid adoption of smart mobile devices (phone/tablets), some computing tasks that once required execution by powerful back-end servers are now performed in a desktop browser or in a native mobile app. As a result, a large part of the responsibility for testing client applications has shifted to the developers who build front-end web apps or client mobile apps.
Developers also perform more of the operational tasks that were once handled primarily by corporate IT departments. Traditionally, IT and operations teams were also involved in the testing process. As these separate roles merge under DevOps, operational testing tasks (e.g., performance, load, and security testing) are increasingly performed by developers.
How Software Is Tested
There are many types of software tests that can be used. We review four of the most important ones below.
Unit tests are written by developers before and during the development process, and they serve a variety of functions. They are used to validate stakeholder-defined acceptance criteria, and they can also be used to ensure that changes a developer makes to existing code don’t create new problems. Many developers will test their code by running it after they have written or modified it. Developers who use Test-Driven Development (TDD) write unit tests that exercise individual code blocks (functions) and modules.
Functional testing checks that a unit of code, an application feature, or a specific user action functions as expected. Functional tests are based on user stories, each of which includes a persona, a task, and a reason for performing the task. The functional test validates the assumptions of the story and checks that the expected and actual outcomes are the same. The test also verifies that the developers implemented the feature or scenario according to the requirements expressed in the user story.
Regression tests are used to locate issues after a major change has been implemented. They are designed to test changes that result from writing code for a new feature or fixing existing defects. As the name suggests, they are designed to see if the software has regressed as a result of these changes. Regression tests examine the intended and unintended consequences of changes, such as introducing new defects or reintroducing old ones.
Visual testing involves testing how applications actually look. Most visual testing systems are based on capturing and comparing images. By using a tool or by writing a test script, the tester captures an initial set of screens and their component elements. Whenever the application’s user interface changes, the tests are rerun and new images are captured. After each test run, the latest images are compared with the previous test run. If differences are detected, the test is marked as failed.
When Should Automated Testing Be Used?
Testing that involves uncertainty, active participation, or interpretation is best performed by people. Other tasks—especially those that involve monitoring system behavior and performance—are better handled by computers.
Test automation is often used to perform repetitive, routine tasks. Automated testing makes economic sense, and it relieves testers from having to do these mundane tasks, allowing them to spend their time on more productive activities. Automation advocates also claim that test automation can lead to faster and more frequent releases, and that it lets organizations build smaller, more productive development teams.
However, automated testing doesn’t solve all software quality-related issues. If it isn’t done right, it can make a bad situation worse. To use test automation effectively, you need to know which parts of your development and testing process will benefit from it the most. You need to do the necessary research, find the right tools for the job, and plan your deployment carefully.
You also need to train developers to use the tools properly and provide guidelines and standards to ensure they write effective, relevant, and high-quality tests. Some tasks simply can’t be done by machines, so you have to recognize when humans are necessary to get the job done. This is especially true in cases when the outcome of a test can’t be measured in terms of success or failure.
Test Automation Tools
Today, many automated testing tools are free and open-source, and there is a wide variety of them for writing, running, and managing tests.
Automated tests are written by people and executed by machines. A number of test-specific languages have been created to describe and organize testing tasks. For example, Jasmine is used to write specifications—human-readable blocks of text that translate a user story’s acceptance criteria into machine-executable tests. Each test specification includes an assertion that tests the feature and sets a condition to be passed or failed. In most cases, the specifications are written in a single file that tests a specific module or piece of application functionality.
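A Jasmine-style specification looks like the sketch below. The feature under test (`isValidEmail`) is hypothetical, and the three small functions at the top are a stand-in for the real Jasmine runner so the sketch runs on plain Node; with Jasmine installed, `describe`, `it`, and `expect` are provided for you:

```javascript
// Jasmine-style specification sketch. The shim below stands in for the real
// Jasmine runner so this file runs on plain Node; it is NOT the Jasmine API.
const describe = (name, fn) => { console.log(name); fn(); };
const it = (name, fn) => { fn(); console.log('  ok: ' + name); };
const expect = (actual) => ({
  toBe: (expected) => {
    if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
  },
});

// Hypothetical code under test.
const isValidEmail = (s) => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(s);

// The spec translates acceptance criteria into executable assertions.
describe('signup form', () => {
  it('accepts a well-formed email address', () => {
    expect(isValidEmail('ada@example.com')).toBe(true);
  });
  it('rejects an address with no domain', () => {
    expect(isValidEmail('ada@')).toBe(false);
  });
});
```

The nested `describe`/`it` structure is what makes specifications readable to non-programmers: the test output reads like a checklist of the behavior the stakeholders asked for.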
Once you’ve written a test, you need a tool to execute it. This is handled by test runners: tools that read the test specifications for the targeted module, run the assertions, collect the results, and display them to the user. Test runners also handle common setup and cleanup tasks that are performed before or after a test run. Popular test runners include Jest, Karma, and Protractor.
When you run tests against a web browser, you need a tool to interact with it. Tools like Puppeteer and Chromeless provide a headless browser that lets your test perform the same interactions as a human user. Puppeteer also allows for easy automation of browser tasks, and it captures a timeline trace that is useful for diagnosing performance issues. Some frameworks offer support for visual testing and can generate screenshots and PDFs of web pages.
Conclusion: Test Automation Is for Everyone
Test automation may seem complicated, but at this point, it’s actually within the reach of most organizations. There will be a learning curve, but since most of the tools are free and open-source, the types of automation covered in this post are available for anyone to use.
Before you automate your testing processes, you need to understand the different types of testing and determine which of your processes do and don’t need to be automated. Once you’ve done that, you must train your development teams, testers, and support staff, and choose the tool that best fits your organization’s needs. If you follow these steps, you’ll be able to reap the full benefits of test automation.
Gil Tayar, Senior Architect at Applitools
Image Credit: Startup Stock Photos / Pixabay