Efficient Exploratory Testing for a Large Enterprise Software Project
Iterators LLC has worked on a variety of software testing projects ranging from small projects that require a few days to enterprise projects that require several months. In a series of case studies, we will delve into the following topics:
- Manual Testing
- Exploratory Testing vs. Test Cases vs. Agile
- Test Cases in TestRail with Modular Organization
- Automated Testing, Selenium, ID tags, Locators
- Enterprise Projects
When working with clients on testing projects, Iterators LLC follows guidance from two key sources:
- ISTQB Certification: https://www.istqb.org/
- Elisabeth Hendrickson's Explore It! (available on Amazon)
The following case study summarizes key points to consider when evaluating manual and automated testing techniques for your project.
Manual Testing
It is well known that manual testing is faster to get started with, quicker at identifying bugs, and an important part of an agile development process. Perhaps less well known are the best practices for implementing these techniques. The following outlines the progression of this particular project, with more details given below:
- The existing test cases provided were out of date;
- Exploratory testing procedures were implemented; and
- Test cases were updated in a test management system (TestRail) and made more modular.
A later case study will examine the automated testing phase of this project.
Existing Test Cases
A common goal for using test cases is to identify the requirements that are covered by a given set of tests. In theory, a team could list all the software requirements and then provide acceptance criteria for each of them. In practice, this can be challenging: as requirements and implementation details change, the test cases themselves require extensive maintenance.
An example is the following test case, which was stored in Excel or Google Sheets:
| Step | Instruction | Expected Result |
|------|-------------|-----------------|
| 1 | Log in to the enrollment module as a Dept. Admin. | The Enrollment screen appears. |
| 2 | Click Manage Course Offering. | The Course Offering menu screen appears. |
| 3 | Click the pencil icon next to the term in the header. | The Select Term and Set of Courses (SOC) modal appears. |
| 4 | Start typing a term and select it from the drop-down, then click the SOC box. | The term is added, and only SOCs that you have permission to select (e.g., your department) are shown. |
| 5 | Select one of the SOCs and click the OK button. | The SOC is changed in the Manage Course Offering screen. |
| 6 | Click the Display Course Offerings button. | A list of all Course Offerings for the selected term and SOC is displayed. |
| 7 | From the Action button next to a Course Offering, select View Detail. | The Course Offering View modal appears with the Course Offering details. |
| 8 | Click the Close button. | The modal closes. |
| 9 | Click the Action button next to a Course Offering and select Course Offering Sections. | The Course Offering information, the Format Offerings, and the Sections are listed. |
| 10 | Click the Back button. | The Course Offering screen reappears. |
| 11 | Click the Action button next to a Course Offering and select Registration Groups. | If any exist, a list of registration groups for the Course Offering appears. |
| 12 | Click the Back button. | The Course Offering screen reappears. |
| 13 | Click the Action button next to a Course Offering and select Change Log. | The Course Offering Version modal appears with a list of all changes made to the Course Offering. |
A few challenges were found during the initial phase of this project.
- The initial steps included the login sequence and then navigating to the page to be tested. However, these steps were out of date because those modules had been updated.
- Additionally, these initial steps were repeated in other test cases, so the maintenance effort was compounded.
- The steps for the page under test had similar maintenance issues; they also prescribed a very specific, exact sequence of operations. As we will see below, this level of specificity is useful and necessary when creating automated tests but can be detrimental for manual testing.
In practice, testers spent considerable time decoding the overall intent of each test case, which button to click, and when to evaluate the expected results. As we will discuss later, many of these issues were addressed by improving the test cases to increase their maintainability and utility.
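The same modularity principle that later improved the TestRail test cases also applies once these steps are automated. As a minimal sketch, assuming a Selenium/Python setup with hypothetical URLs and element IDs (the real application's locators would differ), the repeated login and navigation preconditions can be factored into shared helpers so they are maintained in one place:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


def login_as_dept_admin(driver, base_url, username, password):
    """Shared precondition: log in once, reused by every test case.

    The URL path and element IDs are hypothetical placeholders.
    """
    driver.get(f"{base_url}/login")
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login-button").click()


def open_manage_course_offering(driver):
    """Shared navigation: move from the Enrollment screen to Manage
    Course Offering, so individual tests start at the page under test
    instead of repeating these clicks."""
    driver.find_element(By.LINK_TEXT, "Manage Course Offering").click()


if __name__ == "__main__":
    driver = webdriver.Chrome()
    login_as_dept_admin(driver, "https://example.edu/enrollment", "admin", "secret")
    open_manage_course_offering(driver)
    driver.quit()
```

When the login module changes, only `login_as_dept_admin` needs updating, rather than the opening steps of every test case.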
Benefits of Exploratory Testing
Test cases are useful in some circumstances, but other manual testing strategies can also be utilized as long as the goals for the project are clear. For example, specific test cases are useful for final acceptance testing, having precise control of the coverage of testing, or ensuring that regression testing performs a specific sequence of tests.
Exploratory testing is an alternative approach that Iterators has had success with on both large and small projects. Iterators starts with the goals and principles articulated in the ISTQB Foundation Level Software Testing training. In particular, our goal is to find significant bugs and help developers fix them. Exploratory testing has several advantages over creating and executing test cases.
- Test cases tend to focus on the “happy path,” in which users follow the flow intended by the developers, while actual users follow any number of non-happy paths.
- Exploratory testing lets testers use their creativity and judgment to find areas that are likely to contain significant bugs, or new areas that need testing. Diverse testers bring different perspectives to this search, which further improves test coverage.
- We have also found that test cases tend to constrain testers, who focus on executing the exact steps in the test case rather than exploring important aspects of the software.
We use Explore It! by Elisabeth Hendrickson as an excellent reference for the techniques and benefits of this approach. We have found the Explore It! approach is more representative of what actual users will do when they are trying to use the software, because they do not follow the exact sequence expected by developers (or test cases). A classic example is as follows:
Requirement:
A college course can have more than one instructor and these instructors can contribute at different levels that add to 100%, such as 50:50, 60:40, etc.
Test Case Approach:
There are at least 20 test cases that could be written for this requirement. For example, create zero, one, two, or three instructors and try to enter values that do not add up to 100%. Or, enter negative numbers or a value of 0% and verify that the software correctly warns the user when the data are validated.
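To make the combinatorial nature of this requirement concrete, the following is a minimal sketch in Python/pytest of how such cases might be parametrized. The `validate_contributions` function is a hypothetical stand-in for the application's validation rule, not part of the actual system:

```python
import pytest


def validate_contributions(percentages):
    """Hypothetical stand-in for the application's validation rule:
    contributions must be positive and sum to exactly 100%."""
    return bool(percentages) and all(p > 0 for p in percentages) and sum(percentages) == 100


@pytest.mark.parametrize(
    "percentages, expected",
    [
        ([100], True),          # single instructor
        ([50, 50], True),       # two instructors, even split
        ([60, 40], True),       # two instructors, uneven split
        ([60, 30], False),      # does not add up to 100%
        ([110, -10], False),    # negative contribution
        ([50, 50, 10], False),  # three instructors over 100%
        ([0, 100], False),      # zero contribution
        ([], False),            # no instructors
    ],
)
def test_contribution_validation(percentages, expected):
    assert validate_contributions(percentages) == expected
```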
Bug Found with Exploratory Testing:
- Create two instructors and try to enter values that do not add to 100%. It was verified that the dialog could not be exited without valid entries adding to 100%, such as 60% for the first instructor and 40% for the second.
- Go back into the dialog, delete one of the instructors, and leave the dialog. That leaves one instructor with 60% of the course. BUG: This was allowed, while the expected result is that the 60% should be adjusted to 100%, either manually or automatically.
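A bug found this way can then be captured as a regression test so it stays fixed. The sketch below uses a hypothetical, simplified model of a course offering to encode the expected behavior; an actual check would run against the real application:

```python
class CourseOffering:
    """Hypothetical, simplified model of the behavior under test:
    after an instructor is deleted, the remaining contributions must
    again add to 100% before the record is considered valid."""

    def __init__(self, contributions):
        self.contributions = dict(contributions)  # instructor name -> percent

    def delete_instructor(self, name):
        self.contributions.pop(name)

    def is_valid(self):
        return sum(self.contributions.values()) == 100


def test_deleting_instructor_requires_revalidation():
    offering = CourseOffering({"Instructor A": 60, "Instructor B": 40})
    assert offering.is_valid()

    # The exploratory step that exposed the bug: remove one instructor
    # and confirm the record is no longer accepted as-is.
    offering.delete_instructor("Instructor B")
    assert not offering.is_valid()  # the remaining 60% must be adjusted to 100%
```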
It is very time-consuming to create test cases that cover all eventualities, and the exploratory approach allows test cases to be created ad hoc as the software is examined. There is a time and place for both approaches, and Iterators applies the most cost-effective and appropriate strategies to find bugs and support the current stage of development.
Future case studies will provide details of the next stages of this project with the following examples:
- Automated end-to-end testing of the user interface using Selenium with a Python/Excel framework (a brief sketch of this style of framework appears after this list)
- Updated test case management using TestRail with modular test cases
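As a preview, the following is a minimal sketch of how a Python/Excel-driven Selenium framework might read test steps from a spreadsheet and execute them. The spreadsheet layout, sheet name, column order, and action keywords here are assumptions for illustration, not the project's actual framework:

```python
from openpyxl import load_workbook
from selenium import webdriver
from selenium.webdriver.common.by import By

# Assumed spreadsheet layout: one row per step, with columns
# (locator strategy, locator value, action, input text).
ACTIONS = {
    "click": lambda element, _value: element.click(),
    "type": lambda element, value: element.send_keys(value),
}


def run_steps_from_workbook(path, sheet_name, driver):
    """Read test steps from an Excel sheet and drive the browser."""
    sheet = load_workbook(path)[sheet_name]
    for strategy, locator, action, value in sheet.iter_rows(min_row=2, values_only=True):
        element = driver.find_element(getattr(By, strategy.upper()), locator)
        ACTIONS[action](element, value)


if __name__ == "__main__":
    driver = webdriver.Chrome()
    run_steps_from_workbook("course_offering_steps.xlsx", "Steps", driver)
    driver.quit()
```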
About the Author
Jill Willcox has worked on accessibility issues for most of her professional career. Iterators is an inclusive women-owned small business (WOSB) certified by the Small Business Administration and WBENC. We provide software testing services for websites, mobile apps, enterprise software, and PDF remediation services, rendering PDFs ADA compliant.