At some point during your software testing career, you’re going to get asked – “how long is testing going to take?”
Depending on your seniority, it may be a daily occurrence!
“How long is testing going to take?”
Being able to answer that question with some degree of confidence, backed by information that supports your estimate, will be a critical factor in planning the project you’re contributing to, whether it follows a waterfall, agile or continuous delivery methodology.
Experienced testers will often have a good feel for how much testing needs to be carried out for a given feature or release based on their experiences of having worked on similar deliverables. Supplying a “finger in the air” estimate will be sufficient in many cases. For more complex pieces of work though, or for more demanding stakeholders, a more thorough treatment of the testing activities and effort may be required.
First up, let’s think about the factors you’ll want to take into account when calculating how much effort a software testing activity will need.
Heuristics (rules of thumb) for thinking about software testing effort
- Do you have some documented requirements for the software to be delivered? Are there some undocumented requirements? What resources are available to help you learn more about the software requirements?
- How complex is the software to be tested? What is the software architecture? How will it be built, deployed and supported in a production environment?
- Does the software integrate with some other systems and do those integrations need to be included in the scope of your testing effort?
- How experienced are your testing resources? What skills can they bring to bear on the project? How many of them do you have? Who else might be willing to support the testing effort (business users for example)?
- Does the software to be tested require specific domain knowledge and do your testing resources have experience in the specific domain?
- Do you have the necessary environment(s) available in which to test the software? Are those environments reliable, performant, and well controlled with respect to data and variables?
- Have any specific risks been identified which should be addressed by the testing approach? If not, should there be?
- What tools, techniques and technology have been used to develop the software? Can those same tools, techniques & technologies be used to test the software? If not - what additional tools and approaches will be needed?
Once you’ve done your due diligence and have answers to as many of the questions above as possible (bearing in mind the answers may raise further questions as you go along!), you can start to drill down into specific test estimates.
Work breakdown structure (WBS)
The work breakdown structure is a classic project management technique for (the clue’s in the name) breaking work down into more manageable components, and then structuring its execution.
To formulate a WBS for your testing approach, work through the following steps:
- Scrutinise the requirements. If they’re not documented, speak to the stakeholders and product owners/managers. Create a requirements document of your own, identifying all of the problems that the software is intended to solve for the business and its users.
- Once you understand the requirements, figure out what needs to be tested in order to prove (or disprove) that the delivered software fulfils those requirements. Identify any specific risks that should be called out and addressed by the testing as you do so, e.g. performance and security risks.
Pro-tip: Use a mindmap for planning out your testing coverage and activities
- While you’re working on understanding the requirements, start thinking about what environment(s) are needed to execute the software testing. What kinds of software testing will you perform, and how do those environments need to be configured in order to carry out that testing successfully? Are there specific platforms you should test? Do you need specific data? Are there any other special requirements (such as the ability to change the system time)?
- Start thinking about what specific tests will be carried out. Some people call these test cases; others call them sessions, charters, or ideas. The name doesn’t matter (much). The key point is that you’re starting to get an idea of what the testing actually looks like and breaking that thinking down into a sequence of SPECIFIC activities; items of work that will constitute the WBS. For some more complex tests, you may wish to add detail about what needs to be done to execute the test.
- If some of your tests are to be automated, these will need to be considered in more detail, since automating your testing brings its own environment, configuration, tooling and resource requirements.
- By now you should have a good outline of the various testing activities. You may even have a document that can be shared with your peers and other stakeholders for feedback. I highly recommend doing so, ideally in a presentation or workshop, so that people who may wish to have input into the testing have an opportunity to do so. Garnering this kind of feedback in a meeting or dialogue is vastly preferable to just having things signed off via email. It helps people buy in to the approach being recommended, and to point out problems or risks you may not have identified in the plan.
- Once the approach has been agreed, you can think about the remaining preparatory activities: further identification of test cases, scripting or supplementing those tests with additional information where needed, developing your automated test framework, writing any performance test scripts, and so on.
- Now think about the execution of your testing. How long will it take to run all of your manual tests? If you apply more resources, will it take less time? Are there dependencies on the environment, the deliverables or the tests themselves that mean they can’t be executed until a certain point in time, or until those dependencies have been satisfied?
- Again, once you’ve answered the questions above and reduced your testing to a set of specific, granular activities, you should be in a good position to estimate the effort and time required for each. After that, it’s just a matter of communicating that information to the people who need it.
In a nutshell, when following a WBS style approach to estimating your testing - you need to identify all the activities and dependencies, document them all, and estimate for them.
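The rollup this produces can be sketched as a simple calculation. A minimal illustration in Python, where every activity name and figure is a hypothetical example rather than a recommendation:

```python
# Minimal sketch of rolling up a test estimate from WBS activities.
# All activity names and hour figures are hypothetical examples.

wbs_hours = {
    "Review requirements and identify risks": 8,
    "Prepare test environments and data": 12,
    "Write test cases / session charters": 16,
    "Build automated regression checks": 24,
    "Execute functional tests": 20,
    "Execute performance tests": 10,
}

raw_total = sum(wbs_hours.values())   # 90 hours
buffered_total = raw_total * 1.20     # add a 20% contingency buffer

print(f"Raw estimate:    {raw_total} hours")
print(f"With 20% buffer: {buffered_total:.0f} hours")
```

In practice you’d estimate each activity with the people doing the work; the point is simply that once the work is broken down this far, the overall number falls out of the parts.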
Considerations while estimating your software testing
Irrespective of the specific methodology you’re following – waterfall, agile or continuous – being able to follow this process will be helpful not only to you, the test lead or manager, but to whoever within your team, project or organisation needs the information.
For a waterfall project, the level of test planning will be high, and the granularity of the information you provide should match it.
For a more agile or continuous project, the need for this level of planning is reduced. Agile projects tend to operate on a just-in-time (JIT) basis; you can most likely jot down your test plan as a few bullet points on a post-it, or in a Jira epic. The process you follow will be very similar though.
Keep in mind the following during any test planning, and you won’t go too far wrong:
- The requirements/stories/features to be delivered can form the basis of your estimation
- Make sure you know what resources are available to you – testers, environments, tools, time
- Add a buffer in case things don’t go to plan – 20% is normal
- Document your estimate and keep it up to date
- Identify and track any risks you or others identify during the course of delivery
- Talk through your plan and estimates with your peers and team, solicit feedback and revise if necessary
- Track your estimates against what happened in reality, so you get better next time!
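That last point, tracking estimates against actuals, can be as lightweight as recording both figures per activity and reviewing the variance at the end of a release. A hypothetical sketch, with illustrative figures only:

```python
# Hypothetical sketch: compare estimated vs actual effort per activity
# to calibrate future estimates. Names and hours are illustrative only.

history = [
    # (activity, estimated hours, actual hours)
    ("Write test cases", 16, 20),
    ("Execute functional tests", 20, 18),
    ("Build automation", 24, 36),
]

for name, estimated, actual in history:
    variance_pct = (actual - estimated) / estimated * 100
    print(f"{name}: estimated {estimated}h, actual {actual}h ({variance_pct:+.0f}%)")

# The overall ratio suggests how much to scale the next round of estimates.
ratio = sum(a for _, _, a in history) / sum(e for _, e, _ in history)
print(f"Overall actual/estimate ratio: {ratio:.2f}")
```

Even a crude ratio like this, reviewed release over release, tends to make the next “how long is testing going to take?” answer noticeably more defensible.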