The first question I usually need to handle when coaching a test management client is “Why do we need to test the software our developers are going to deliver?” – the underlying assumption being that the software will be delivered to them packaged in some kind of neat bundle, that all their requirements will be met, and that everything will work exactly as they expect.

Of course, that is very rarely (if ever) the case, and it is usually to dispel this belief that a client has engaged me as a test management coach in the first place: there is some risk that the delivered software will NOT work exactly as expected and will NOT meet all the end users’ requirements.

So that’s where I start.

Software testing is all about identifying RISKS associated with the delivery of a given application, and then exploring and mitigating those risks by exercising the software in various ways and using various techniques.

That’s a bit of a mouthful though, so I’m going to expand on the key concepts (what software testing is, and some common approaches to it) in this guide.

What’s the big idea behind software testing?

The key thing we’re trying to address by carrying out software testing is the prevention of some kind of failure once the software goes live and starts being used by potentially a great many users. There are a variety of contexts into which software can get released; for some of those contexts, the risks associated with failure can be very high indeed (e.g. in the case of software used to control a medical device and monitor someone’s health, or in the case of guidance controls on a missile).

For other contexts, the risks may be somewhat less critical; we may be talking about a warehouse control system for example where the worst thing that can happen is a pallet of goods gets lost somewhere.

Nevertheless, those risks, no matter how great or small, will have some impact on somebody, somewhere. And the realisation of some risk can be gauged in terms of its level of impact and severity.

Thus, we have a few key terms to define:

Risk

Risk is a situation that exposes someone or something to danger. In my first example (that of a medical device) we can identify several dangers:

  1. A danger to the life of a patient, from the device not monitoring the status of their health correctly.
  2. A danger to the healthcare provider, from the device failing to warn them of a patient’s deteriorating health, for example.
  3. A danger to the device manufacturer, of being sued by the patient (or their relatives), or by the healthcare provider, in the event of the device failing.

As the software developer or provider, we should attempt to identify these kinds of risks ahead of delivery so that we can address them, through improved software development practices, or through testing; most likely, both.

Impact

Once we have identified some risks, we can start to think about the impact of those risks. Following on from the example above:

  1. In the event of failure, the patient may be impacted.
  2. The healthcare provider may also be impacted.
  3. And the developer or manufacturer of the medical device would be impacted.

How much they would be impacted is answered by the question of severity.

Severity

Sticking with the medical device example, we can easily imagine the severity of the impacts I have already cited:

  1. In the worst situation, we may have a dead patient on our hands. That’s pretty severe, so any issues identified as a result of testing that were likely to have this impact would be highly prioritised (priority and severity are often used interchangeably). If during the course of some testing I identified some kind of bug that may have the impact of killing a patient, I would rank that issue as being of HIGHEST severity.
  2. Having a healthcare provider sue my hypothetical company is not quite as serious as killing a patient; but it’s still serious. I would rate issues that may result in litigation as HIGH.
  3. And for issues relating to the manufacturer’s reputation, or that would have a financial impact (e.g. fewer sales) - I would rank my issues accordingly.

In a nutshell:

We identify risks and test for the presence of issues that would realise those risks. In the event that we find bugs, issues or other problems with the software that would lead to those risks being realised, we gauge their impact (on the end user, the purchaser, or on the manufacturer or developer of the product being tested) – and we assign some kind of severity to the issue, based on how serious the impact is.
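To make that concrete, here is a minimal sketch of how a test team might record issues against the risks they realise. The names and severity bands are purely illustrative, not taken from any particular tool or standard:

```python
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    """Ranking used to prioritise issues; higher means more serious."""
    LOW = 1       # e.g. cosmetic problems, minor reputational impact
    MEDIUM = 2    # e.g. financial impact such as lost sales
    HIGH = 3      # e.g. likely litigation from a customer
    HIGHEST = 4   # e.g. risk to a patient's life


@dataclass
class Issue:
    """A defect found during testing, tied back to the risk it realises."""
    summary: str
    risk: str
    impact: str
    severity: Severity


# The medical device example from above, expressed as ranked issues.
issues = [
    Issue("Alarm not raised when readings deteriorate",
          risk="patient harm", impact="patient", severity=Severity.HIGHEST),
    Issue("Audit log loses entries under load",
          risk="litigation", impact="healthcare provider", severity=Severity.HIGH),
]

# Work through the most severe issues first.
for issue in sorted(issues, key=lambda i: i.severity, reverse=True):
    print(issue.severity.name, "-", issue.summary)
```

The point is simply that every issue carries a severity derived from the impact of the risk it realises, and the most severe issues get dealt with first.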

What are some common types of software testing?

Once we’ve gotten to grips with what the problems or risks are that we’re trying to address with the testing, it’s generally a matter of designing some suitable tests. Those tests need to exercise the software being tested such that the issues we are trying to catch (before a patient does, for example) manifest themselves in some way that can be observed by the software tester.

A huge number of different approaches can be employed for this purpose. There are some common ones however, which I will list here:

Usability testing & A/B testing

Before we even start building a piece of software, it can be (and usually is) valuable to develop a prototype of the application, system or device - so that we can show it to some potential users and garner feedback as to its suitability for their purposes.

Normally this would be carried out by a designer or design team, and would be based on some minimal set of mockups or low-fidelity design material. The prototype is shown to a volunteer user in a timeboxed session, during which they will have an opportunity to work through various objectives and see whether they can be completed successfully.

If they cannot be completed successfully, or the user has strong negative feedback, then it may be back to the drawing board for the design team.

In some scenarios, the designer may have alternative designs or paths through the application – design or path A, and design or path B – which can be varied by user. Hence the term A/B testing.

Unit testing

Unit testing is carried out by the developer(s) of a piece of software. It is not generally carried out by a separate team.

The objective of unit testing is to focus on and validate that a specific component (or unit) of the code which constitutes the software to be delivered is working correctly. As such, it is the job of a unit test to isolate a specific piece of the code (often involving the mocking or stubbing of interrelated pieces of code) - and verify that with some specific inputs, it produces the expected outputs.
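As a hedged sketch (the function and its collaborator are hypothetical, invented for illustration), a unit test in Python might isolate one piece of logic and stub out everything it depends on:

```python
from unittest.mock import Mock


def alert_level(reading: float, threshold_service) -> str:
    """Hypothetical unit under test: classify a single sensor reading."""
    limit = threshold_service.get_limit()
    return "ALARM" if reading > limit else "OK"


def test_alert_level_raises_alarm_above_limit():
    # Stub the collaborator so the test exercises alert_level in isolation.
    thresholds = Mock()
    thresholds.get_limit.return_value = 100.0

    assert alert_level(120.0, thresholds) == "ALARM"


def test_alert_level_ok_at_or_below_limit():
    thresholds = Mock()
    thresholds.get_limit.return_value = 100.0

    assert alert_level(100.0, thresholds) == "OK"
```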

Integration testing

Integration testing is the next kind of testing you would likely want to perform after unit testing. Instead of focusing on a single component or unit, your testing turns to interrelated components, and verifies that those components perform correctly together, under various conditions.

Since the number of interactions between different components (and therefore the number of tests to be performed) tends to be substantially higher than for a single unit, this is typically an area where dedicated testers or QAs will become more involved.
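Continuing the same hypothetical example, an integration test exercises two (or more) real components together, with no mocking in between:

```python
class InMemoryReadingStore:
    """Simple storage component."""

    def __init__(self):
        self._readings = []

    def add(self, reading: float) -> None:
        self._readings.append(reading)

    def latest(self) -> float:
        return self._readings[-1]


class MonitoringService:
    """Component that depends on the store."""

    def __init__(self, store: InMemoryReadingStore):
        self._store = store

    def record_and_check(self, reading: float, limit: float) -> str:
        self._store.add(reading)
        return "ALARM" if self._store.latest() > limit else "OK"


def test_service_and_store_work_together():
    # No mocks: the service is wired to a real store, and the test verifies
    # that the two components behave correctly in combination.
    service = MonitoringService(InMemoryReadingStore())
    assert service.record_and_check(80.0, limit=100.0) == "OK"
    assert service.record_and_check(130.0, limit=100.0) == "ALARM"
```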

Functional testing

I’d usually lump functional testing in with acceptance testing, since basically what you’re looking to validate with functional testing is that the output of some function meets the specified requirements of that function. So – is it acceptable to the person who wrote the original requirements?

That’s not to say that functional testing is simply a matter of comparing each function to the requirements document. Often, there may not even be a requirements document.

Functional testing can often be an exploration of the functionality in question, since the tester should be aiming to exercise all of the boundary and edge cases of the functionality, as well as the easier (and perhaps specified) use cases.

It is the tester’s responsibility during a functional testing cycle not only to question whether the application works as intended, but also whether it works under a range of conditions, in a variety of states, and when subjected to erroneous and even malicious inputs.

The tester should constantly be asking themselves: “How can I break this?”
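A hedged sketch of what that looks like in practice, using an invented discount rule: the tests deliberately probe the boundary values and an erroneous input, not just the happy path:

```python
import pytest


def discount(order_total: float) -> float:
    """Hypothetical function under test: 10% off orders of 100 or more."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return order_total * 0.9 if order_total >= 100 else order_total


@pytest.mark.parametrize("total, expected", [
    (0, 0),                           # smallest valid order
    (99.99, 99.99),                   # just below the boundary: no discount
    (100, 90.0),                      # on the boundary: discount applies
    (100.01, pytest.approx(90.009)),  # just above the boundary
])
def test_discount_boundaries(total, expected):
    assert discount(total) == expected


def test_discount_rejects_erroneous_input():
    with pytest.raises(ValueError):
        discount(-1)
```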

User acceptance testing (UAT)

User Acceptance Testing (UAT) is the process of having the [mostly] finished software tested by some set of end users.

It’s often the final stage of testing before releasing software to a production environment. If it works for the business users (AKA Business Acceptance Testers [BAT]), then the requirements should mostly have been fulfilled and the recipients should be mostly happy.

As a test manager, I’d hope that the UAT phase of a given project is largely ceremonial. I’m aiming to get the blessing of the users who will have to live with the software. Often, that doesn’t happen – and if you get to this phase of a project and there are still lots of issues with your product, it can be a real nightmare.

Managing the UAT phase can be hugely challenging for testers, simply because of the logistics of ensuring that a potentially large number of business people are doing the testing you need them to do, and that they have sufficient knowledge of the system, and of the testing processes & tools, to be able to do so successfully.

Exploratory testing

The thing that differentiates exploratory testing from any of the other styles or approaches to testing we’ve discussed so far (or have still to discuss) - is that the tests aren’t necessarily designed up-front, and further testing will rely on the results of prior testing.

Often with functional, UAT or acceptance testing for example, tests are designed in advance and written in the form of scripts (with varying amounts of detail).

An exploratory approach to your testing means that your tests will be unscripted. That’s not to say they won’t be specified up-front to some extent. In a Session Based exploratory approach for example, the exploratory tests to be executed can be meticulously planned in advance. However, as mentioned above, further testing activities should be determined by the results of prior testing.

What this basically means is, you take a more iterative approach to testing. You explore and test, see what the results are, then explore and test some more based on those results.

Regression testing

Regression tests are tests that we will want to run frequently during the course of a software project, to determine whether changes to the code under development have resulted in unintended and unexpected changes: regressions.

Ideally, these tests will be run each time we get a new build of the software. For this reason, it’s preferred that most of the regression testing should be automated. Depending on the maturity of the team and their practices however, this may not be possible.

The reason we would want to automate as much of the regression testing as possible is that the amount of effort involved in continually running regression tests can be very high. If you have a suite comprising tens, hundreds or in some cases even thousands of tests – you don’t want to be in a position where all those tests have to be run manually if you can possibly help it.
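One common way (though by no means the only one) to keep a regression suite runnable on every build is to tag the relevant tests and have the build pipeline execute just that tag. A hedged sketch using pytest markers; the tests themselves are trivial placeholders:

```python
import pytest


@pytest.mark.regression
def test_order_totals_still_calculated_correctly():
    # Behaviour that worked in previous builds and must not regress.
    assert sum([10, 20, 30]) == 60


@pytest.mark.regression
def test_export_filename_format_unchanged():
    assert "report_2024.csv".endswith(".csv")
```

The regression marker would be registered in pytest.ini, and a CI job could then run pytest -m regression against every new build.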

Smoke/sanity testing

It’s helpful to have a layer of smoke or sanity testing before you carry out most other kinds of testing, to prevent wasted effort in the event there’s some kind of [relatively] easily detectable problem.

The name smoke testing comes from a form of electronics testing that used to be carried out when making circuit boards. The circuit boards would be turned on while still on the assembly line; if any smoke came out of the circuit board, it was obviously defective and would not continue down the line to be used as part of a complete device.

The principle is the same for software testing. We run smoke tests (sanity testing is just a derivative term) to check whether the software is, figuratively, smoking; if it is, there is no point in carrying out further testing until the problem is fixed.
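A hedged sketch of what a smoke suite might contain; the endpoints and environment URL are placeholders, and the third-party requests library is assumed to be available:

```python
import requests

BASE_URL = "http://localhost:8080"  # assumed test environment


def test_application_starts_and_responds():
    # If even the health check fails, deeper testing is pointless.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200


def test_login_page_is_reachable():
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200
```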

Performance testing

Hugely important, but often not adequately tested for, is performance. When we’re thinking about the performance of a given piece of software, there are a couple of key considerations:

  1. How many people do we expect to use it at the same time (concurrently) – what’s the expected workload?
  2. Under that workload, how quickly does the software need to respond?

There’s a lot more detail to those questions, but these are the key considerations. As a rule of thumb, the sooner you can start thinking about the performance of your software, and testing to prove that it meets your stated performance requirements (or not), the better.
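Those two questions translate fairly directly into a test. The following is a rough sketch only, not a substitute for a dedicated load-testing tool (JMeter, k6, Locust and the like); the URL, workload and response-time target are invented for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "http://localhost:8080"  # assumed test environment
CONCURRENT_USERS = 50               # question 1: the expected workload
MAX_RESPONSE_SECONDS = 2.0          # question 2: the required response time


def timed_request(_):
    start = time.perf_counter()
    requests.get(f"{BASE_URL}/search?q=test", timeout=10)
    return time.perf_counter() - start


def test_search_responds_quickly_under_concurrent_load():
    # Fire the requests concurrently to simulate the expected workload.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        durations = list(pool.map(timed_request, range(CONCURRENT_USERS)))

    # Every simulated user should get a response within the target time.
    assert max(durations) <= MAX_RESPONSE_SECONDS
```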

Security testing

Ditto for security testing.

Where performance testing deals with questions of reliability (at scale), security testing is concerned with confidentiality, integrity and availability. Security testing is often outsourced to specialist teams (particularly when it relates to network security) - but the passionate and skilled software tester can definitely seek to include application security within the remit of other areas of testing.
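As a hedged sketch of what that can look like at the application level, here is an invented lookup function and a test checking that a classic SQL injection payload is treated as plain data rather than as executable SQL:

```python
import sqlite3


def find_user(conn: sqlite3.Connection, username: str):
    """Hypothetical lookup that must use a parameterised query."""
    cursor = conn.execute("SELECT name FROM users WHERE name = ?", (username,))
    return cursor.fetchall()


def test_lookup_is_not_vulnerable_to_sql_injection():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

    # A classic injection payload should match nothing, not return every row.
    malicious = "' OR '1'='1"
    assert find_user(conn, malicious) == []
```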

Automated testing

And finally we have automated testing, wherein we do all the above, automatically – using test automation code to test product code.

There are a few things we can try to accomplish with an automated test approach:

  • Increase the breadth of testing (do more testing with less effort)
  • Increase the depth of testing (by iterating through different variables, data etc – see the sketch after this list)
  • Increase the speed of testing (reduce duration)
  • Increase the frequency of testing (run the same tests more often, e.g. as part of a build process)
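As one small illustration of the depth point, a single data-driven test such as the hedged sketch below (the function and test data are invented) covers many input variations with very little extra effort, and can be run on every build:

```python
import pytest


def normalise_postcode(raw: str) -> str:
    """Hypothetical function: tidy up a UK-style postcode."""
    return raw.strip().upper().replace("  ", " ")


# One automated test iterates through many data variations that would be
# tedious to cover by hand.
@pytest.mark.parametrize("raw, expected", [
    ("sw1a 1aa", "SW1A 1AA"),
    ("  SW1A 1AA  ", "SW1A 1AA"),
    ("sw1a  1aa", "SW1A 1AA"),
])
def test_normalise_postcode_variants(raw, expected):
    assert normalise_postcode(raw) == expected
```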

And there are a great many ways of accomplishing those things. See the linked posts for more details.