
Reducing the Test Automation Deficit

By: Henrik Kniberg

Many companies with existing legacy code bases bump into a huge impediment when they want to get agile: lack of test automation.

Without test automation it is hard to make changes in the system because things break without anybody noticing. Defects aren't discovered until the new release goes live; worse, they are discovered by the real users, an embarrassment that results in expensive hotfixing, or even a chain of hotfixes, as each hotfix introduces new, unanticipated defects.

These risks make the team terribly afraid to change legacy code, and therefore reluctant to improve the design of that code, which leads to a downward spiral in the quality of the code as the system grows.

What to do about it

Your main options in this case are:
* Option 1: Ignore the problem. Let the system decline into entropy death and hope that nobody needs it by then.
* Option 2: Rebuild the system from scratch using test-driven development (TDD) to ensure good test coverage.
* Option 3: Start a separate test automation project where a dedicated team improves the test coverage for the system until it is adequate.
* Option 4: Let the team improve test coverage a little bit each sprint.

Guess which approach usually works best in my experience? Yep, the last one - improve test coverage a little bit each sprint.

The third option may sound tempting, but it is risky. Who's going to do the test automation? A separate team? If so, does that mean the other developers don't need to learn how to automate tests? That's a problem. Or is the whole team doing the test automation project? In that case their velocity (from a business perspective) is zero until they are done. So when are they done? When does test automation "end?"

No, it's better to improve test coverage a little bit each sprint. The question is, how is that accomplished?

How to improve test coverage a little bit each sprint

Here's an approach that I like. In summary:
1. List your test cases.
2. Classify each test by risk, how expensive it is to do manually, and how expensive it is to automate.
3. Sort the list in priority order.
4. Automate a few tests each sprint, starting from the highest priority.

Step 1: List your test cases

Think about how you test your system today. Brainstorm a list of your most important test cases: the ones you already execute manually, or wish you had time to execute. Here's an example from a hypothetical online banking system:
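As a minimal sketch, the list can start as nothing more than a flat collection of test names; the entries below are hypothetical, reusing the four tests discussed later in this article (the Python is mine, not the article's).

# Brainstormed test list for a hypothetical online banking system.
# The names are placeholders; list whatever you actually execute
# manually today, or wish you had time to execute.
test_cases = [
    "Block account",
    "Validate transfer",
    "Deposit cash",
    "Change skin",
    # ...and every other test worth keeping
]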

Step 2: Classify your test cases

First, classify your test cases by risk. Look at your list of tests, ignoring the cost of manual testing for the moment. What if you could throw away half of the tests and never execute them? Which tests would you keep? To answer these questions you have to consider both the probability of failure and the cost of that failure.

Highlight the risky tests, the ones that keep you awake at night.

Now think about how long each test takes to execute manually. Which half of the tests take the longest? Highlight those.

Finally, think about how much work it is to write an automation script for each test. Highlight the most expensive half.
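To make the classification concrete, here is a minimal sketch that continues the Python example above (the article itself shows no code, so the structure and field names are assumptions of mine). Each test gets a high/low rating on the three dimensions; the ratings match the four examples discussed below.

HIGH, LOW = "high", "low"

# High/low rating for risk, cost of manual execution, and cost of
# writing an automation script, matching this article's examples.
classified = [
    {"name": "Block account",     "risk": HIGH, "manual_cost": HIGH, "automation_cost": LOW},
    {"name": "Validate transfer", "risk": HIGH, "manual_cost": HIGH, "automation_cost": HIGH},
    {"name": "Deposit cash",      "risk": HIGH, "manual_cost": LOW,  "automation_cost": LOW},
    {"name": "Change skin",       "risk": LOW,  "manual_cost": LOW,  "automation_cost": HIGH},
]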

Step 3: Sort the list in priority order

So, which test do you think we should automate first? Should we automate "Change skin", which is low-risk, easy to test manually, and difficult to automate? Or should we automate "Block account", which is high-risk, difficult to test manually, and easy to automate? That's a fairly easy decision: "Block account".

But here's a more difficult decision. Should we automate "Validate transfer", which is high-risk, hard to test manually, and hard to automate? Or should we automate "Deposit cash", which is also high-risk, but easy to test manually and easy to automate? That decision is context-dependent.

To sort the list, you need to make three decisions:
1. Which do you automate first? The high risk test that is easy to test manually, or the low risk test that is difficult to test manually?
2. Which do you automate first? The test that is easy to do manually and easy to automate, or the test that is hard to do manually and hard to automate?
3. Which do you automate first? The high risk test that is hard to automate, or the low risk test that is easy to automate?

Those decisions will give you a prioritization of your categories, which in turn lets you sort your list of test cases by priority. In my example I decided to prioritize manual cost first, then risk, then automation cost.
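Continuing the Python sketch from step 2, that particular prioritization (manual cost first, then risk, then automation cost) can be expressed as a simple sort key; the function and variable names are mine, not the article's.

def priority_key(test):
    # Lower tuple values sort first: tests that are expensive to run by
    # hand and risky float to the top; cheaper automation breaks the tie.
    expensive_first = {HIGH: 0, LOW: 1}
    cheap_first = {LOW: 0, HIGH: 1}
    return (
        expensive_first[test["manual_cost"]],   # 1. manual cost
        expensive_first[test["risk"]],          # 2. risk
        cheap_first[test["automation_cost"]],   # 3. automation cost
    )

backlog = sorted(classified, key=priority_key)
for position, test in enumerate(backlog, start=1):
    print(f"{position}. {test['name']}")
# 1. Block account, 2. Validate transfer, 3. Deposit cash, 4. Change skin

# Step 4 is then just a matter of taking a few tests from the top of the
# backlog each sprint and automating them.
this_sprint = backlog[:2]

Note that "Validate transfer" ends up above "Deposit cash" only because manual cost was ranked ahead of automation cost; a team that wants quick wins first might swap those two criteria, which is exactly the context-dependent judgment described above.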
