
Transitioning to Agile Testing

By: Johanna Rothman

Some test teams may be stumped on how to transition to agile. If you're on such a team, you probably have manual regression tests, either because you have never had the time to automate them or because you test from the GUI and it doesn't make sense to automate them. You probably have great exploratory testers who can find problems inside complex applications, yet they tend not to automate their testing and need a final product before they start testing. You know how to plan the testing for a release, but now everything has to be done inside a two-, three-, or four-week iteration. How do you make it work? How do you keep up with development?

This is a common problem. In many organizations, developers think they have transitioned to agile while testers are still stuck in manual testing efforts, unable to "keep up" at the end of the iteration. When I explain to these people that they are receiving only partial benefit from their agile transition, the developers and testers both explain that the testers are just too slow.

The problem isn't that the testers are too slow; it's that the team does not own "done." Until the team owns "done" and works together to achieve it, the testers will appear too slow.

Know What "Done" Means
Agile teams can release a working product every iteration. They may not have to release, but the software is supposed to be good enough to release. That means that testing—which is about managing risk—is complete. After all, how can you release if you don't know the risks of release?

Testing provides information about the product under test. The tests don't prove that the product is correct or that the developers are great or terrible, but rather that the product does or doesn't do what we thought it was supposed to do.

That means the tests have to match the product. If the product includes calls to another system, some set of tests has to call that other system. If the product includes a GUI, the tests, at some point, have to use the GUI. But there are many ways to test a system. If you test from under the GUI and build the tests as you proceed, you don't need to test only end to end, and you still receive valuable information about the product under test.
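
For example, here is a minimal sketch of an under-the-GUI test written in Python with pytest. The OrderService class is only a stand-in for your real application layer (the code the GUI itself would call); the names and behavior are illustrative, not taken from any particular product.

# test_orders.py -- an "under the GUI" test: it exercises the application
# layer directly instead of driving the user interface.

class OrderService:
    """Stand-in for the application layer that sits beneath the GUI."""

    def __init__(self):
        self._stock = {"ABC-1": 5}

    def place_order(self, sku, quantity):
        """Confirm an order if there is enough stock, otherwise reject it."""
        if self._stock.get(sku, 0) < quantity:
            return {"status": "rejected"}
        self._stock[sku] -= quantity
        return {"status": "confirmed", "sku": sku, "quantity": quantity}


def test_order_is_confirmed_and_stock_is_reserved():
    service = OrderService()
    order = service.place_order("ABC-1", 2)
    assert order["status"] == "confirmed"
    assert service._stock["ABC-1"] == 3


def test_order_exceeding_stock_is_rejected():
    service = OrderService()
    assert service.place_order("ABC-1", 99)["status"] == "rejected"

Because the test talks to the same code the GUI uses, it still tells you whether the product does what you expect, and it is fast enough to run on every check-in.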

If the developers are only testing from the unit-level perspective, they don't know if a feature is done. If the testers can't finish the testing from the system-level perspective, they don't know if a feature is done. If no one knows if a feature is done, how can you call it done for an iteration? You can't. That's why it's critical for the team to have a team-generated definition of done. Is a story done if the developers have tested it? Is a story done if the developers have integrated and built it into an executable? What about installation? How much testing does a feature need in order to know if it's done or not?

There is no one right answer for every team. Each team needs to look at its product, customers, and risks, and say, "OK, we can say it's done if: all the code is checked in and has been reviewed by someone or written in a paired way; all the developer tests are done; and all the system tests for this feature have been created and run under the GUI. We'll run GUI-based checks every few days, but we won't test each feature through the GUI."

I have no idea if that is a reasonable definition of done for your product. You need to assess the risks of not doing frequent GUI tests for your product. Or, maybe you don't have a GUI for your product, but you have a database. Do the developer tests need to access the database? Maybe, or maybe not. Do the system tests need to access the database? I would think so, but maybe you have a product I can't imagine, and maybe they don't need to all the time. Maybe you need more automated tests that test schema upgrades or transitions before anything else. "Done" depends on your product and its risks. Look at the risks of releasing the product without certain kinds of tests, and you'll see what you need in an iteration to get to a releasable product.
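
If schema upgrades are one of those risks, an automated upgrade test can be quite small. The sketch below uses Python's built-in sqlite3 module and an illustrative list of migration statements; a real product would run its actual migration scripts in order and assert on the schema that results.

# test_schema_upgrade.py -- apply the migrations in order to an empty
# database, then check that the resulting schema is what the product expects.
import sqlite3

MIGRATIONS = [
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE customers ADD COLUMN email TEXT",
]


def apply_migrations(conn, migrations):
    for statement in migrations:
        conn.execute(statement)


def test_upgrade_from_empty_schema_yields_expected_columns():
    conn = sqlite3.connect(":memory:")
    apply_migrations(conn, MIGRATIONS)
    columns = [row[1] for row in conn.execute("PRAGMA table_info(customers)")]
    assert columns == ["id", "name", "email"]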

Create a Just-enough Test Framework

Once you know what you need in an iteration, you'll probably need test automation to get there. Then you'll encounter the "Give a Mouse a Cookie" problem. In a delightful children's book of the same name, if you give a mouse a cookie, he wants a glass of milk to go with it. Then he needs a napkin to wipe the milk off his mouth and a broom to clean the crumbs off the floor. The need for more and more continues until the mouse is tired and wants another cookie, which starts the whole cycle again.

This is what happens when a test group wants a "perfect" test framework for their product. It's a reasonable desire. Unfortunately, you can't always tell what the perfect framework is until the product is complete. If you wait until the product is complete, the testing is tacked onto the end of the project—that's too little, too late.

Instead of a perfect test framework, try developing a just-good-enough test framework for now, and plan to refactor it as you proceed. That provides the test team with enough automation to get started and growing comfort with the automation as the iteration and project proceed. It also doesn't lock you into a framework that no longer fits just because you've spent so much money and time developing it.
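
As a concrete starting point, a just-good-enough framework might be nothing more than one shared pytest fixture that starts and stops the product for each system test. The AppUnderTest class below is a placeholder for however you launch and talk to your own product; the point is that the tests depend only on the fixture, so the team can refactor its internals later without rewriting every test.

# conftest.py -- a deliberately small beginning for a system test framework.
import pytest


class AppUnderTest:
    """Placeholder for starting, stopping, and talking to the product."""

    def __init__(self):
        self.started = False

    def start(self):
        self.started = True   # in a real framework: launch the product here

    def stop(self):
        self.started = False  # in a real framework: shut down and clean up


@pytest.fixture
def app_under_test():
    app = AppUnderTest()
    app.start()
    yield app    # each test runs at this point
    app.stop()   # teardown runs even when a test fails

When the team discovers it needs test data, logging, or a second machine, those needs go into the fixture rather than into every test, which is how the framework gets refactored as you proceed.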

Remember, testers are just like product users. Just as your product users can't always tell what they want or need until they see the product, the testers can't tell what test framework they want or need until they start using one.

Everyone Works on a Story Until the Entire Story is Done

If you know what "done" means and you have a just-good-enough test framework, how does the test team "keep up" with development? By making sure the entire team works on a story until the story is done.

Say you have a story that requires three developers and one tester. The developers work together to create the feature. At the same time, the tester refines the tests and creates enough automation or installs the test into the existing automation framework. But, what happens if you are transitioning to agile and have no framework? Then, one (or more) of the developers works with the tester to install a reasonable framework and add the tests for this feature to that framework.
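
With a shared framework like the conftest.py sketch above, "installing the test into the existing automation framework" can be as small as adding one test file; pytest supplies the shared fixture automatically. The feature and assertion here are hypothetical.

# test_new_feature.py -- a feature test that plugs into the shared fixture.
def test_new_feature_runs_against_the_started_product(app_under_test):
    # The fixture has already started the product for this test.
    assert app_under_test.started
    # Drive the new feature through its service layer or API and assert on
    # the results here, as in the earlier under-the-GUI example.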

There is no rule that says developers can't help testers install test frameworks, write test frameworks, or even write tests to help a story get to done. Since you have a team definition of "done," doesn't it make sense that the team members help each other get to done?
