Software QA FYI - SQAFYI

Approaching Agile Testing

By: Arijit Sarbagna

"Agile testing" is not a completely different testing procedure, but a software testing practice that follows the principles of the Agile life cycle. Its most salient aspect is that it emphasizes testing, and close collaboration with the end users (or at least with the story owners), throughout the project.

Agile testing involves testing as early as possible. Testing early is one of the success factors for any Agile development, provided the development setup supports it by delivering successful builds to the testing team. With its ever-growing maturity, Agile testing is becoming more and more integrated throughout the project life cycle: each feature is "fully tested" as it is developed, rather than most of the testing coming at the end of development.

To elaborate: I've worked in situations in which each development team includes quality-assurance (QA) members at a 4:1 ratio, so a typical eight-member team has two testers. We used CruiseControl.Net to run continuous integration (CI) and to ensure that the QA members on the team always have a buildable solution (in Agile terminology, a "potentially shippable product") to test, even while still in a development environment. Development work thus flowed continuously: code was checked in, built by the CI server, and handed to the testers within the same sprint.
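A CruiseControl.Net setup of the kind described above is driven by a `ccnet.config` file. The following is a minimal sketch only; the project name, repository URL, and paths are hypothetical, not taken from the article:

```xml
<!-- ccnet.config: hypothetical minimal CruiseControl.Net project -->
<cruisecontrol>
  <project name="SprintBuild">
    <!-- Poll source control every 60 seconds for new check-ins -->
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/project/trunk</trunkUrl>
      <workingDirectory>C:\ccnet\SprintBuild</workingDirectory>
    </sourcecontrol>
    <tasks>
      <!-- Build the solution; a broken build never reaches QA -->
      <msbuild>
        <projectFile>Project.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
    </tasks>
    <publishers>
      <!-- Record results so the team (and QA) can see build status -->
      <xmllogger />
    </publishers>
  </project>
</cruisecontrol>
```

The point of the configuration is exactly what the paragraph describes: every check-in produces (or fails to produce) a buildable solution, so the embedded testers always know whether there is something new to test.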

Now, assume the above situation repeats itself in every sprint (which could mean every two to four weeks). Agile testing needs to validate one or more of the new software modules from the customer's perspective during each of these cycles, and it must also consider how and when to handle regression before the eventual release. Testing is therefore no longer a phase; it blends with development, and "continuous testing" becomes the mantra: the only way to ensure continuous progress and eventual success.
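One concrete way to realize "continuous testing" is an automated regression suite that runs on every CI build, with each sprint adding its cases to the suite. The sketch below is illustrative only: the `apply_discount` function stands in for a real sprint deliverable, and the test names are hypothetical.

```python
# Hypothetical regression suite, grown sprint by sprint and run on
# every CI build, so earlier behaviour is re-verified automatically
# instead of in a separate end-of-project test phase.

def apply_discount(price, percent):
    """Toy 'module under test' standing in for real sprint output."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_sprint1_basic_discount():
    # Acceptance check from sprint 1, kept on as a regression case
    assert apply_discount(100.0, 10) == 90.0

def test_sprint2_boundary_values():
    # Sprint 2 added boundary handling; these guard it thereafter
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0

if __name__ == "__main__":
    test_sprint1_basic_discount()
    test_sprint2_boundary_values()
    print("regression suite passed")
```

Because the suite runs on every build, regression cost is paid in small increments each sprint rather than as one large effort before release.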

In such a challenging environment (imagine in particular a multiteam, cross-location situation), where new requirements are implemented and thousands of lines of code (K-LOCs) are checked in, demanding both ad hoc and regression testing, a ten-to-twelve-hour day often seems insufficient. This may eventually lead to churn among team members. How do we handle this?

1. One simple solution is to increase the head count: add more testers to meet the QA requirements. Under a time-and-materials (T&M) contract this might be an interesting proposition, but the client won't be fond of such an option, and in a fixed-price project it's not a worthwhile solution for the vendor either.

2. A better alternative is to streamline a few processes so that life becomes simpler for testers. A few important considerations:

* Involve QA at the beginning of requirement finalization, so that QA members get the maximum possible visibility into the requirements.

* Introduce accountability from a quality perspective. That is, designate an in-development test lead, a test-case writing lead, a story owner, and a business analyst (who works in sync with the story owner to define acceptance criteria).

* As QA starts working on test-case preparation, involve the customer in the review. This helps ensure the completeness of the test cases, and the additional review also weeds out redundant cases and steps.
