One suite of automated tests: examining the unit/functional divide

By: Geoffrey Bache, Emily Bache

ABSTRACT
Extreme Programming (XP) as written [1] prescribes doing and automating both unit and functional testing. Our experience leads us to believe that these two kinds of testing lie at two ends of a more or less continuous scale, and that it can be desirable instead to run an XP project with just one test suite, occupying the middle ground between unit and functional. We believe that this testing approach offers most of the advantages of a standard XP testing approach, in a simpler way. This report explains what we have done, and our theory as to why it works.

1. INTRODUCTION
When we introduced XP at Carmen Systems, our testing procedures were not the worst problem with our development process. We already had automated testing, though not along the lines outlined by Beck, Jeffries et al [1, 2]. Following the advice to “solve your worst problem first”, we began introducing other aspects of XP, expecting that at some point testing would become our “worst problem” and we would need separate unit and functional test suites. That never seemed to happen - we have been doing all the other XP practices in two projects for around 18 months, and our style of automated testing has not only avoided becoming a problem, it has in fact been a great success that fits very well with the rest of XP.

The automated tests we have are perhaps best explained as “pragmatic acceptance tests” - we run the system as closely as possible to the way the customer will run it, while being prepared to break it into subsystems in order to allow fast, easily automatable testing. The overall effect is that the tests are owned by the customer, while being just about fast enough to be run by the developer as part of the minute-by-minute code-build-test cycle.

2. THE CARMEN TEST SUITE
What we have created is an application-independent automatic testing framework written in Bourne shell and Python. The framework allows you to create and store test cases in suites, runs them in parallel over a network, and reports results. For each test case the framework provides stored input data to the tested program via options or standard input redirects. As it runs, the tested program produces output as text or text-convertible files. When it has finished, the testing framework compares (using UNIX “diff”) this output to version-controlled “standard” results. Any difference at all is treated as a failure. In addition, the framework measures the performance of the test, and if it strays outside pre-set limits (for example, if it takes too long to execute), this is also recorded as a failure.
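To make the mechanism concrete, the following is a minimal Python sketch of one test execution, assuming a hypothetical per-test directory layout (options.txt, input.txt, expected_output.txt) and a Python diff in place of UNIX “diff”. The names and file layout are illustrative assumptions only, not taken from the Carmen framework itself.

    import difflib
    import subprocess
    import time
    from pathlib import Path

    def run_test_case(program, case_dir, max_seconds):
        """Run one stored test case and report success or failure.

        Any difference from the stored 'standard' output, or exceeding the
        pre-set time limit, counts as a failure (hypothetical layout)."""
        case = Path(case_dir)
        options = (case / "options.txt").read_text().split()
        stdin_data = (case / "input.txt").read_text()
        expected = (case / "expected_output.txt").read_text()

        start = time.time()
        result = subprocess.run([program] + options, input=stdin_data,
                                capture_output=True, text=True)
        elapsed = time.time() - start

        # Compare actual output with the version-controlled standard results.
        diff = list(difflib.unified_diff(expected.splitlines(keepends=True),
                                         result.stdout.splitlines(keepends=True),
                                         fromfile="expected", tofile="actual"))
        if diff:
            return False, "output differs:\n" + "".join(diff)
        if elapsed > max_seconds:
            return False, "too slow: %.1fs (limit %.1fs)" % (elapsed, max_seconds)
        return True, "ok"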

New tests are added by providing new input options and running the system once to record the standard behaviour against which future runs will be measured. This behaviour is carefully checked by the customer, so that s/he has confidence the test is correct. Once verified, the new test case (i.e. input and expected results) is checked into version control with the others.
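Continuing the same hypothetical layout, a recording step might look like the sketch below: the system is run once and its output is saved as the expected results, ready for the customer to inspect before the new test case is checked into version control. Again, the file names and structure are assumptions for illustration.

    import subprocess
    from pathlib import Path

    def record_test_case(program, case_dir):
        """Run the system once and save its output as the 'standard' results.

        The saved file becomes the expected output that future runs are
        diffed against (hypothetical layout, for illustration only)."""
        case = Path(case_dir)
        options = (case / "options.txt").read_text().split()
        stdin_data = (case / "input.txt").read_text()
        result = subprocess.run([program] + options, input=stdin_data,
                                capture_output=True, text=True)
        (case / "expected_output.txt").write_text(result.stdout)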


