
Tracking Test Scripts

By: traqsoftware

Introduction

It’s a seemingly simple task to track the test scripts run against the different builds of a product under test. Yet when you consider that even a small test project might have 20 different builds of a product and 30 different test scripts to track, that quickly turns into the potential for 600 test script instances:

20 builds X 30 test scripts = 600 possible test script instances

Add to that 5 different testers running the scripts, and different versions of the same test script. Then attempt to pinpoint exactly which test scripts have been run, which versions of the test scripts were used and which testers ran them. Finally, attempt to link all of this to the builds of the product! Suddenly this seemingly simple task explodes into a real problem. “Help! Which scripts have been run against which builds?” is not an uncommon cry from someone in a QA team trying to keep track of it all. So how can QaTraq help you gain control in such a scenario?
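As a back-of-the-envelope illustration of how quickly these numbers grow (the figures are the ones used above; the code itself is illustrative and not part of QaTraq), consider this minimal Python sketch:

    # Back-of-the-envelope arithmetic from the example above (illustrative only;
    # this is not QaTraq functionality).
    builds = 20    # builds of the product under test
    scripts = 30   # different test scripts to track
    testers = 5    # testers who might run the scripts

    possible_instances = builds * scripts
    print(possible_instances, "possible test script instances")              # 600

    # If any of the testers might run any instance, the bookkeeping grows further.
    print(possible_instances * testers, "possible script/build/tester runs")  # 3000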

In Control

Before we describe how to take control, let’s just take a quick look at what it’s like to be in control. We’ll use a small example with 9 different test scripts and 7 builds (or versions) of the product under test. With this small sample we can already see that we have the potential for 63 test script instances (9 test scripts x 7 builds).

Yet with one simple report from QaTraq (see Diagram A) it’s easy to see that in fact we’ve decided to run just 24 instances of our test scripts (24 instances of the 9 different test scripts; e.g. Test Script 1 is run for Build 1, Build 4 and Build 7, which accounts for 3 of the 24 total test script instances).
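The report in Diagram A is essentially a script-by-build matrix. As a rough, hypothetical sketch (not QaTraq’s actual implementation or schema; only the Test Script 1 row below comes from this article), the same information could be modelled like this:

    # Hypothetical model of the "which script ran against which build" report.
    from collections import defaultdict

    # (script title, build, instance id, version) -- one row per test script instance
    instances = [
        ("Test Script 1", "Build 1", "TSC14", "0.1"),
        ("Test Script 1", "Build 4", "TSC23", "0.3"),
        ("Test Script 1", "Build 7", "TSC32", "1.0"),
        # ... the remaining instances would follow
    ]

    # Pivot into a title -> {build: "TSCnn-V.v"} matrix, like the report in Diagram A.
    matrix = defaultdict(dict)
    for title, build, tsc_id, version in instances:
        matrix[title][build] = f"{tsc_id}-{version}"

    for title, row in matrix.items():
        print(title, row)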

What else can we tell from this report? Well, this report highlights many important points about our test script execution strategy. We’ll come on to these points in a moment. First though, we need to understand that a Test Script has three important identifiers (it’s crucial to understand that these three identifiers are not specific to QaTraq; any test process should have test scripts with titles, unique identifiers and clear version control). A brief sketch of how these identifiers fit together follows the list below:

1. Unique ID: every instance of a test script has a unique identifier. This unique numerical identifier is in the format of ‘TSCnn’. For example ‘Test Script 1’ assigned to Build 1 is identified as ‘TSC14’.

2. Version: every instance of a test script has a version. This version identifier is in the format of ‘TSCnn-V.v’, where ‘V’ is the major version number and ‘v’ is the minor version number. For example Version 0.1 of Test Script 1 (TSC1-0.1) is assigned to Build 1. Version 0.3 of Test Script 1 (TSC23-0.3) is assigned to Build 4.

3. Title: the test script title should describe the area of functionality that the test script and the associated test cases cover. The Test Script title is used as a common identifier for different instances and different versions of a similar test script (e.g. the title ‘Test Script 1’ refers to the three instances, TSC14, TSC23 and TSC32). The title ‘Test Script 1’ also encompasses the different versions of the same test script (e.g. v0.1, v0.3 and v1.0 shown for ‘Test Script 1’ in Diagram A).
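As a minimal illustration (hypothetical code, not QaTraq’s API), the three identifiers could be modelled, and the ‘TSCnn-V.v’ form parsed, like this:

    # Hypothetical representation of a test script instance and its three identifiers.
    from dataclasses import dataclass

    @dataclass
    class TestScriptInstance:
        unique_id: str   # e.g. "TSC23" -- unique per instance
        version: str     # e.g. "0.3"   -- major.minor, draft until reviewed up to 1.0
        title: str       # e.g. "Test Script 1" -- common across instances and versions

        @property
        def label(self) -> str:
            # The combined 'TSCnn-V.v' form used in the reports, e.g. "TSC23-0.3".
            return f"{self.unique_id}-{self.version}"

    def parse_label(label: str, title: str) -> TestScriptInstance:
        """Split a 'TSCnn-V.v' label such as 'TSC23-0.3' into its parts."""
        unique_id, version = label.split("-", 1)
        return TestScriptInstance(unique_id=unique_id, version=version, title=title)

    print(parse_label("TSC23-0.3", "Test Script 1").label)  # TSC23-0.3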

From this report we can see that Test Script 1 has evolved from version 0.1 (which was run on Build 1) to version 0.3 (which was run on Build 4). It is possible that this change in versions is because we’ve added test cases to Test Script 1 prior to the script being run against Build 4 (you could see exactly what the changes are by clicking on the two links for TSC14-0.1 and TSC23-0.3).

In Diagram C we can see that we didn’t run Test Scripts 7, 8 or 9 for the final Build (Build 7). Perhaps functionality tested in scripts 7, 8 and 9 hadn’t changed between Build 6 and Build 7. Perhaps though, there were big changes in functionality between Build 6 and Build 7 and we’ve just identified a hole in our testing.

In Diagram C we can also see that Test Scripts 7, 8 and 9 are all still in draft (i.e. their version numbers are at 0.3, 0.3 and 0.2 respectively), whereas Test Script 6 (TSC38-1.0), which was run on the final build, Build 7, was at version 1.0. If we were running a formal review and sign-off process for our test scripts, it could cause us some concern that Test Scripts 7, 8 and 9 were not formally reviewed and moved to version 1.0 for the final test script runs. This supposition is largely based on the assumption that you have implemented a review and sign-off process (we’ll come on to implementing a review process using QaTraq in one of our later articles).

So, we’ve seen how QaTraq can give us control of our test scripts and how QaTraq can help pinpoint where we are with running those scripts against different builds. So how do we build up our test script repository to produce the kind of information seen above?
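Continuing the earlier hypothetical sketch (again, not QaTraq functionality), a few lines of code could flag exactly these two concerns: scripts never run against the final build, and scripts whose latest version is still a draft. Only TSC38-1.0 comes from the article; the other instance ids and the Build 6 runs are assumptions made for the example:

    # Hypothetical checks over a title -> {build: "TSCnn-V.v"} matrix.
    final_build = "Build 7"

    matrix = {
        "Test Script 6": {"Build 7": "TSC38-1.0"},
        "Test Script 7": {"Build 6": "TSC40-0.3"},  # hypothetical id
        "Test Script 8": {"Build 6": "TSC41-0.3"},  # hypothetical id
        "Test Script 9": {"Build 6": "TSC42-0.2"},  # hypothetical id
    }

    # 1. Which scripts were never run against the final build? (a possible hole in testing)
    not_run_on_final = [t for t, runs in matrix.items() if final_build not in runs]

    # 2. Which scripts are still in draft, i.e. their latest version is below 1.0?
    def latest_version(runs):
        return max(float(label.split("-")[1]) for label in runs.values())

    still_in_draft = [t for t, runs in matrix.items() if latest_version(runs) < 1.0]

    print("Not run on final build:", not_run_on_final)  # Test Scripts 7, 8 and 9
    print("Still in draft:", still_in_draft)            # Test Scripts 7, 8 and 9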

Taking Control

There are three key steps (and one optional step) when it comes to building up and taking control of your test scripts with QaTraq:

1. Define the product and the builds to be tested
2. Create a set of ‘placeholder’ test scripts (optional step)
3. Develop the test scripts
4. Copy, delete, modify and create new test scripts

We’re assuming here that you know how to create, modify, delete and copy test scripts in QaTraq. If you need detailed instructions on how to create test scripts please refer to the User Guide (links to the User Guide can be found at the end of this article).

1. Define the product and the builds to be tested

Before you start assigning test scripts to product builds you need to define the product and the builds associated with the product. QaTraq always uses the term ‘Version’ to specify a release of a product, but there’s nothing to stop you using this field to specify ‘Builds’ if you so wish (see the Definitions section for a description of Versions and Builds). With qatraq_6_2 you can also specify dates for builds, which helps to order the versions correctly in the reports.
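To illustrate why build dates matter for ordering (a hypothetical sketch of the idea, not QaTraq’s data model; the dates are invented), builds might be recorded and sorted like this:

    # Hypothetical list of builds (QaTraq 'Versions') with dates, used only to show
    # why dates help order builds correctly in reports.
    from datetime import date

    builds = [
        ("Build 7", date(2006, 6, 30)),
        ("Build 1", date(2006, 1, 15)),
        ("Build 4", date(2006, 3, 20)),
    ]

    # Ordering by date rather than by name keeps reports in true build order.
    for name, built_on in sorted(builds, key=lambda b: b[1]):
        print(name, built_on)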

2. Create a set of ‘Placeholder’ Test Scripts (optional step)

You can define a number of test scripts before the testing even begins (or even after the testing starts) just to identify areas that need to be tested. These placeholder test scripts will not be run and will never have any test results entered against them. However, these placeholders, identifying areas for testing, can be used as templates for creating other test scripts as the project progresses.

So, to start with, define an arbitrary test build, say ‘Build 0’. Then create a number of test scripts (these test scripts don’t even need any test cases for now). Each test script will identify an area for testing (using a descriptive test script title).

From Diagram D you can see that we’ve defined 3 test scripts. These test scripts identify 3 different areas for testing. At the beginning of the project none of these scripts contain any test cases, although we will be adding test cases as the project progresses.
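As a rough sketch of the placeholder idea (hypothetical structures, not QaTraq’s API; the three titles are illustrative since Diagram D is not reproduced here):

    # Hypothetical placeholder scripts assigned to an arbitrary 'Build 0'.
    # They carry descriptive titles but no test cases and will never be run.
    placeholder_build = "Build 0"

    placeholders = [
        {"title": "Installation", "build": placeholder_build, "test_cases": []},
        {"title": "User Login",   "build": placeholder_build, "test_cases": []},
        {"title": "Reporting",    "build": placeholder_build, "test_cases": []},
    ]

    # Later, a placeholder can serve as a template: copy it, target a real build,
    # and start adding test cases as the project progresses.
    def copy_as_template(placeholder, new_build):
        return dict(placeholder, build=new_build)

    draft = copy_as_template(placeholders[0], "Build 1")
    print(draft)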

