Software QA FYI - SQAFYI

Testing: A Sample Test Design


1. Introduction

The primary goal of this document is to establish a plan for the activities that will verify that [Product Name] is a high-quality product that meets the needs of the [Product Name] business community. These activities will focus on identifying the following:

" Items to be tested
" Testing approach
" Roles and responsibilities
" Release criteria
" Hardware
" Project plan and budget
" Risks and contingencies

1.1 Background

Write one or two sentences that describe the system to be tested.
Example:
The Defect Tracker System is a sophisticated bug-tracking tool that allows clients to significantly increase the quality of their software deliverables. This is a new product, so no backwards compatibility is necessary.

1.2 References
List all reference material you used in creating this plan.

Example:
1. Functional Specification, Program Management, xxx 1999
2. Testing Computer Software, Second Edition, Kaner / Falk / Nguyen, 1993
3. Detailed Design, Program Management, xxx 1999


1.3 Code Freeze Date
Production code for [Product Name] will be frozen on MM/DD/YY. Our assumption is that any production code changes made after that date are outside the responsibility of this development project.

1.4 Change Control
After the baseline is established, all changes must be approved and documented by the change control board. If it is agreed that a change is necessary, its impact on development and testing must be agreed upon by the test lead, development lead, and project manager. This may (or may not) affect the planned completion date of the project.

2. Items to Be Tested

2.1 Level of Testing
Below is a list of services that testing may provide. Next to each service is the degree of testing that we will perform. The valid levels are:

High - High-risk area; test this area intensively
Medium - Standard testing

3. Testing Approach


The system test team will begin designing their detailed test plans and test cases while the development team is designing and coding. Defect Tracker will be used to enter the test cases and to track the defects. It can be accessed from http://www.DefectTracker.com.

The builds will be delivered to system test via Visual Source Safe (see cover page for VSS location) drops coordinated by the development team. The development team will be responsible for installing the partial new builds into the existing structure of the system test environment, and updating the client machines if necessary. Build notes with all changes since the last drop and a list of all files to be delivered will accompany each build drop.

Once the build is dropped by the development team, a series of scripts, called the Smoke Test, will be run to ensure that the shipment from development is in a state that is ready for testing. The Smoke Test scripts will test the basic functionality of the system. These scripts may be automated once they are successfully performed manually. If an excessive number of Smoke Test items fail, the product will be shipped back to development and no testing will begin until the Smoke Test passes.
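The build-acceptance flow described above can be sketched as a small harness. The check names and the zero-failure threshold below are illustrative assumptions, not details from this plan:

```python
# Hypothetical smoke-test harness: run a set of basic-functionality checks
# against a new build drop, and reject the drop if too many of them fail.

def run_smoke_tests(checks, max_failures=0):
    """Run each named check; return (build_accepted, failed_check_names)."""
    failures = []
    for name, check in checks:
        try:
            ok = check()
        except Exception:
            ok = False  # a crashing check counts as a failure
        if not ok:
            failures.append(name)
    # If an excessive number of checks fail, ship the build back to development.
    return len(failures) <= max_failures, failures

# Illustrative basic-functionality checks (names are made up):
checks = [
    ("login_page_loads", lambda: True),
    ("new_defect_can_be_saved", lambda: True),
    ("defect_list_displays", lambda: False),  # simulated failure
]

accepted, failed = run_smoke_tests(checks, max_failures=0)
print("Build accepted:", accepted, "Failed checks:", failed)
```

Once the manual scripts are stable, each `check` callable could be replaced by an automated script invocation.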

Once the first drop begins, triage meetings will be held to discuss the bug list with the Project/Development Manager. Triage meetings are used to set priority and severity and to assign bugs. Each week following the first drop, additional drops will be delivered to system test to verify the bugs fixed in the prior drops.

Defect Tracker will be used to track, report and analyze bugs. Prior to triage, a Defect Tracker protocol document will be distributed to the project and development manager to ensure that everyone understands how to use Defect Tracker and how to effectively enter bugs.

3.2 Defect Tracker Setup
The Test Lead will create a project in Defect Tracker so that bugs can be tracked. The project name in Defect Tracker will be [ProjectName].

4. Release Criteria
4.1 Test Case Pass/Fail Criteria
A feature will pass or fail depending upon the results of the testing actions. If the actual output from an action is equal to the expected output specified by the test case, then the action passes. Should any action within a test case fail, the entire feature or sub-feature fails. The specific criteria for test case failure will be documented in Defect Tracker. If a test case fails, it is not assumed that the code is defective; a failure can only be interpreted as a difference between the expected results, which are derived from project documentation, and the actual results. There is always the possibility that the expected results are in error because of misinterpreted, incomplete, or inaccurate project documentation.

Pass criteria:
" All processes will execute with no unexpected errors
" All processes will finish update/execution in an acceptable amount of time based on benchmarks provided by the business analysts and documented by the development team

4.2 Suspension Criteria for a Failed Smoke Test
The system test team may suspend partial or full testing activities on a given build if any of the following occurs:
" Files are missing from the new build.
" The development team cannot install the new build or a component.
" The development team cannot configure the build or a component.
" There is a fault with a feature that prevents its testing.
" Item does not contain the specified change(s).
" An excessive amount of bugs that should have been caught during the component/unit test phase are found during more advanced phases of testing.
" A severe problem has occurred that does not allow testing to continue.
" Development has not corrected the problem(s) that previously suspended testing.
" A new version of the software is available to test.


4.3 Resumption Requirements
The steps necessary to resume testing are:
- Clean the previous code from the machines.
- Re-install the item.
- Verify that the problem that caused the suspension has been corrected.

Resumption of testing will begin when the following is delivered to the system test team:

" A new build via Visual Source Safe.
" A list of all bugs fixed in the new version.
" A list of all the changes to the modules in the new version and what functionality they affect.

4.4 Release to User Acceptance Test Criteria
The release criteria necessary to allow the code to migrate to User Acceptance Testing are as follows:
" There are no open bugs with a severity 1 or 2
" Test cases scheduled for both Integration and system test phases have passed.
" Successfully passes the final regression testing.
" There are no discrepancies between the master setup and the version used during the final regression testing.

4.5 Release to Production Criteria
The release criteria necessary to allow the code to migrate to Production are as follows:
- There are no open bugs with severity 1 or 2.
- All test cases scheduled for the integration and system test phases have passed.
- The code successfully passes the final regression testing.
- There are no discrepancies between the master setup and the version used during the final regression testing.
- The User Acceptance Test was successfully completed.
- The User Acceptance Criteria were met.


5. Hardware

5.2 Server Configuration:

SysTest:
- SQL Server service pack 2
- NT service pack 5

UAT:
- SQL Server service pack 2
- NT service pack 5

Prod:
- SQL Server service pack 2
- NT service pack 5

6. Project Plan
The project plan is created using Project 98 and is linked into the project manager's project plan, so as to eliminate the need to keep a separate copy updated for the project manager.
