
Inside the Software Testing Quagmire

By: Garbaczeski, CIO

Software testing reveals the human failings behind the code. That's why it can become a never-ending exercise in denial. Here are five questions that you can ask to help you cut through to testing's root problems.

There are few things worse than being responsible for a software project mired in testing. To those waiting to use the software, the project seems done. But it isn't. The software needs to be tested to ensure it functions properly and is stable and reliable. And the project manager's frustration mounts as days turn into weeks, weeks turn into months, and - heaven forbid - months turn into years.


This process is doubly frustrating for CIOs removed from the action. Testing managers - who may not be skilled at communicating with CIOs - can distract attention from the real problems by being overly detailed or focusing on irrelevancies.

CIOs must assess the situation for themselves, asking the testing manager the following five questions face to face and observing how much his pupils dilate.

Question #1: Is the software's functionality clear, complete, documented and subject to a formal change process?

You're really asking: Are we trying to hit a moving target?

You're trying to determine: Whether the problem is that the software is poorly defined or that the project's scope has changed.

Interpreting the response: If the software's functionality is not fully documented or is not clear, testers will have difficulty determining whether it meets the project's goals. When functionality is subject to interpretation, test cases might not reflect what was originally intended. If functionality changes because the organization continually adds, modifies or deletes functions, testers will have difficulty keeping up. Only changes critical to the integrity of the software should be allowed.

A related symptom to check: Intense debate about requirements and test results.

Question #2: Is development complete?

You're really asking: Are the testers essentially starting over with each new release because there are so many changes?

You're trying to determine: Whether the software has been released for testing prematurely, or whether changes are uncontrolled.

Interpreting the response: Software released prematurely will differ markedly from the previous release. With all the changes, testing performed on a previous release might no longer be relevant to the new one. If testing of one release is not completed before the next one arrives, there will be no comprehensive understanding of release defects.

After each release, the software will change due to user feedback. But problems will occur if developers and testers do not agree about which changes will be made. If developers decide to implement sweeping design changes or to improve software already functioning correctly, the testers will be the dubious beneficiaries of releases that behave very differently from previous ones. Again, testing efficiency will be very low.

A related symptom to check: Complaints about the frequency of releases, about releases being delivered without notice or about significant changes in a release.

Question #3: Are test cases comprehensive and repeatable; are they executed in a controlled environment?

You're really asking: Is testing ad hoc or disciplined?

You're trying to determine: Whether testing is effective.

Interpreting the response: There should be a set of repeatable test cases and a controlled test environment where the state of the software being tested and the test data are always known. Without these, it will be difficult to discern true software defects from false alarms caused by flawed test practices.
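
To make the distinction concrete, here is a minimal sketch of a repeatable test case, assuming a Python codebase and a hypothetical apply_discount function invented purely for illustration. The inputs and expected results are fixed and explicit, so the same check can be rerun against every release and a failure points to the software rather than to the test.

# Minimal repeatable test cases (runnable with pytest).
# apply_discount() is a hypothetical stand-in for the code under test.

def apply_discount(price: float, percent: float) -> float:
    """Stand-in for the software being tested."""
    return round(price * (1 - percent / 100), 2)

def test_discount_standard_rate():
    # Fixed, known test data: the same inputs on every run, no hidden state,
    # so results are comparable from release to release.
    assert apply_discount(100.00, 15) == 85.00

def test_discount_zero_percent():
    # Boundary case documented as part of the suite, not improvised ad hoc.
    assert apply_discount(100.00, 0) == 100.00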

A related symptom to check: Temporary testers conscripted from other parts of the organization to "hammer" the software without formal test cases - a sign that the organization is reacting to poor testing by adding people to compress the test schedule rather than addressing the problem's root causes.

Question #4: Is there a process being followed to evaluate each defect and prioritize its resolution?

You're really asking: Is the organization tackling the most severe problems first and agreeing on the contents and timing of the next release?

You're trying to determine: Whether the organization is making good decisions about where to apply its resources.

Interpreting the response: Defects vary in severity. For example, a defect in the cosmetics of a screen form is less severe than a defect that stops the software cold. A defect that impacts many users is more severe than one that impacts few users. The order in which the development team resolves defects should be in line with their severity.

Trouble occurs when the development and test teams do not communicate about which defects to remedy and in which order. For the software to improve and the test phase to move toward completion, the two teams must collaborate.
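
As a rough illustration of that prioritization rule, the sketch below orders a hypothetical defect list by severity and then by the number of users affected; the field names and severity scale are assumptions, not a prescribed scheme.

# Illustrative defect triage: most severe, widest-impact defects first.
from dataclasses import dataclass

@dataclass
class Defect:
    defect_id: str
    severity: int         # 1 = stops the software cold ... 4 = cosmetic
    users_affected: int

open_defects = [
    Defect("D-101", severity=4, users_affected=3),    # cosmetic screen-form issue
    Defect("D-102", severity=1, users_affected=250),  # crash on save
    Defect("D-103", severity=2, users_affected=40),
]

# Sort ascending by severity (1 is worst), then descending by users affected.
triage_order = sorted(open_defects, key=lambda d: (d.severity, -d.users_affected))

for d in triage_order:
    print(f"{d.defect_id}: severity {d.severity}, {d.users_affected} users affected")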

A related symptom to check: The number of highest-severity defects does not diminish over time; friction exists between development and test organizations.

Question #5: Does the organization collect testing metrics at regular intervals? The total number of test cases? The number that passed and failed? The number of defects - by degree of severity - in the process of being fixed?

You're really asking: Can the organization quantify the state of testing?

You're trying to determine: Can the organization measure progress?

Interpreting the response: Metrics enable informed testing decisions. If metrics are not recorded and published on a regular basis, progress will remain uncertain.

Metrics relating to test cases and defects must be captured, published and tracked. With these metrics you can determine whether defects are climbing, cresting or diminishing, and whether the most severe defects are being attacked first. You will see trends and be able to make corrections.
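
A minimal sketch of those metrics follows, assuming test-run results and open-defect severities are available as simple records; real test-management tools report the same numbers, but the underlying arithmetic is no more than this.

# Illustrative metrics snapshot for one reporting interval.
from collections import Counter

test_results = ["pass", "pass", "fail", "pass", "fail"]     # one entry per executed test case
open_defect_severities = ["high", "medium", "high", "low"]  # one entry per open defect

total_cases = len(test_results)
passed = test_results.count("pass")
failed = test_results.count("fail")
defects_by_severity = Counter(open_defect_severities)

print(f"Test cases executed: {total_cases} (passed: {passed}, failed: {failed})")
print("Open defects by severity:", dict(defects_by_severity))

# Capturing the same snapshot at each interval is what makes trends visible:
# whether defects are climbing, cresting or diminishing, and whether the
# highest-severity defects are being worked first.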

A related symptom to check: There are differing opinions about the state of testing, open defects and trends.

Because software testing ultimately exposes human failure, it's difficult to know whether the process is achieving its goal of creating the best software. People don't like to admit mistakes. They can go to extraordinary lengths to hide mistakes or take unilateral steps to try to remedy problems before others can discover them. "Busy-ness" is no guarantee of progress - indeed, it may indicate the worst kind of testing failure. CIOs can provide a critically important perspective on the process to get testing back on track and keep it there.

