
Software Quality Unpeeled

By: Dr. Jeffrey Voas

The expression software quality has many interpretations and meanings. In this article, I do not attempt to select any one in particular, but instead help the reader see the considerations that underlie software quality. Software quality is much more than standards, metrics, models, testing, and the like. This article digs into the mystique behind this elusive area.

The term software quality has been one of the most overused, misused, and overloaded terms in software engineering. It is a generic term that suggests quality software but lacks a generally agreed-upon meaning. Attempts have been made to define it. The Institute of Electrical and Electronics Engineers (IEEE) Standard 729 defines it as:
… totality of features of a software product that bears on its ability to satisfy given needs and … composite characteristics of software that determine the degree to which the software in use will meet the expectations of the customer.
However, such attempts are few and imprecise. In fact, the second edition of the Encyclopedia of Software Engineering [2] does not even list it as an entry; the encyclopedia skips straight from “Software Productivity Consortium” to “software reading.” Worse, books with software quality in the title never define it in their pages.

If you review the past 20 years or so, you will find an abundance of other terms that have been employed as pseudo-synonyms for software quality. Examples include process improvement, software testing, quality management, International Organization for Standardization (ISO) 9001, software metrics, software reliability, quality modeling, configuration management, Capability Maturity Model® Integration, benchmarking, etc. In doing so, the term software quality has wound up representing a family of processes and ideas more than it represents good enough software. In short, software quality has become a culture and community more than a technical goal.

In this article, I will avoid the quicksand associated with trying to come up with a one-size-fits-all definition. Instead, I will show how software quality is composed of various layers and how, by peeling off different layers, a typical software supplier and end user can have a rational discussion and reach agreement as to whether or not the software is good enough.

We will begin dissecting software quality by first looking at the multiple viewpoints behind the term certification. This will provide us with a look into our first layer.
The term is often used to refer to certifying people's skills. For example, the American Society for Quality (ASQ) has a host of certifications that individuals can attain in order to demonstrate competence in certain fields, e.g., they can become an ASQ Certified Software Quality Engineer. An individual can also become certified in specific commercial software packages, e.g., a Microsoft Certified Software Engineer.
For the purposes here, I employ a different perspective that comes from three schools of thought. The first school deals with certifying that a certain set of development, testing, or other processes were applied during the pre-release phases of the life cycle. In doing so, you certify that the processes were followed and completed. (Demonstrating that they were applied correctly is a trickier issue.) In the second school, you certify that the developed software meets its functional requirements; this can be accomplished via various types of testing or other analyses. In the third school, you certify that the software itself is fit for purpose. This third school will be the most useful here, and throughout this article software will be considered good enough if it is fit for purpose.

In this article, the term purpose suggests that two things are present: (1) executable software; and (2) an operating environment. An environment is a complex entity: It involves the set of inputs that the software will receive during execution, along with the probability that those events will occur [6]. This is referred to as the operational profile [6]. But it also involves the hardware that the software operates on, the operating system, available memory, disk space, drivers, and other background processes that are potentially competing for hardware resources, etc. These other factors are as much a part of the environment as are the traditional inputs; they have been termed invisible or phantom users. In some instances, phantom users more heavily determine whether the software is fit for purpose than the traditional inputs do. In short, it is the environment that gives fit for purpose its context. By more completely defining and thus bounding the environment to include phantom users, we gain an advantage in that we can reduce the set of assumptions needed to predict whether the software is good enough. Understanding the distinction between traditional inputs and phantom users is one ingredient needed to argue that fit for purpose has been achieved. Further, note that rarely will there be only one environment that software, and in particular general purpose software, will encounter during operation. That offers a key insight as to why general purpose software is not certified by independent laboratories; such laboratories could not be
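
To make the operational profile idea concrete, the following is a minimal sketch in Python of how such a profile might be represented and used to draw usage-weighted test inputs. The input classes, their probabilities, and the function names are illustrative assumptions, not taken from the article or from any particular tool.

    import random

    # Hypothetical operational profile for one environment: input classes the
    # software is expected to receive, each with an estimated probability of
    # occurrence. The class names and probabilities are illustrative only.
    OPERATIONAL_PROFILE = {
        "query_small_record": 0.60,
        "query_large_record": 0.25,
        "update_record": 0.10,
        "malformed_request": 0.05,
    }

    def sample_test_inputs(profile, n, seed=None):
        """Draw n input classes at random, weighted by the profile, so that
        test effort mirrors expected field usage for this environment."""
        rng = random.Random(seed)
        classes = list(profile)
        weights = [profile[c] for c in classes]
        return rng.choices(classes, weights=weights, k=n)

    if __name__ == "__main__":
        # A usage-weighted plan of 20 test cases for this profile.
        for case in sample_test_inputs(OPERATIONAL_PROFILE, 20, seed=1):
            print(case)

Drawing test inputs in proportion to the profile is one common way to align testing with a particular environment; a different environment (or the phantom users described above) would call for a different profile and, potentially, different conclusions about fitness for purpose.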
