
Testing, testing, one to three, testing

By: ITworld

Imagine the brain of a software developer in which the left hemisphere is devoted to pragmatism and the right hemisphere is devoted to intellectual rigor and correctness. Let's travel through that brain, surfing on a wave of thought, from left earlobe to right earlobe. The thought we are surfing on is this: "How do I know my software is correct?"

As we travel from left to right, neurons fire in all directions. Out of the synaptic snowstorm that ensues, three answers solidify from different sectors of the brain. They are, in order:

Sector 1 Answer: Working software is software that gets the job done before my boss fires me.

Sector 2 Answer: Working software is software that passes all my tests, including the test of time (i.e., not only has it passed all the tests I have put it through, but it has also been running error-free for a long time).

Sector 3 Answer: Working software is software that I can prove to be bug free using a formal, mathematical theorem proving mechanism.

As an industry, we have spent a lot of time in sector 1. Software development, especially commercial software development, is a high-pressure activity in which engineering requirements need to be traded off against a heady concoction of requirements from other aspects of the business. We need to get the stuff to work before the customer walks down to our competitors, before the boss blows a gasket, and so on.

As software development has matured over the years, we have codified more and more best practice. This best practice lore is pushing us more and more towards sector 2. This is good. Anything is better than the 'ship it, then worry' realities of sector 1.

Few would argue that in an ideal world, the science of computer science would allow us to skip the empirical world of sector 2 and head straight for the logical nirvana of formal correctness over in sector 3. After all, as Edsger Dijkstra famously observed, testing can show only the *presence* of bugs, never their *absence*. We can never say that we know our software works; we can only say that, so far, we have failed to show that it does not.

Personally, I think we will stay in sector 2 for a long, long time. Perhaps even forever. You don't need to spend much time contemplating the enormity of the problem of formal correctness to realize that we have a long way to go to get there. Indeed, we may never get there. Formal correctness as a field of endeavor appears to have a history akin to that of artificial intelligence. After a period of considerable activity and excitement in the mid to late Eighties, activity has died down and expectations seem to have been lowered as to what can realistically be achieved.

So, it would appear, testing is where it is at. Certainly the field is a hive of activity at the moment, with phrases like 'extreme programming' and 'test-driven development' on many a software developer's lips.

I have always been a fan of testing. In the XML and EAI worlds where I spend a lot of time, we make extensive use of test suites to test system modules in isolation before plugging them together. XML is a big help here as we can formally capture the structure of information to be processed using a machine readable contract language known as a schema. Common examples of XML contract languages include DTD, Relax NG or W3C XML Schema. We can then instruct the software at runtime to check that the data meets the expectations expressed in the contract. Note that I said 'at runtime'. Put another way, we check things dynamically.
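By way of illustration (a sketch of mine, not code from the article), here is roughly what that runtime contract check can look like in Python, assuming the third-party lxml library and an invented 'order' schema:

    from lxml import etree

    # The contract: an order must contain a single quantity element holding an integer.
    SCHEMA = etree.XMLSchema(etree.fromstring("""
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="order">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="quantity" type="xs:integer"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    """))

    def accept(xml_bytes):
        # Check the incoming data against the contract at runtime, before any module sees it.
        doc = etree.fromstring(xml_bytes)
        if not SCHEMA.validate(doc):
            raise ValueError(str(SCHEMA.error_log.last_error))
        return doc

    accept(b"<order><quantity>3</quantity></order>")          # meets the contract
    try:
        accept(b"<order><quantity>lots</quantity></order>")   # rejected: not an integer
    except ValueError as err:
        print("rejected:", err)

The same check could be written against a DTD or a Relax NG schema; the point is that the expectation travels with the data and is enforced while the system runs.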

A testing focus has a way of veering your thoughts towards a dynamic model of software construction. A model in which adding and running tests is a continuous activity, not an activity performed solely as part of a beta test programme. In such a model, it is the most natural thing in the world to think of software itself as stuff that is as changeable as the tests are. The concept of fixed, pre-compiled objects begins to grate against the ear in this world. We don't want to have to drop our testing tools to re-run some long-winded compilation process for the system. We want to turn around software changes quickly. The quicker we can turn them around, the more quality time we can devote to tests. The more tests we have, the higher quality our software will be.
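To make that rhythm concrete, here is a minimal sketch of the kind of small, fast unit test such a workflow leans on; the parse_quantity function and the test names are invented for illustration, not taken from the article:

    import unittest

    def parse_quantity(text):
        # Hypothetical function under test.
        return int(text.strip())

    class ParseQuantityTests(unittest.TestCase):
        def test_plain_integer(self):
            self.assertEqual(parse_quantity(" 3 "), 3)

        def test_non_numeric_input_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_quantity("lots")

    if __name__ == "__main__":
        unittest.main()

A suite like this runs in milliseconds, so it can be re-run after every small change rather than saved up for a formal test phase.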

The combination of the desire for dynamic, rapid coding/testing with XML's emphasis on runtime validation of data can color your assessment of programming languages for projects quite significantly. Simply put, there are programming languages that get you to 'declare' lots of stuff in advance, putting instructions about the shape of data, for example, directly into your programs, which are then "compiled" into an unchangeable, machine-readable form. These are broadly known as static programming languages. C++, Java, C#, and Delphi fall into this category (more or less). Then there are programming languages that let you do this sort of stuff while your software is running and/or allow you to change the software after it has been deployed. They are broadly known as dynamic or interpreted languages. Python, Jython, Smalltalk, and Scheme are examples (more or less).
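As a rough illustration of the dynamic style (my sketch; the field names are invented, not the article's), the 'shape' of a record can itself be ordinary data, checked, and even revised, while the program runs:

    EXPECTED_SHAPE = {"customer": str, "quantity": int}

    def check_shape(record, shape=EXPECTED_SHAPE):
        # Verify at runtime that a dict carries the declared fields and types.
        for field, field_type in shape.items():
            if field not in record or not isinstance(record[field], field_type):
                raise TypeError("field %r missing or not a %s" % (field, field_type.__name__))
        return record

    check_shape({"customer": "Acme", "quantity": 3})   # accepted while the program runs

    # The 'declaration' is plain data, so it can be revised without a compile step:
    EXPECTED_SHAPE["priority"] = int

In a static language the equivalent declaration would typically be fixed at compile time; here it is just another value the running program (and its tests) can inspect and change.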

Today, there is a significant imbalance between the worlds of static and dynamic programming language use in commercial IT. The balance has been firmly on the side of the static languages, with occasional pockets of dynamic language usage. I suspect that the increasing emphasis on testing during and after software development is beginning to tilt the balance the other way.

It would be wrong, I think, to attribute the recent growth of interest in Python/Jython, Smalltalk, PHP, Scheme, Ruby, etc. purely to the emergence of testing as a key weapon in software development. These languages have many other attributes that appeal to software developers too. However, I believe that testing is a key contributor and will be an even stronger one in the future.

Helping to tilt the balance is the growing realization that the extra formalisms in statically compiled programming languages, designed to help you feel confident in the correctness of your programs, come at a price that may not be worth paying. The details are beyond the scope of this article but I would recommend Bruce Eckel[1] and Tim Bray[2] on this subject.
