Software QA FYI - SQAFYI

Totally Data-Driven Automated Testing

By: Keith Zambelich

Professional History and Credentials: I have been involved in Software Testing and Software Quality Assurance for the past 15 years, and have tested a number of software applications on a variety of platforms. I have also been involved in some form of automated testing or another during this period.

At First Interstate Services Corporation, I was involved in testing Electronic Funds Transfer (EFT) applications written in ACP/TPF. Here I developed the "Transaction Simulation Test Facility", which allowed testers to simulate transactions to and from all of First Interstate’s connections (VISA, Cirrus, Great Western, etc.). This consisted of over 400 programs written in ACP/TPF, and enabled testers to verify all application modifications, including the front-end switching (Tandem) software.

At the Pacific Stock Exchange, I was in charge of testing all Exchange software, including their new PCOAST trading system. I developed and implemented a method of simulating Broker transactions, eliminating the need for live testing with Brokers. This resulted in a greatly improved implementation success rate.

During my employment at Omnikron Systems, Inc. (a software consulting company based in Calabasas, CA) I successfully implemented Automated Testing solutions using Mercury Interactive’s WinRunner® test tool for a variety of companies, including Transamerica Financial Services, J. D. Edwards Co., IBM Canada, PacifiCare Health Systems, and Automated Data Processing (ADP). While at Omnikron Systems, I developed a totally data-driven method of automated testing that can be applied to any automated testing tool that allows scripting.

I have been certified as a WinRunner® Product Specialist (CPS) by Mercury Interactive, Inc. Currently, I am the President and CEO of Automated Testing Specialists, Inc., a consulting firm specializing in the implementation of automated software testing solutions.

--------------------------------------------------------------------------------

Introduction: The case for automating the Software Testing Process has been made repeatedly and convincingly by numerous testing professionals. Most people involved in the testing of software will agree that the automation of the testing process is not only desirable, but in fact is a necessity given the demands of the current market.

A number of Automated Test Tools have been developed for GUI-based applications as well as Mainframe applications, and several of these are quite good inasmuch as they provide the user with the basic tools required to automate their testing process. Increasingly, however, we have seen companies purchase these tools, only to realize that implementing a cost-effective automated testing solution is far more difficult than it appears. We often hear something like "It looked so easy when the tool vendor (salesperson) did it, but my people couldn’t get it to work," or "We spent 6 months trying to implement this tool effectively, but we still have to do most of our testing manually," or "It takes too long to get everything working properly. It takes less time just to test manually." The end result, all too often, is that the tool ends up on the shelf as just another "purchasing mistake".

The purpose of this document is to provide the reader with a clear understanding of what is actually required to successfully implement cost-effective automated testing. Rather than engage in a theoretical dissertation on this subject, I have endeavored to be as straightforward and brutally honest as possible in discussing the issues, problems, necessities, and requirements involved in this enterprise.

--------------------------------------------------------------------------------

What is "Automated Testing"?

Simply put, "Automated Testing" means automating the manual testing process currently in use. This requires that a formalized "manual testing process" currently exists in your company or organization. Minimally, such a process includes:

- Detailed test cases, including predictable "expected results", which have been developed from Business Functional Specifications and Design documentation

- A standalone Test Environment, including a Test Database that is restorable to a known constant, such that the test cases are able to be repeated each time there are modifications made to the application

If your current testing process does not include the above points, you are never going to be able to make any effective use of an automated test tool.
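To make the "restorable to a known constant" requirement concrete, here is a minimal sketch (not from the original article; the schema and data are invented) that rebuilds a test database from a fixed baseline before each run, using Python's built-in sqlite3 module:

```python
import sqlite3

# Baseline state for the Test Database (invented example schema and rows).
BASELINE_SQL = """
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
INSERT INTO accounts VALUES (1, 100), (2, 250);
"""

def restore_test_database():
    """Rebuild the test database from the known baseline before each run."""
    db = sqlite3.connect(":memory:")
    db.executescript(BASELINE_SQL)
    return db

# Every run starts from the same constant state, so test cases are repeatable.
db = restore_test_database()
print(db.execute("SELECT SUM(balance) FROM accounts").fetchone()[0])  # 350
```

Because every run starts from the same baseline, a test case's expected results stay valid no matter how many times the suite is executed.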

So if your "testing methodology" just involves turning the software release over to a "testing group" comprised of "users" or "subject matter experts" who bang on their keyboards in some ad hoc fashion or another, then you should not concern yourself with testing automation. There is no real point in trying to automate something that does not exist. You must first establish an effective testing process.

The real use and purpose of automated test tools is to automate regression testing. This means that you must have or must develop a database of detailed test cases that are repeatable, and this suite of tests is run every time there is a change to the application to ensure that the change does not produce unintended consequences.

An "automated test script" is a program. Automated script development, to be effective, must be subject to the same rules and standards that are applied to software development. Making effective use of any automated test tool requires at least one trained, technical person – in other words, a programmer.

--------------------------------------------------------------------------------

Cost-Effective Automated Testing

Automated testing is expensive (contrary to what test tool vendors would have you believe). It does not replace the need for manual testing or enable you to "down-size" your testing department. Automated testing is an addition to your testing process. According to Cem Kaner, in his paper entitled "Improving the Maintainability of Automated Test Suites" (www.kaner.com), it can take 3 to 10 times as long (or longer) to develop, verify, and document an automated test case than to create and execute a manual test case. This is especially true if you elect to use the "record/playback" feature (contained in most test tools) as your primary automated testing methodology. Record/playback is the least cost-effective method of automating test cases.

Automated testing can be made to be cost-effective, however, if some common sense is applied to the process:

- Choose a test tool that best fits the testing requirements of your organization or company. An "Automated Testing Handbook" is available from the Software Testing Institute (www.ondaweb.com/sti) which covers all of the major considerations involved in choosing the right test tool for your purposes.

- Realize that it doesn’t make sense to automate some tests. Overly complex tests are often more trouble than they are worth to automate. Concentrate on automating the majority of your tests, which are probably fairly straightforward, and leave the overly complex tests for manual testing.

- Only automate tests that are going to be repeated. One-time tests are not worth automating.

- Avoid using "Record/Playback" as a method of automating testing. This method is fraught with problems, and is the most costly (time-consuming) of all methods over the long term. The record/playback feature of the test tool is useful for determining how the tool is trying to process or interact with the application under test, and can give you some ideas about how to develop your test scripts, but beyond that its usefulness ends quickly.

- Adopt a data-driven automated testing methodology. This allows you to develop automated test scripts that are more "generic", requiring only that the input and expected results be updated. There are two data-driven methodologies that are useful. I will discuss both of these in detail in this paper.
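As a rough illustration of the data-driven idea (a generic sketch, not the author's WinRunner implementation; `apply_discount` is an invented stand-in for the application under test), the script below stays unchanged from release to release, while the data table carries all inputs and expected results:

```python
import csv, io

# Hypothetical function standing in for the application under test.
def apply_discount(price, percent):
    return round(price * (100 - percent) / 100, 2)

# All inputs and expected results live in data, not in the script.
# Updating the tests means editing this table; the script stays generic.
TEST_DATA = """case,price,percent,expected
basic,100,10,90.0
zero,50,0,50.0
full,80,100,0.0
"""

def run_data_driven_tests(data):
    """Run every test case in the table; return the names of failing cases."""
    failures = []
    for row in csv.DictReader(io.StringIO(data)):
        actual = apply_discount(float(row["price"]), float(row["percent"]))
        if actual != float(row["expected"]):
            failures.append(row["case"])
    return failures

print(run_data_driven_tests(TEST_DATA))  # [] when all cases pass
```

In practice the table would live in an external file or spreadsheet maintained by non-technical testers, while the generic driver script is maintained by the technical person the article calls for.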

--------------------------------------------------------------------------------

The Record/Playback Myth

Every automated test tool vendor will tell you that their tool is "easy to use" and that your non-technical user-type testers can easily automate all of their tests by simply recording their actions and then playing back the recorded scripts. This one claim is probably more responsible than any other for the automated test tool software now gathering dust on shelves in companies around the world. I would love to see one of these salespeople try it themselves in a real-world scenario. Here’s why it doesn’t work:

- The scripts resulting from this method contain hard-coded values which must change if anything at all changes in the application. The costs associated with maintaining such scripts are astronomical, and unacceptable.

- These scripts are not reliable, even if the application has not changed, and often fail on replay (pop-up windows, messages, and other things can happen that did not happen when the test was recorded).

- If the tester makes an error entering data, the test must be re-recorded. If the application changes, the test must be re-recorded.

- All that is being tested are things that already work. Errors in the application are encountered during the recording process (which is manual testing, after all). These bugs are reported, but a script cannot be recorded until the software is corrected. So what are you testing?

After about 2 to 3 months of this nonsense, the tool gets put on the shelf or buried in a desk drawer, and the testers get back to manual testing. The tool vendor couldn’t care less – they are in the business of selling test tools, not testing software.
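The hard-coded-values problem can be shown with a toy comparison (hypothetical field names and step tuples, not any real test tool's API): a "recorded" script bakes literal data and object names into code, while a data-driven version performs the same actions but reads its values from data:

```python
# A "recorded" script hard-codes every value and object name it touched,
# so any change to the data or the UI means re-recording the script.
def recorded_login():
    return [
        ("type", "txtUser", "jsmith"),    # literal test data baked in
        ("type", "txtPass", "secret99"),
        ("click", "btnLogin", None),
    ]

# A data-driven script performs the same actions, but field names and
# values come from data, so a change means editing a table, not code.
FIELD_MAP = {"user": "txtUser", "password": "txtPass"}

def data_driven_login(data):
    steps = [("type", FIELD_MAP[field], value) for field, value in data.items()]
    steps.append(("click", "btnLogin", None))
    return steps

# Both produce identical steps today; only one survives a data change cheaply.
print(data_driven_login({"user": "jsmith", "password": "secret99"}))
```

Testing a new login, or a renamed field, becomes a one-line data edit in the second version, while the first must be re-recorded from scratch.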
