TestTalk: A Comprehensive Testing Language
By: Chang Liu
Abstract
Software tests are intellectual assets that are as valuable as source code. Over
the long term, maintainable software tests significantly lower a project's cost. It is very
difficult, however, to write maintainable software tests, especially executable ones.
Existing approaches - including natural languages, tabular formats, scripting and
programming languages, and several test description languages - are all problematic, as
briefly discussed in this paper. The solution we propose is TestTalk, a test description
language that provides mechanisms to specify software tests while separating the various
concerns of automated software testing. TestTalk is designed for testers. Software testing
concepts such as application states, scenarios, boundary-based test input selection, and
category-based test input selection are explicitly supported by TestTalk. TestTalk tests
are automated by a transformational approach.
The Problem
To develop high-quality software products, software teams invest significant portions of their resources on
designing, implementing, and sometimes automating software tests. It is desirable but difficult to extend the
life spans of these software tests. Changes in application implementation, platforms, or testing
environments can all cause existing software tests to become invalid.
Software tests are traditionally described in natural languages, tabular formats, scripting or programming
languages; each approach presents problems. Tests in natural languages or tabular formats are difficult to
automate. Tests in general-purpose scripting or programming languages are automated but sensitive to
application implementation changes, especially user interface changes [2]. They are also difficult to port to
other platforms. Tests in special scripting languages provided by testing tools are usually tied to those
tools and cannot be used with other testing tools.
The root of the problem is that software test design and implementation are mixed with software test
automation. The final test description contains not only a description of what the test does but also
inseparable automation details, and these inseparable details cause all kinds of maintenance
problems.
Our Solution: TestTalk
Overview
To solve the software test description problem, we designed a comprehensive testing language, called
TestTalk [1]. TestTalk is a platform- and testing tool-independent language that allows testers to specify
software tests in domain-specific terms. Tests are described only in terms of what they do, not how they are
automated. Automating TestTalk tests is handled in a separate section using transformation rules. The
benefit of this separation is that later changes in platforms or testing tools will not force test descriptions
to change.
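The excerpt does not show TestTalk's actual syntax, but the transformational approach it describes can be sketched in Python: a test is a list of abstract steps, and a pluggable rule set translates each step into a tool-specific command. All names below (the actions, the rule sets, the command strings) are invented for illustration, not taken from TestTalk.

```python
# Sketch of the transformational approach (hypothetical names):
# a test is a list of abstract (action, args) steps; a rule set
# maps each action to a tool-specific command.

def transform(test_steps, rules):
    """Translate abstract test steps into tool-specific commands."""
    return [rules[action](*args) for action, args in test_steps]

# One rule set per platform/tool; swapping rule sets ports the
# same test description without modifying it.
gui_rules = {
    "open_page":  lambda url: f"gui_tool.navigate('{url}')",
    "check_text": lambda s:   f"gui_tool.assert_visible('{s}')",
}
cli_rules = {
    "open_page":  lambda url: f"curl -s {url}",
    "check_text": lambda s:   f"grep -q '{s}' response.html",
}

# The test description itself mentions neither platform nor tool.
test = [("open_page", ["http://example.com"]),
        ("check_text", ["Welcome"])]

print(transform(test, gui_rules)[0])  # gui_tool.navigate('http://example.com')
print(transform(test, cli_rules)[0])  # curl -s http://example.com
```

A platform or tool change then means writing a new rule set, while every existing test description stays untouched.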
When using TestTalk, testers still have full access to the underlying test automation tool and can even
work in the same fashion as before, switching to TestTalk only when they need its functionality. In
this sense, TestTalk is a big plus with little negative impact.
Design goals
Our goal is to let TestTalk tests live as long as their applications. As long as what a test does does not
change, no change in application implementation, platform, or testing environment should require any
modification of its description. We want to provide language mechanisms that absorb such changes so that
individual test descriptions need no modification. When changes are inevitable, for example when
application requirements change, we want test descriptions to be easy to change. In other words, we want
better maintainability of test descriptions. Our approach is to separate test design from test automation
and to enable testers to describe software tests in terms of what the tests do, not in terms of how to
automate them. All platform-, application implementation-, and testing environment-dependent parts of a
software test are kept in sections separate from the test case description sections.
In preparation for designing TestTalk, we conducted a survey on software test description techniques [3].
We discovered that the most expressive techniques are the most popular. To make TestTalk acceptable to
all testers, we tried to ensure its expressiveness by allowing testers to define their own TestTalk dialects,
by natively supporting software testing concepts, and by not imposing any constraints on underlying
testing tools. Concepts in software testing, such as application states, boundary-based testing, and test
input categories and sampling, are explicitly supported by various TestTalk language mechanisms.
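Category-based input selection, one of the concepts named above, can be illustrated with a short Python sketch. The categories below (profile-name lengths, certificate formats) are invented examples for the secure-web-browser domain, not TestTalk's actual variable sections.

```python
from itertools import product

# Category-based test input selection: each test variable is
# partitioned into categories (here, one representative value per
# category, e.g. short/typical/boundary-length profile names), and
# test inputs are built by sampling one value from each variable.
categories = {
    "profile_name": ["a", "normal_user", "x" * 64],  # short / typical / long
    "cert_format":  ["PKCS12", "PEM"],
}

def sample_inputs(categories):
    """All combinations of one representative per category."""
    names = sorted(categories)
    return [dict(zip(names, combo))
            for combo in product(*(categories[n] for n in names))]

inputs = sample_inputs(categories)
print(len(inputs))  # 3 profile names x 2 cert formats = 6 combinations
```

Boundary-based selection works the same way, with the representatives chosen at and around the boundaries of each variable's valid range.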
TestTalk is a software test description language. It does not provide test automation by itself. TestTalk
works with other testing tools to perform automated testing. We have been careful not to impose any
constraints on the power of the underlying testing tools. All features of the underlying testing tools remain
accessible when TestTalk is used.
Testers who work on test automation usually pick a testing tool and often become used to it. To increase
their acceptance of TestTalk, we want to minimize changes in testers' work style should they decide to
adopt TestTalk. For this purpose, we introduced several mechanisms to allow testers to work in their own
styles. These mechanisms will be introduced in the following sections.
Key features
A TestTalk description consists of several sections. Major sections are setting sections, dialect sections, test
scenario or action list sections, test suite sections, transformation rule set sections, and variable sections.
Setting sections and transformation rule set sections contain all non-portable information. Setting sections
have platform- or testing tool-specific parameters. Transformation rule sets define how actions are
implemented on a particular platform with a particular testing tool. Test scenario or action list sections and
test suite sections have individual test descriptions that are portable and maintainable. Dialect sections
define dialects that can be used to define test scenarios and test cases. Variable sections define which test
variables exist and in which directions each can vary. This information helps test case generation.
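To make the section breakdown concrete, here is one hypothetical in-memory shape for a TestTalk description, grouping the portable sections apart from the non-portable ones. Every field name and value is an assumption for illustration only.

```python
# Hypothetical shape of a TestTalk description. Settings and
# transformation rule sets hold all non-portable information;
# dialect, scenario, suite, and variable sections stay portable.
description = {
    "settings":  {"platform": "win32", "tool": "gui_tool"},  # non-portable
    "rules":     {},  # action -> tool-specific implementation (non-portable)
    "dialect":   {"CreateProfile": 1, "DeleteProfile": 1},   # word -> arity
    "scenarios": {"smoke": [("CreateProfile", ["alice"]),
                            ("DeleteProfile", ["alice"])]},
    "suites":    {"nightly": ["smoke"]},
    "variables": {"profile_name": ["alice", "bob"]},
}

# A platform or tool change touches only "settings" and "rules";
# everything in `portable` survives unmodified.
portable = {k: description[k]
            for k in ("dialect", "scenarios", "suites", "variables")}
print(sorted(portable))  # ['dialect', 'scenarios', 'suites', 'variables']
```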
The following sections have examples for several key features of TestTalk. A secure web browser will be
used as an example application-under-test to illustrate the language features. The secure web browser
maintains a user profile for each user. Users store certificates in their user profiles. A certificate contains a
public/private key pair that can be used to authenticate the user to a web site.
Dialects allow test descriptions in domain-specific terms
TestTalk users can choose a set of terms that are most suitable to their application domains and describe
software tests in these terms. For example, to test the secure web browser, a tester can define her own
dialect for this particular application. As shown in Figure 1, this dialect has four words: CreateProfile,
OpenSecureSite, DeleteProfile, and ImportCert. Usages of these four words are also defined in the Dialect
section. For example, the word CreateProfile always takes one parameter, which is a profile name. With
this dialect, the tester can then write her test scenarios. One simple scenario is shown in Figure 1.
As we can tell from this example, the scenario is very readable. The fact that dialects are defined in the
context of application domains ensures readable test descriptions.
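Figure 1 is not reproduced in this excerpt, so the following Python sketch only approximates the idea: the four dialect words with declared parameter counts, and a check that scenarios use them correctly. The article states only CreateProfile's arity; the other counts, and the scenario steps themselves, are assumptions.

```python
# The four dialect words from the example. CreateProfile's single
# parameter (a profile name) is from the article; the other
# parameter counts are assumed for illustration.
dialect = {"CreateProfile": 1, "OpenSecureSite": 1,
           "ImportCert": 1, "DeleteProfile": 1}

def check_scenario(steps, dialect):
    """Reject steps that use unknown words or the wrong arity."""
    for word, args in steps:
        if word not in dialect:
            raise ValueError(f"unknown word: {word}")
        if len(args) != dialect[word]:
            raise ValueError(f"{word} expects {dialect[word]} parameter(s)")
    return True

# A hypothetical scenario in this dialect, readable in domain terms.
scenario = [("CreateProfile",  ["alice"]),
            ("ImportCert",     ["alice.p12"]),
            ("OpenSecureSite", ["https://example.com"]),
            ("DeleteProfile",  ["alice"])]

print(check_scenario(scenario, dialect))  # True
```

Because every word comes from the application domain, the scenario reads as a sequence of user-level actions rather than tool commands.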