Test Automation: An Architected Approach
By: Dan Young
Abstract
In the world of automated testing, everyone throws around buzzwords like “data-driven,”
“data flows” and “error handling,” but what does it take to produce automation
that is efficient, maintainable and usable?
Successful automated testing requires a considerable financial investment. Simply
installing an automated testing tool and recording scripts may jumpstart the initial
automated testing effort, but this approach will become difficult to maintain and
therefore more expensive.
A more cost-effective solution is an architected one. Providing the right
architectural framework for automation development means that the automation code
can be used for longer periods with less maintenance than a simple
record/playback solution. This translates into significant savings over the course of
longer projects, and the ability to test an application more thoroughly with less
employee overhead.
The particular architecture championed here is based on the idea of automation code as
an application in its own right. Code reuse, encapsulation (on many levels), recursion,
object-oriented concepts, testing maturity and usability (of automation by non-technical
business analysts) are covered. The result of this architecture is reliable automation
code with scripts that can last the entire life of the product (not just the project) and that
can be used and enhanced by business analysts who have little to no knowledge of
automated testing.
Introduction
Assumptions:
This document begins with the assumption that the reader will have some knowledge of
automated testing and/or programming.
Overview:
Automated testing can be a very valuable tool to gauge and improve the quality of any
software product. If it were as simple as recording and playing back some test scripts,
every company would have a vast array of in-depth test suites covering their products
from front to back. They don't. This document explores some of the pitfalls of
automated testing and presents an architectural framework that has delivered proven
results.
When Good Automation Projects Go Bad
There are many things that can derail an automation project. To start with, I blame the
companies who make commercial automation software. When an automated test tool
vendor comes to your company, they want, first and foremost, to sell you on their
software. They seem to believe that the best way to do this is to make you believe that
by purchasing and installing their software, and then pressing a few simple buttons,
you'll have a robust test suite that will meet your needs.
The second group of people that must bear a large part of the burden for this type of
derailment is the management that buys in to the sales pitch. Certain managers hear
how simple something can be, and they want it to be true so badly that they start
believing it. If you're one of those managers, be very clear: if anyone tells you that you
can have your automated test suite up and running in a couple of days, don't believe
them.
The third group that has to take responsibility for the derailment of potentially good
automation projects is people like me, the ones who use the tools directly. Often we are
so caught up with deadlines and management expectations that we fail to research and
implement the tools to their maximum benefit.
Perhaps the most difficult obstacle to good automation is the mindset that automation is
merely a part of the overall development cycle. On some level this is true, but if you
want automation that will be stable and maintainable through the life of the product and
not just something that you throw out when the current project is over, automation must
be approached with a broader view. Automation that will last through the life of the
product needs to be approached as a software development project in its own right.
This paper could take the whole Rational Unified Process and correlate it piece by
piece, but that would be tedious and out of scope. Instead I'm going to focus on a few
key ideas that will go the farthest toward ensuring a successful automation project.
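The difference between a throwaway recorded script and automation treated as software in its own right can be sketched in a small, self-contained Python example. The application, screen names, and credentials below are hypothetical stand-ins, not any particular tool's API:

```python
# Simulated application under test: a hypothetical stand-in for whatever
# a real automation tool would drive (GUI, web page, or API).
class LoginScreen:
    VALID = {"analyst": "secret"}

    def submit(self, username, password):
        # Returns the name of the screen the application lands on.
        return "home" if self.VALID.get(username) == password else "error"

# Record/playback style: literal hard-coded steps, duplicated in every
# script that needs to log in. A change to the login flow breaks them all.
def recorded_login_test():
    screen = LoginScreen()
    return screen.submit("analyst", "secret") == "home"

# Architected style: the action is encapsulated once and reused, so a
# change to the login flow is fixed in one place.
def login(screen, username, password):
    """Single point of maintenance for the login flow."""
    return screen.submit(username, password)

def run_login_tests(cases):
    """Data-driven runner: each case is (username, password, expected)."""
    return [login(LoginScreen(), u, p) == expected for u, p, expected in cases]

# New cases are added as data, not as new code.
cases = [
    ("analyst", "secret", "home"),   # valid credentials
    ("analyst", "wrong", "error"),   # invalid password
]
print(run_login_tests(cases))  # → [True, True]
```

The point of the sketch is not the trivial logic but the shape: the encapsulated `login` helper and the data-driven runner are the kind of structure a recorded script never has, and they are what make the code maintainable across releases.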
Planning an Automation Project
The best automation is accomplished when approached and justified as a software
project in its own right. I often describe my job to friends and family as “I write a
program within a program that tests another program.” This is usually the point where
people roll their eyes and decide to give up understanding what I do. Anyone who has
read this far probably understands the above statement and realizes that automation is a
form of development. Below are a few key elements of a software project that are
essential to a good automation project.
Requirements:
No successful software company would think of asking a developer to just sit down and
create a software tool that does what the customer needs, and yet this approach is often
taken with automated testing. Often, the business analysts who plan and
conduct the manual tests also double as automation engineers. As a combined
analyst/engineer, that person is expected to simply understand what his or her customers
(e.g. product managers, project managers and development staff) need, and provide
automated testing that meets those needs.
The same processes that are used to generate requirements for the software project that
your company is engaged in should be used to generate automation requirements. The
automation requirements can be as formal and specific as the software requirements, or
they can be as informal as agreeing what modules or portions of the application should
and should not be automated and which paths particular test cases will take through the
application.
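Even the informal end of that spectrum benefits from being written down as structured data rather than prose. The sketch below is one hypothetical way to capture such an agreement; the module names, test-case IDs, and field names are illustrative, not a standard format:

```python
# Hypothetical informal automation requirements: which modules are in
# scope, which are not, and which path each test case takes through
# the application. All names here are made up for illustration.
automation_requirements = {
    "in_scope": ["login", "order_entry", "reporting"],
    "out_of_scope": ["admin_console"],  # agreed with the project manager
    "test_paths": {
        "TC-001": ["login", "order_entry"],
        "TC-002": ["login", "reporting"],
    },
}

def paths_within_scope(reqs):
    """Verify every planned test path touches only in-scope modules."""
    scope = set(reqs["in_scope"])
    return all(set(path) <= scope for path in reqs["test_paths"].values())

print(paths_within_scope(automation_requirements))  # → True
```

Because the agreement is data, it can be reviewed with the product and project managers like any other requirement, and checked mechanically before automation work begins.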
Balancing Personnel:
With the exception of very small companies, or very small projects, development tasks
are generally divided and assigned to the most qualified persons. A company may have
a system architect who directs coding standards and designs the application. There may
be specialists who bring particular expertise and serve mainly as resources, and then
there may be developers who use the artifacts created by the specialists and implement
the designs. In some cases portions of the coding are outsourced to other companies.
Generally, it seems that the Quality Assurance staff is just expected to wear all available
hats. Even in companies with larger QA departments, it's not uncommon for a single
person to be expected to design the tests, write the test plan, manually test the
application and write the automation.
A more logical and successful approach, even in QA departments as small as three
people, is to assign some staff to focus on manual testing (or the business-analyst-oriented
tasks) and assign others to focus on automated testing (or the more technically
oriented tasks).
This approach offers several advantages. First, it gives management better tools to
manage resources in a multi-project environment. Since the automation staff is not
bound to a particular project, a manager can balance needs between projects. By
leveraging this balance, a manager can ensure that the growth of automation is
consistent with the priority of each project, and not just left to a given staff member's
propensity to automate his or her work.
Second, this approach increases employee satisfaction and productivity by allowing
people to focus on areas of interest. I have rarely met anyone who truly enjoys both
Full article...