Secrets to Automated Acceptance Tests

By: Jeff Patton

Summary: Has your team been on the search for fully automated acceptance tests? Before you set out on that adventure, check out some of the accomplishments and perils behind the quest for complete automation, as explained by Jeff Patton in this week's column. Fully automated acceptance tests may seem like the solution to many problems, but you should know that they come with a few problems of their own.

In the years that I've been involved with agile development, I've noticed an ongoing quest for a holy grail: fully automated acceptance tests. There are some technical reasons why this quest is difficult, but more importantly there are good reasons why automating some types of tests too early can be a bad idea.

Unit Tests Verify Code Modules
Agile developers generally divide automated tests into two types: unit tests and acceptance tests.

It's becoming more common for developers to write unit tests as they write code. They usually write unit tests against the internal interfaces of the code modules being created. Writing tests first helps to specify the module being developed. Once they've written the code and the basic unit tests pass, developers commonly beef up those test cases with boundary conditions, incorrect input, and error handling.
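
As a sketch of that flow (the pricing module and apply_discount function here are hypothetical, invented for illustration), a developer might first write a test that specifies the module's interface, then add boundary and error cases once the basic test passes:

    import unittest

    from pricing import apply_discount  # hypothetical module under test

    class ApplyDiscountTests(unittest.TestCase):
        # Written first, this test specifies the module's interface.
        def test_applies_percentage_discount(self):
            self.assertEqual(apply_discount(200.0, percent=50), 100.0)

        # Added once the basic test passes: boundary conditions.
        def test_zero_and_full_discount(self):
            self.assertEqual(apply_discount(200.0, percent=0), 200.0)
            self.assertEqual(apply_discount(200.0, percent=100), 0.0)

        # Added later still: incorrect input and error handling.
        def test_rejects_out_of_range_percent(self):
            with self.assertRaises(ValueError):
                apply_discount(200.0, percent=150)

    if __name__ == "__main__":
        unittest.main()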

This practice generally works pretty well. When all developers write tests, nearly every line of code becomes supported by a body of automated unit tests that runs against the entire code base. Running those tests regularly, with every code check-in, confirms that the code still behaves as it did when the tests were written.

Unit Tests Help Avoid Bugs Getting into the Code Base
When a developer adds a code module or changes an existing one, he sometimes sees unexpected failures in tests that seem unrelated to the code he's working on. If you're a developer, these are the times when you probably just can't help grinning. You've been investing heavily in this unit-testing practice, and, in one quick run of a testing framework, that investment pays a dividend by showing the failing test and, with it, the approximate location of the bug you've unintentionally injected. The failing test in the module you weren't working on points out a code dependency you weren't aware of--one you can address before you check in the new code you've just written. Thus, you avoid injecting a bug that might or might not have been found later through other forms of testing.

Well-written unit tests cover a single module of code. They support design changes and improvements by verifying that all other parts of the code still work correctly after a change is made. Unit tests are pretty easy to write, and when you make a change, the impact on the test code base is usually limited to the tests that verify the code you're changing.

But unit tests have their limits. They work at a granular level--code module by code module. They're not easily readable by business people and aren't written at a high enough level to test full business-process flows.

Acceptance Tests Often Verify System Use
In an ideal agile-development world, the person specifying the software would write acceptance tests as executable requirements, and running those acceptance tests once the software is built would confirm that the requirements were met. Automating all acceptance tests has been a goal for many teams for years now. Some have even succeeded.

But things start to get a bit sticky when tests are written to control the software from the user interface. These sorts of tests may start up the application, log in as a test user, and drive through a series of screens, entering information and pushing buttons just as a user would. They're amazing things to watch, and having a body of these tests exercising your system does give you confidence that the system does indeed work.
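To make that concrete, here's a minimal sketch of such a test using Selenium WebDriver in Python; the URL, element IDs, and credentials are invented for illustration:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # Start the application and log in as a test user.
        driver.get("https://example.test/login")
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()

        # Drive through the screens, entering information and
        # pushing buttons just as a user would.
        driver.find_element(By.LINK_TEXT, "New Order").click()
        driver.find_element(By.ID, "quantity").send_keys("3")
        driver.find_element(By.ID, "submit-order").click()

        # Check a business-level outcome on the resulting screen.
        assert "Order confirmed" in driver.page_source
    finally:
        driver.quit()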

Acceptance Tests Are More Expensive
These acceptance tests can be more time-consuming and difficult to write. Because they simulate use, they often duplicate a lot of routine user activities, like logging in or opening a file. They often execute lengthy scripts that take the same actions a user would take at the keyboard and mouse.
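
One common response is to factor the routine steps into shared setup, as in this sketch (a pytest fixture, reusing the invented login details from above); note that every test still pays the cost of executing those steps against a live browser:

    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    @pytest.fixture
    def logged_in_driver():
        # The routine login is written once here, but each test
        # that uses this fixture still spends the time to run it.
        driver = webdriver.Chrome()
        driver.get("https://example.test/login")
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        yield driver
        driver.quit()

    def test_open_reports_screen(logged_in_driver):
        logged_in_driver.find_element(By.LINK_TEXT, "Reports").click()
        assert "Monthly Report" in logged_in_driver.page_source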

Automated acceptance tests are often written by different people with different skills, such as technical testers. They often write the tests at a different time than the code is written--frequently long after the code is done.

Running these tests isn't quick, either. In many organizations these tests take hours to run, so they're often run only at night or at fixed times during the day. And they're often run on code after it's checked into the code base, so they frequently don't prevent bugs from getting in; they just point out that the bugs are there.

Failed Acceptance Tests Often Deliver News Too Late and to the Wrong Person
For all these reasons, we don't get the happy feeling the developer gets when he sees a unit test fail. When an acceptance test fails, it's usually long after the offending code was checked in. In fact, a lot of other code may have been checked in since, which makes finding the offending code difficult. It's also not always clear who should find and fix the issue. It's not the person who wrote the test, if he's in a role that writes tests and not code. It's not clear which developer should fix the code, and, even if it were, that developer has probably moved on to something else, so now it's an interruption.

I see many organizations struggling to keep their acceptance-test code bases working. It's common to have many tests failing every day. While some tests are fixed during the day, more break the next. I've seen many teams just give up.

Acceptance Tests Can't Confirm That We'd Accept the Software
There's a simple reason why acceptance tests break more often than unit tests. Just like unit tests, they only verify what we understood when we wrote the tests. And herein lies the problem.

In my last column ("An Uncomfortable Truth about Agile Testing"), I invoked the old definitions of verification and validation, where verification confirms the product was built as specified and validation confirms the product is fit for use or delivers a desired benefit to its user. This last bit, validation, is often subjective. The decision of whether it's delivering benefit isn't something that can be asserted in an automated test; you need to see it and use it.

But that's not all.

Much software--especially commercial software--needs to be easily learnable, efficient to use, and aesthetically pleasing in order to deliver its desired benefit. Even if it's not commercial software--for example, something like your company's handmade, internal, time-tracking system--you'd still like it to have these qualities. But again, these are qualities that can't be verified by automated tests.

The corner I continue to see teams paint themselves into is one in which the team tries to automate acceptance tests before the software has been validated.

Acceptance Tests Written Early Break When You Do the Right Thing
A common pattern I see is acceptance tests--particularly those that run the application through the user interface--being written along with the code. They verify that the code was written as specified. Everything's checked in, and then some time later the system is shown to end users or business stakeholders. As you'd expect, they see opportunities for change and improvement. The change may be to move a couple of fields around the screen, to relocate links or buttons from one screen to another, or to completely change the workflow in hopes of making the application more efficient. There may be no underlying business-rule or system-behavior changes. But these are the sorts of software changes that cause a cascading break in potentially dozens of acceptance tests, as the sketch below illustrates.
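
Here's a sketch of how that cascade happens, continuing the earlier invented example: many tests hard-code the same screen flow, so relocating a single button breaks all of them even though no business rule changed.

    from selenium.webdriver.common.by import By  # fixture as in the earlier sketch

    # Dozens of tests each repeat the same hard-coded screen flow.
    def test_submit_small_order(logged_in_driver):
        logged_in_driver.find_element(By.LINK_TEXT, "New Order").click()
        logged_in_driver.find_element(By.ID, "quantity").send_keys("1")
        # Breaks if the submit button moves to a confirmation screen.
        logged_in_driver.find_element(By.ID, "submit-order").click()

    def test_submit_bulk_order(logged_in_driver):
        logged_in_driver.find_element(By.LINK_TEXT, "New Order").click()
        logged_in_driver.find_element(By.ID, "quantity").send_keys("500")
        # The same cosmetic change breaks this test too.
        logged_in_driver.find_element(By.ID, "submit-order").click()

Because each test walks the workflow step by step, a single rearrangement of the user interface ripples through every test that takes that path.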

You did the right thing by showing the software to your stakeholders, and your reward is both code to change and lots of tests to fix. No good deed goes unpunished.
