
Test Driven Development (TDD) Traps

By: Kenny Cruden

An area of constant debate in the software industry revolves around automation of tests and who takes part in their design, creation and maintenance. There are many blogs on the subject about who within the team should do this, and different scenarios have varying degrees of success for the team, product and company.

Overall, I believe there are core guidelines that should be considered and adhered to when approaching this topic.

The Current Approach from Industry

A common scenario I see is where a team, working diligently using Test Driven Development (TDD), has developers writing production code and building a test infrastructure at the unit and lower-level integration layers to prove their code and receive fast feedback. At the same time, and in order to complete the testing pyramid, QAs write test code to verify higher level integration scenarios that are driven via the UI, e.g. using webdriver to exercise page content and verifying that the user ends up on the right page with the right text/message/values.

This can progress well whilst the app is small and there isn't much functionality or many related tests. As it grows (even past a couple of pages on the UI), the team comes up against very real and difficult questions: "Do we have duplication of test coverage?" and "Is our test suite structured in a cohesive manner throughout the whole stack?"

Duplication

For the former question, if there are effectively two groups of people owning some tests but not all, a natural divide evolves. The developers trust that the QAs know what they have to test at the higher level and the QAs trust that the developers are testing 'the right stuff' at the lower level.

In theory this seems plausible; in practice, however, one of two things happens:

1) QAs spend most of their day reading and writing automated tests in order to fully understand what to add to the framework. While this may seem a good thing to some (the testers are writing tests all day, after all!), it impacts the rest of the essential tasks in the role, e.g. assisting in shaping stories with BAs, performing structured exploratory testing sessions, engaging with the Product Owner and ensuring their feedback is taken on board by the team. The QA spends most of their time to the right of the wall and much less to the left, losing a lot of the benefit that a QA brings to a team working iteratively.

2) QAs attempt to create a layer of verification without the required in-depth knowledge of the lower-level tests. This creates duplication and brings about the ice-cream cone anti-pattern.

An example is a standard client/server web app, with a story to add a page of account details for a customer to enter their name, address, DOB etc. Developers write unit tests to verify JavaScript validation of inputs in the browser (allowed number of chars, numbers, illegal chars), and integration tests to check server-side validation of content and that it is persisted. QAs write tests driving the browser for a full-stack validation, checking various combinations of inputs and asserting the account details are also saved.
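To make the lower layer of that example concrete, here is a minimal sketch of the kind of unit test the developers would own. The validation rules (a hypothetical 50-character limit and a small set of illegal characters) are illustrative assumptions, not taken from the article:

```python
import re
import unittest

# Hypothetical rules for the account-details name field:
# non-empty, at most 50 characters, no digits or markup characters.
MAX_NAME_LEN = 50
ILLEGAL = re.compile(r"[<>;0-9]")

def validate_name(name: str) -> bool:
    """The validation logic exercised directly, with no browser involved."""
    return 0 < len(name) <= MAX_NAME_LEN and not ILLEGAL.search(name)

class NameValidationTest(unittest.TestCase):
    def test_accepts_plain_name(self):
        self.assertTrue(validate_name("Ada Lovelace"))

    def test_rejects_empty_and_overlong(self):
        self.assertFalse(validate_name(""))
        self.assertFalse(validate_name("x" * 51))

    def test_rejects_illegal_characters(self):
        self.assertFalse(validate_name("Robert; DROP TABLE"))

if __name__ == "__main__":
    unittest.main()
```

Because these checks run against the logic itself, they give the fast feedback TDD relies on; re-running the same input combinations through the browser adds cost without adding confidence.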

A single webdriver test to ensure that the client is talking to the server is all the QAs needed to write; there is no need to recheck field validation or that the data is persisted in the database.
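The shape of that single top-of-stack check can be sketched without a browser at all. The sketch below stands in for a real webdriver run: a throwaway HTTP handler plays the part of the server, and one request confirms the wiring end to end. The endpoint, payload fields, and handler are all hypothetical, assumed only for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AccountHandler(BaseHTTPRequestHandler):
    """Stand-in for the real app's account-details endpoint."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        data = json.loads(body)
        # Validation and persistence are covered by the lower-level
        # tests; this layer only confirms the request round-trips.
        self.send_response(200 if "name" in data else 400)
        self.end_headers()
        self.wfile.write(b"saved")

    def log_message(self, *args):
        pass  # keep test output quiet

def single_wiring_check() -> int:
    """One end-to-end request: is the client talking to the server?"""
    server = HTTPServer(("127.0.0.1", 0), AccountHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/account",
        data=json.dumps({"name": "Ada", "dob": "1815-12-10"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        status = resp.status
    server.shutdown()
    return status

if __name__ == "__main__":
    print(single_wiring_check())
```

One such test per page or flow keeps the top of the pyramid thin; everything it does not assert is deliberately delegated downwards.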

Cohesion

Moving on to the latter question, "Is our test suite structured in a cohesive manner throughout the whole stack?", the scenario above leads to effectively two teams committing to one codebase. If both sets of people have a sufficiently high level of skill and experience in working with code, and there are other people able to take on the other duties described in 1), then problems can be minimised (they never completely go away though).

What does a sufficiently high level of skill and experience mean?

The team needs to be confident that code is being written in a maintainable format. Consideration has to be paid to existing tests, any refactoring of them based on the work in progress, the testing framework, libraries and patterns used, naming conventions etc. Referring to the example above, a more cohesive team would be aware of the need to have the single webdriver test at the top of the testing stack and that each layer has to contain different types of test.

Four Principles for Test Automation

Building a testing infrastructure is a complex process as it is continuously changing and evolving as the product under development does. This test code should be treated as a first class citizen, in the same manner as the production code.

With this in mind, and in acknowledgement of the discipline that is required in software development, I believe the following four statements are guidelines the team should follow when approaching test code:

1) Only work on test automation if you have the skills to work on production code if required.

2) Only work on production code if you have the skills to work on test automation if required.

3) If 1) or 2) does not hold for you, pair with someone who meets both until it does.

4) The team must own and contribute to test code, not certain individuals or disciplines within it.
