
Automated Tests and Continuous Integration in Game Projects

By: Dag Frommhold

Many game projects are either significantly delayed or shipped in a rather buggy state. Certainly, this situation isn't unique to the games industry - for instance, according to the infamous "Extreme Chaos" report released by The Standish Group in 2001, more than 70% of all software projects are either cancelled or significantly exceed their planned development time and budget. However, since games represent a very complex case of software development where people skilled in rather different disciplines have to cooperate, one might argue that the development risks inherent in game projects are particularly high.

The reasons for delayed, bug-infested or even failed software projects are manifold, but it seems that, besides feature creep and shifting priorities, insufficient testing and quality assurance are recurring themes. In our experience, a large number of development studios rely entirely on manual testing of the underlying game engine, their production tools, and the game code itself, while automated processes are only rarely adopted. Similarly, in the 2002 GDC roundtable "By the Books: Solid Software Engineering for Games", only 18 percent of the attendees said that the projects they were working on employed automated tests.

We were first confronted with the notion of automated testing when, in the year 2000, customers of our middleware company - still very young at the time - complained about stability issues and bugs in our 3D engine. Until then, we had relied on manual tests performed by the developers after implementing a new feature, and on the reports of our internal demo writers, who were using these features to create technology demos for marketing purposes. After thoroughly analyzing the situation, we came to the conclusion that our quality problems were mostly related to the way we were testing our software:

* Manual testing wasn't performed thoroughly enough, because it simply took too much time. Whenever code was changed or added, a defined set of manual tests would have had to be executed to make sure the modifications hadn't introduced problems anywhere else. Manual testing therefore took more and more time, which led to frustration among the developers and reduced their motivation to actually execute the tests. Additionally, the amount of work involved in testing made developers reluctant to improve or optimize existing code.

* When developers manually tested their own code, they often showed a certain (subconscious?) tendency to avoid the most critical test cases, so the scenarios a problem was most likely to occur in were also the situations least likely to be tested.

As a result, we decided to adopt automated testing, starting with a new component of our SDK which we had just started to develop. The results were encouraging, so we finally expanded our practice of automated testing to all SDK components.

Test Frameworks
Automated tests became popular with Extreme Programming, a collection of methodologies and best practices popularized by Kent Beck and Martin Fowler. Generally, the term refers to code or data used to verify the functionality of subsets of a software product without any user interaction. This may range from tests for individual methods of a specific class (commonly called unit tests) to integrated tests for the functionality of a whole program (functional tests).

In order to facilitate the creation of automated tests, there are a number of open-source unit testing frameworks, such as CppUnit (for C++ code) or NUnit (for .NET code). These testing frameworks provide a GUI for selecting the tests to run and for reporting the test results. Depending on your project, it may be necessary to extend these frameworks with functionality your game requires, such as support for multiple target platforms.
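As a sketch of how such a framework is driven, the entry point of a stand-alone test executable using CppUnit's text-mode runner (a simple stand-in for the GUI runners mentioned above) might look as follows; the test classes themselves register with the framework separately, as the example after the next paragraph shows. Returning a non-zero exit code when a test fails is what later allows an automated build to be flagged as broken.

#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

// Entry point of a stand-alone test executable. All test suites that were
// registered via CPPUNIT_TEST_SUITE_REGISTRATION are collected from the
// global factory registry and executed in one go.
int main()
{
    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());

    // run() prints the results and returns true if every test passed.
    // A non-zero exit code lets a build script detect failed tests.
    bool allTestsPassed = runner.run();
    return allTestsPassed ? 0 : 1;
}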

In the context of such a testing framework, a single unit test corresponds to one function, and multiple unit tests are aggregated in test classes along with methods for initializing and de-initializing a test (e.g. loading and unloading a map). These test classes can in turn be located either in a separate executable - for instance, when the code to be tested resides in its own DLL - or in the main project itself. Regardless of this, test classes should always be stored in separate files from your production code, so they can conveniently be removed from builds intended for deployment.
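To illustrate, a test class following CppUnit's conventions might look like the sketch below. The Map class, its methods, and the test asset path are hypothetical placeholders for whatever engine component is actually under test; setUp() and tearDown() perform the per-test initialization and de-initialization mentioned above, and each test method verifies one small piece of functionality.

// MapStreamingTest.cpp - lives with the tests, not in the production sources.
#include <cppunit/extensions/HelperMacros.h>
#include "Map.h"   // hypothetical engine class, used here for illustration only

class MapStreamingTest : public CppUnit::TestFixture
{
    CPPUNIT_TEST_SUITE(MapStreamingTest);
    CPPUNIT_TEST(testMapIsLoaded);
    CPPUNIT_TEST(testEntityCount);
    CPPUNIT_TEST_SUITE_END();

public:
    // Called by the framework before each test method.
    void setUp()
    {
        m_map = new Map();
        m_map->load("testdata/small_level.map");   // assumed test asset
    }

    // Called by the framework after each test method.
    void tearDown()
    {
        m_map->unload();
        delete m_map;
        m_map = 0;
    }

    void testMapIsLoaded()
    {
        CPPUNIT_ASSERT(m_map->isLoaded());
    }

    void testEntityCount()
    {
        // The test asset is assumed to contain exactly 12 entities.
        CPPUNIT_ASSERT_EQUAL(12, m_map->getEntityCount());
    }

private:
    Map* m_map;
};

// Registers the suite with the global factory registry, so the test
// executable picks it up without any further wiring.
CPPUNIT_TEST_SUITE_REGISTRATION(MapStreamingTest);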


