Non-Regression Test Automation
Automated testing looks like an ideal solution: replay your tests for each version of an application and avoid regressions. First steps with automation tools are generally encouraging, but…
A reality not so bright
In practice, this “ideal” is not always achieved. Depending on the project and context:
* The application’s features and UI are not stable enough. They change with each version, which requires manual updates to the automation scripts. The maintenance cost of test scripts becomes too high.
* Test data are not stable over the long term, and results are hard to verify. Analyzing each result takes as much time as executing the test.
* Visibility into real test coverage is poor. Automated tests exist, but knowing whether they are relevant to the regression risks introduced by recent changes is another matter. The tests run, but running them does not prevent regressions.
So, here is the reality as we see it with our clients: automated tests sometimes exist, and when they do, they cover no more than 20% to 40% of all tests!
An efficient test strategy is essential for the remaining manual tests: limited resources and time prevent testers from executing every test.
How to define a non-regression strategy
Identification of regression risks
It is essential to have a clear vision of the risks in order to define an efficient test strategy. Otherwise, selecting relevant tests is extremely hard.
However, this clear vision is not easy to obtain.
As explained in a previous article about misunderstandings between developers and testers, developers often fail to provide test teams with reliable information. As a result, there is no sure way to identify impacted features or risks.
Our technology takes a snapshot of each version of the application, then compares snapshots to precisely identify changes and their functional impact. This gives you a clear picture of the risks to cover for each version, whether major or minor.
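The idea of comparing versions to locate change can be sketched roughly as follows (a minimal illustration with hypothetical data and unit names, not Kalistick's actual analysis): fingerprint each code unit per version, then diff the fingerprints.

```python
import hashlib

def fingerprint(sources: dict) -> dict:
    """Map each code unit (e.g. a method name) to a hash of its source."""
    return {name: hashlib.sha256(body.encode()).hexdigest()
            for name, body in sources.items()}

def changed_units(old: dict, new: dict) -> set:
    """Units that were added, removed, or modified between two versions."""
    old_fp, new_fp = fingerprint(old), fingerprint(new)
    return {name for name in old_fp.keys() | new_fp.keys()
            if old_fp.get(name) != new_fp.get(name)}

# Two hypothetical application "snapshots": unit name -> source text.
v1 = {"login": "check(user, pwd)", "pay": "charge(card)"}
v2 = {"login": "check(user, pwd, otp)", "pay": "charge(card)",
      "refund": "credit(card)"}
print(sorted(changed_units(v1, v2)))  # → ['login', 'refund']
```

Any unit whose fingerprint differs, appears, or disappears between the two snapshots is flagged as a potential regression risk.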
Identifying the right test cases
Beyond risk identification, you must select the test cases that effectively cover those risks, and selecting them becomes more complex as the number of tests grows.
Our technology is unique: for each test execution, the test’s real footprint on the application is recorded. We accurately identify what each test actually exercises in the application (the executed code). We call this the “Test Learning System”, and it is fully automated.
By correlating this footprint with version analysis, our platform can identify which scenarios are relevant for covering the regression risks.
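At its core, this correlation is a set intersection between each test's recorded footprint and the changed code. A minimal sketch, with made-up test and module names:

```python
def select_tests(footprints: dict, changed: set) -> set:
    """Keep only the tests whose recorded footprint touches changed code."""
    return {test for test, covered in footprints.items() if covered & changed}

# Hypothetical footprints: test name -> set of code units it executes.
footprints = {
    "test_login_ok": {"login", "session"},
    "test_payment":  {"pay", "cart"},
    "test_refund":   {"refund", "pay"},
}
changed = {"login", "refund"}  # output of the version analysis
print(sorted(select_tests(footprints, changed)))
# → ['test_login_ok', 'test_refund']
```

Tests whose footprint never touches a changed unit (here, `test_payment`) can be safely deprioritized for this version.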
Prioritizing tests regarding business issues
Identifying the right tests is a good start, but it may also be necessary to prioritize them. When time runs out and you cannot run all the tests, you had better be sure you started with the right ones. This is Risk-Based Testing.
Kalistick’s platform lets you assign a business-criticality level to each functional module of the software. That level is then cross-referenced with test footprints and code modifications to select and prioritize test scenarios.
An integrated wizard also lets you refine the strategy until the selected scenarios can be executed within the allocated time by the available testers.
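The selection-and-prioritization step described above can be approximated like this (a simplified sketch with hypothetical durations and criticality levels, not the platform's actual algorithm): score each relevant test by the criticality of the changed modules it covers, then fill the time budget greedily.

```python
def prioritize(tests, criticality, footprints, changed, budget_minutes):
    """Score each relevant test by the highest business criticality among
    the changed modules it covers, then greedily fill the time budget."""
    scored = []
    for name, duration in tests.items():
        hit = footprints[name] & changed
        if hit:
            scored.append((max(criticality[m] for m in hit), name, duration))
    scored.sort(reverse=True)  # most critical first
    plan, used = [], 0
    for _score, name, duration in scored:
        if used + duration <= budget_minutes:
            plan.append(name)
            used += duration
    return plan

# Hypothetical data: durations in minutes, criticality on a 1-3 scale.
tests = {"test_login_ok": 10, "test_refund": 15, "test_payment": 5}
criticality = {"login": 3, "refund": 2, "pay": 1, "session": 1, "cart": 1}
footprints = {"test_login_ok": {"login", "session"},
              "test_refund": {"refund", "pay"},
              "test_payment": {"pay", "cart"}}
changed = {"login", "refund"}
print(prioritize(tests, criticality, footprints, changed, 30))
# → ['test_login_ok', 'test_refund']
```

Shrinking the budget to 12 minutes drops `test_refund` from the plan while keeping the most critical scenario first, which mirrors the wizard's trade-off between time and risk coverage.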
Trustworthy test coverage indicators
During the test phase, Kalistick’s platform provides a detailed view of risk-coverage progress by identifying tested and untested elements. Gaps in coverage can thus be spotted and closed before going live to production.
This view of coverage highlights the complementarity between automated and manual tests. After an automated test campaign, it identifies the remaining areas to be tested as well as the most relevant manual tests.
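Identifying the untested remainder amounts to subtracting everything the executed tests touched from the set of changed code units. A small illustrative sketch, again with hypothetical names:

```python
def coverage_gaps(changed: set, executed_footprints: list) -> set:
    """Changed code units not touched by any executed test."""
    covered = set().union(*executed_footprints) if executed_footprints else set()
    return changed - covered

# Hypothetical campaign: footprints of the tests actually executed so far.
changed = {"login", "refund", "pay"}
executed = [{"login", "session"}, {"pay", "cart"}]
print(sorted(coverage_gaps(changed, executed)))  # → ['refund']
```

Whatever remains in the gap set after the automated campaign is exactly what the manual testing effort should target.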
In conclusion, Kalistick makes it possible to define more efficient regression test strategies that balance time, cost, and risk coverage, and to adapt them to each business context, project, or version.