Software QA FYI - SQAFYI

Complete SOA Testing Defined

This year, Service-Oriented Architecture (SOA) will move from the whiteboard to the diving board. Gartner estimates that 60% of enterprise development groups are planning or in the midst of an SOA initiative for 2006, and that by 2009, more than 80% of development and integration budgets will be dedicated to applications delivered as an SOA.
Services-based delivery of business applications will be advanced over the next five years by the huge benefits in customer responsiveness, business agility and development cost reduction that an SOA enables. By allowing companies to decouple business logic, and to consume both legacy apps and new components of logic at runtime as workflows, new applications can be delivered faster and cheaper. But can the business rely on these complex new SOA apps for mission-critical business functions?
Application quality will become a primary governor on the enterprise's success in achieving the promise of SOA. With so many interconnected parts making up applications that can be delivered virtually anywhere, testing is no longer a mere matter of finding bugs within the developer's code, or problems that occur on a given user interface. Software quality processes must evolve with the architecture, to genuinely test a business process and maintain context across the entire workflow.
There are still many differing opinions and concepts on what constitutes an effective test strategy for all of the distributed components that make up SOA applications. To date, most testing efforts have not kept pace with the needs of SOA.
To deliver quality SOA applications, enterprises need to achieve three key capabilities:
1. Test every technology layer. Test a business workflow across every heterogeneous technology of the SOA, at both the system and component level.
2. Involve the whole team in quality. Enable both developers and non-programmers to test continuously.
3. Return value from testing. Attain reusable and efficient testing practices that ensure business requirements are met, and contribute real value back to the organization.
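The first two capabilities can be sketched in code: the same business workflow is checked at the component level (each service in isolation) and at the system level (the assembled chain against a business requirement). This is a minimal illustration, not a real SOA test suite; every service and rule below is a hypothetical stand-in.

```python
# Sketch: testing one business workflow at both component and system level.
# All service names and business rules are hypothetical stand-ins.

def validate_order(order):
    """Component-level rule: an order needs a customer and a positive total."""
    return bool(order.get("customer")) and order.get("total", 0) > 0

def apply_discount(order):
    """Component-level rule: orders over 100 get 10% off."""
    total = order["total"]
    return {**order, "total": total * 0.9 if total > 100 else total}

def process_order(order):
    """System-level workflow: validation then pricing, as a service chain."""
    if not validate_order(order):
        raise ValueError("invalid order")
    return apply_discount(order)

# Component-level checks exercise each service in isolation...
assert validate_order({"customer": "acme", "total": 50})
assert not validate_order({"customer": "", "total": 50})
# ...while a system-level check verifies the business requirement end to end.
assert process_order({"customer": "acme", "total": 200})["total"] == 180
```

Because the component checks and the workflow check are separate, a failure immediately tells you which layer broke, which is the point of testing every technology layer rather than only the assembled result.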

Challenges of SOA testing
Why is SOA testing such a different beast than previous forms of browser, client/server and mainframe testing? Many of the benefits of SOA become challenges to testing an SOA application.
1. SOA is distributed by definition
Services are based on heterogeneous technologies. No longer can we expect to test an application that was developed by a unified group, as a single project, sitting on a single application server and delivered through a standardized browser interface. The ability to string together multiple types of components to form a business workflow allows for unconstrained thinking from an architect's perspective, and paranoia from a tester's perspective.
In SOA, application logic is in the middle tier, operating within any number of technologies, residing outside the department, or even outside the company.
Think of today's service components as "headless" applications (most with no user interface) that may rely on other services, or be consumed by other services, to make up any number of business workflows within an SOA. You can rigorously test many of these components as they are developed in isolation, but what happens when they interact at deployment time? It becomes much harder to predictably find the source of a problem when you cannot get direct insight into why two or more disparate technologies do not create a cohesive application.
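One common way to test a headless component before its real dependencies exist is a test double: a scripted stand-in for the downstream service that also records how it was called. The sketch below assumes hypothetical `FulfillmentService` and inventory-service names; the technique, not the names, is the point.

```python
# Sketch: a "headless" service that consumes another service. A test double
# replaces the real downstream dependency so the interaction can be checked
# without deploying the whole workflow. All names are hypothetical.

class InventoryClient:
    """Stands in for a remote inventory service (no UI, network-facing)."""
    def reserve(self, sku, qty):
        raise NotImplementedError("real implementation calls a remote service")

class FulfillmentService:
    """Headless middle-tier service that relies on InventoryClient."""
    def __init__(self, inventory):
        self.inventory = inventory

    def fulfill(self, sku, qty):
        if not self.inventory.reserve(sku, qty):
            return "backordered"
        return "shipped"

class StubInventory(InventoryClient):
    """Test double: records calls and returns a scripted answer."""
    def __init__(self, available):
        self.available = available
        self.calls = []

    def reserve(self, sku, qty):
        self.calls.append((sku, qty))
        return self.available

stub = StubInventory(available=False)
svc = FulfillmentService(stub)
assert svc.fulfill("SKU-1", 3) == "backordered"
assert stub.calls == [("SKU-1", 3)]   # the interaction itself is verified
```

Recording the calls matters as much as the return value: it is the interaction between components, not either component alone, where SOA defects tend to hide.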
The post-SOA world offers a vast array of options for how you assemble or consume business workflows across multiple technologies, both inside and outside your core team. In an SOA, more points of connection = an exponential increase in possible failure points.
2. Need to ensure high service levels and excellent exception management
Quality has become a governor on the enterprise's success in delivering SOA applications. Ensuring quality in a singularly developed application was difficult enough that an entire discipline, QA, grew up around it. With an SOA, application "stress points" can be anywhere, and will change as individual services are added to the workflow or changed.
There is a quality chasm between Unit and Acceptance Testing. Finding the root causes of problems across the middle tiers of SOA applications is difficult. Testing a front end user interface becomes irrelevant when it provides no insight into what is actually happening on the back end. And expecting developers to find missed requirements by conducting more unit testing at the code level doesn't get the team there either - it may find some bugs in the component-level code, but it won't demonstrate why a business requirement isn't being met.
Services "wrappers," for instance SOAP/WSDL around an existing RMI object, promise better interoperability by presenting a common set of controls, allowing legacy systems and custom components to be pulled together as steps in an SOA workflow. However, these wrappers may not map every aspect of the original component, making them very opaque to testing. If we are automating unit testing ("white box" testing) and acceptance/system testing ("black box" testing) as above, we are missing the area where most potential errors occur: the unpredictable interaction space between components.
3. Prioritizing new design vs. component reuse efforts
Companies don't implement an SOA strategy to try out the latest technology. They do it to attain new business capabilities. Complexity is driven into software by the natural process of competition, which forces the evolution of new business rules and logic into business systems. According to the 2005 Aberdeen report, "It's no surprise that the top factor for implementing SOA, which 50% of survey respondents cited, is development of new capabilities."
Timeline and budget both constrain quality, creating serious limits on the scope of functionality that can be tested using conventional means. In addition, the business must prioritize functionality as the proposed scope expands, so the project may not come together in the expected order.
No SOA is a "flip the switch" single technology change
In selecting an SOA approach, there are components that are simply not worth the money and effort to bring into the SOA world - for instance, a data feed that supplies a relatively unchanging piece of information to the business workflow. Whether the answer on some lower-priority technologies is "if it's not broken, don't fix it" or "it's just not worth changing," you will likely find yourself supporting and testing some relics in any SOA.
We know that to test SOA, you need to go far beyond merely testing a user interface or browser screen. Web services (WSDL/SOAP) are an important component for many SOAs, but if you're only testing web services, you are not likely testing the entire technology stack that makes up the application. What transactions are happening at the messaging (JMS) layer? Is the right entry being reflected in the database? In fact, many perfectly valid SOA applications house business logic entirely outside of web services - for instance a Swing UI talking to EJBs connected with messaging applications.
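Asserting below the UI can be sketched with in-memory stand-ins: a queue playing the role of a JMS destination and a dict playing the role of the database table, so the test can check what each middle tier actually did. The order workflow and field names here are hypothetical.

```python
# Sketch: asserting below the UI. An in-memory queue stands in for the JMS
# layer and a dict for the database, so the test can verify what each middle
# tier actually did, not just what a screen displayed. Names are hypothetical.
from queue import Queue

message_bus = Queue()   # stands in for a JMS destination
database = {}           # stands in for the system-of-record table

def place_order(order_id, amount):
    """Middle-tier logic: persist the order, then publish an event."""
    database[order_id] = {"amount": amount, "status": "NEW"}
    message_bus.put({"event": "OrderPlaced", "order_id": order_id})

place_order("ORD-7", 99.5)

# Is the right entry reflected in the database?
assert database["ORD-7"] == {"amount": 99.5, "status": "NEW"}
# Did the expected transaction happen at the messaging layer?
assert message_bus.get_nowait() == {"event": "OrderPlaced", "order_id": "ORD-7"}
```

Against a live SOA the same two assertions would run against a real queue browser and a real database query, but the shape of the test - one check per middle-tier effect - is unchanged.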
Are you ready to test? SOA offers great implementation advantages, but to ensure quality, you must deal with:
* a continuous work-in-progress,
* composed of heterogeneous components,
* developed by multiple teams or partners,
* and consumed by or delivered to multiple parties.

How can you consistently test, when you are trying to hit a moving target with fragile manual tests? The only way to overcome SOA project uncertainty is through highly reusable test automation that can talk to every middle-tier layer - whether your team has built it according to your overall strategy or not.
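One way to keep such automation reusable is to make it data-driven: each test case names a layer probe and an expected answer, so adding or changing a service means editing data, not rewriting the harness. The probe functions below are hypothetical placeholders for real calls into the web-service, messaging and database tiers.

```python
# Sketch: a reusable, data-driven harness applied across middle-tier layers.
# Each probe function is a hypothetical stand-in for a real layer check.

def probe_web_service():  return "OK"
def probe_messaging():    return "OK"
def probe_database():     return "OK"

cases = [
    ("web service layer", probe_web_service, "OK"),
    ("messaging layer",   probe_messaging,   "OK"),
    ("database layer",    probe_database,    "OK"),
]

def run(cases):
    """Return the names of the layers whose probe missed its expected result."""
    return [name for name, probe, want in cases if probe() != want]

assert run(cases) == []   # every layer answered as expected
```

Because the harness only knows "probe, expected result", it survives the moving target: when a service is swapped or a new layer appears, only the case table grows.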
