SOA Driven Testing?
By now, I suspect that most folks who are involved with designing, writing,
maintaining and/or supporting software have at least heard of the newest
addition to the industry's “buzz-acronym alphabet soup.” It's not XP
(Extreme Programming), OO (Object Oriented) or even TDD (Test Driven
Development). This time the buzz-acronym is SOA (Service Oriented
Architecture). And in fashion with many of the more recent buzz-acronyms,
the expanded phrase sheds little, if any, light on what the term
really means. When I checked in with a few friends of mine to make sure I
had my terminology straight, one of them pointed me to Martin Fowler’s
Blog where he writes…
“…one question I'm bound to be asked is ‘what do you think of
SOA (Service Oriented Architecture)?’ It's a question that's pretty
much impossible to answer because SOA means so many different
things to different people.
· For some SOA is about exposing software through web services…
· For some SOA implies an architecture where applications…
· For some SOA is about allowing systems to communicate over
some form of standard structure… with other applications…
· For some SOA is all about using (mostly) asynchronous
messaging to transfer documents between different systems…
I've heard people say the nice thing about SOA is that it separates
data from process, that it combines data and process, that it uses web
standards, that it's independent of web standards, that it's
asynchronous, that it's synchronous, that the synchronicity doesn't…
I was at Microsoft PDC a couple of years ago. I sat through a day's
worth of presentations on SOA - at the end I was on the SOA panel.
I played it for laughs by asking if anyone else understood what on
earth SOA was. Afterwards someone made the comment that this
ambiguity was also something that happened with Object
Orientation. There's some truth in that, there were (and are) some
divergent views on what OO means. But there's far less Object
Ambiguity than there is Service Oriented Ambiguity…”
Service Oriented Ambiguity?!? No *WONDER* I got confused sometimes
while reading the articles in CRN, SD-Times, CTO Source, TechRepublic and others about the technologies behind SOA! This is just one more reason I'm thrilled with my
choice to be a tester. Compared to figuring out all of the enabling technologies, testing SOA is a piece
of cake. Don’t get me wrong, testing SOA has its challenges, but we at least have some experience
with the concepts. Allow me to explain.
Let's start by taking a look at what SOA is conceptually all about from a tester’s point of view –
without creating a paper that is certain to win us a round of “Buzzword Bingo." Ignoring the hype
around the phrase and the acronym, SOA is nothing more than the most recent step in the natural
evolution of software and software development that started with the GOTO statement. I'm not being
cynical; I'm serious! The dreaded GOTO statement started the evolution of abstraction of code decades
ago. Some of you may remember being chastised for using the GOTO statement in line numbered
BASIC and upgrading to GOSUB-RETURN to save face before becoming empowered by Functions,
Procedures and eventually Java Beans or Objects. Even if your programming background doesn't
include first-hand experience with that evolution, you probably recognize all of these programming
concepts as methods of minimizing code redundancy and abstracting sections of code to maximize reuse.
This concept of abstraction and code re-use (the basic concept behind what Fowler called the Object
Ambiguity) is what paved the way for the software industry to think in terms of not just reusable
segments of code but eventually entire mini-applications that can be used in many different contexts to
provide the same, or similar, functionality. Possibly the most well-known of this breed of mini-applications,
as I think of them, are those that process credit card purchases over the web.
I'm sure that it's no surprise to anyone reading this article that once you get beyond the service
providers and resellers, there are really only a small handful of organizations that actually publish and
maintain the vast majority of credit card processing software. In fact, virtually all of us with Web Sites
that sell products (often referred to as B2C or Business to Consumer sites) simply “plug-in” to one of
those pieces of software (for a small fee, of course) to make our once-innocent web site into an
E-Commerce web site! Of course, this particular type of mini-application has its own buzz-term – it's
called a Web Service. Web Services have been around for several years and are actually the direct
predecessors, or maybe the earliest adopted subset, of SOA.
For years I struggled with the question of “What’s the difference between a Service and an Object on
Steroids?” It took me almost four years to navigate my way through the implementation technologies
and coding patterns to figure out that the fundamental difference is that Objects are programmer-centric
abstractions of code and Services are user- or business-centric abstractions of code. Basically, a
programmer may write code to reference a number of objects that the user is completely unaware of
while that user performs an activity, like logging into a secure web site. If, instead, the “log into a
secure web site” activity were to be written as a Service, it would be a single entity that accepted
certain input and responded with certain output. Not only is the user unaware of the Service, but the
programmer writing the application need only be aware of the format and contents of the input and
output parameters. In fact, SOA is really nothing more than building software applications in such a
manner as to be able to take advantage of Services, whether they are available via the web or the next
server down on the rack. Independent of all the ambiguity about technologies, protocols and degrees of
abstraction, that is really all there is to SOA.
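The distinction above can be sketched in code. This is a minimal, purely illustrative example (the names `login_service`, `_USERS` and the hashing scheme are assumptions, not any real system): the caller of the service sees only the input and output formats, while the programmer-centric objects doing the work stay hidden inside.

```python
# Hypothetical "log into a secure web site" Service. The caller's contract is
# only the shape of the request dict in and the response dict out; the
# internal objects (user store, hashing, session IDs) are invisible to it.

import hashlib
import uuid

# Internal, programmer-centric detail the service consumer never sees.
_USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}

def login_service(request: dict) -> dict:
    """Business-centric abstraction: one entity, defined solely by its
    input and output formats."""
    username = request.get("username", "")
    password = request.get("password", "")
    stored = _USERS.get(username)
    if stored and stored == hashlib.sha256(password.encode()).hexdigest():
        return {"status": "ok", "session_id": str(uuid.uuid4())}
    return {"status": "denied", "session_id": None}
```

A consuming application only needs to know that `{"username": ..., "password": ...}` goes in and `{"status": ..., "session_id": ...}` comes out; whether the service lives on the web or the next server down the rack is irrelevant to it.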
That said, there are several things about SOA that are going to present challenges that many testers are
not used to facing, at least not in the volumes and combinations that we will see with SOA. First,
testing Services in an SOA environment is fundamentally different from testing the Objects that
inspired them in at least one significant way. Objects were (and are), as we mentioned,
programmer-centric segments of code that are likely to be used in more than one area of one or more applications.
An object is generally tested directly via unit tests written by the developer and indirectly by
user-acceptance and black-box testers.
Services, however, require a different testing approach because they encompass entire business
processes, can call dozens of objects, and are unlikely to have been developed or tested by anyone
you will ever meet or speak to. As testers, we have little choice but to validate the service as a
black box, probably through some kind of test harness, focusing on input values, output values and data
format. Sounds a lot like a unit-test, doesn't it?
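Such a harness might look like the sketch below. Everything here is an assumption for illustration: `call_service` stands in for the real remote call (HTTP, SOAP, whatever the service exposes), and the approval rule inside it is invented. The point is the shape of the harness, which checks only input values, output values and data format, exactly like a unit test of a box we cannot open.

```python
# Minimal black-box harness sketch: drive the service with known inputs and
# check output values and data format, never the internals.

def call_service(request: dict) -> dict:
    # Stand-in for the remote credit-card-processing service we never see.
    amount = request["amount"]
    approved = 0 < amount <= 5000
    return {"approved": approved, "amount": amount, "currency": "USD"}

def check_response_format(response: dict) -> bool:
    """Validate the data format: required fields present, expected types."""
    return (isinstance(response.get("approved"), bool)
            and isinstance(response.get("amount"), (int, float))
            and isinstance(response.get("currency"), str))

def run_harness(cases):
    """Run (request, expected_approval) pairs; return the list of failures."""
    failures = []
    for request, expected_approved in cases:
        response = call_service(request)
        if not check_response_format(response):
            failures.append((request, "bad format"))
        elif response["approved"] != expected_approved:
            failures.append((request, "wrong result"))
    return failures
```

Swapping `call_service` for a real network call turns the same harness into a functional smoke test of the live service.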
A New Approach
The next challenge we testers face is that with SOA, we can no longer get away with thinking about
applications exclusively from just unit and black-box perspectives. We absolutely must think about
SOA applications in (at least) three logical segments: the services themselves, the user interface, and a
communication or SOA interface segment (sometimes referred to as a “service broker”). Sounds easy
enough, but here's the kicker: we need to test each of these segments both independently and
collectively and we need to test each of these segments at both the unit-level as well as a black-box.
This means more testers pairing with developers, more testers writing test harnesses, and more
automation via API versus UI.
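To make the three segments concrete, here is a toy sketch (all names invented for illustration): two services, and a "service broker" segment that routes requests to them. Each segment can be exercised independently (call a service directly, call the broker) and collectively (drive the whole stack from the UI), at both the unit level and as a black box.

```python
# Toy three-segment layout: services + a communication/broker segment.
# The UI segment (not shown) would talk only to the broker.

def price_service(request: dict) -> dict:
    # Service segment: one business function, defined by input/output.
    return {"price": request["qty"] * 9.99}

def tax_service(request: dict) -> dict:
    # Another independently testable service.
    return {"tax": round(request["amount"] * 0.07, 2)}

class ServiceBroker:
    """Communication segment: knows which service answers which request,
    so callers never bind to a service directly."""
    def __init__(self):
        self.routes = {"price": price_service, "tax": tax_service}

    def dispatch(self, name: str, request: dict) -> dict:
        return self.routes[name](request)
```

Testing `price_service` alone is a unit-level service test; testing `ServiceBroker.dispatch` with the real services plugged in is the collective, API-level test this section argues we now need alongside UI automation.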
The testing challenges that SOA presents that I am most excited about (yes, I am aware that makes me a
geek) are the challenges related to performance testing. We as an industry already have enough trouble
finding the time and/or the money to performance test the way we'd like, even when we are lucky
enough to be part of an organization that thinks about performance testing at all. Now we're
intentionally building applications so we can plug in code that we will likely never see that was
probably written and is certainly hosted elsewhere on some machine we are unlikely to have access to,
that takes our data in and magically spits out the “answer” (we'll assume it's even the “correct answer”).
How, exactly, are we to trust that this magic service is going to handle our holiday peak? Even more
frightening, how are we going to trust that a whole bunch of these magic services are going to all
perform well together like the “well-oiled machine” we'd have built (or would like to believe we'd
build) on our own?
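One small step toward trusting a "magic" service is simply measuring it. The sketch below is illustrative only (`slow_service` is a stub standing in for the remote service, and the statistics reported are a bare minimum): it times repeated calls and summarizes the latencies, the kind of bottom-up evidence we need before believing a third-party service will survive our holiday peak.

```python
# Minimal latency-measurement sketch for an opaque service.

import time
import statistics

def slow_service(request: dict) -> dict:
    # Stand-in for the remote service; a real probe would call over the network.
    time.sleep(0.001)
    return {"answer": request["value"] * 2}

def measure(service, requests):
    """Call the service once per request; return simple latency statistics."""
    latencies = []
    for request in requests:
        start = time.perf_counter()
        service(request)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "count": len(latencies),
        "mean": statistics.mean(latencies),
        "p95": latencies[int(0.95 * (len(latencies) - 1))],
    }
```

Running the same probe concurrently from several threads or machines, against each service and then against the composed application, starts to answer the "well-oiled machine" question with numbers instead of faith.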
Why am I excited about this you ask? No, not because I think these challenges are going to make me
rich (though, that would be nice). I'm excited because I think that SOA is going to force the industry to
bridge the gap between top-down (black-box/user-experience) performance testing and bottom-up
(unit/component/object-level) performance testing that has needed to be bridged for as long as I've
been involved with performance testing.
Performance Testing as it was Meant To Be
Rather than having to figure out the logical segments for decomposition and recomposition, they have
already been defined for us. Rather than having to build test harnesses exclusively for performance
testing that no one else has thought of, we can piggy-back on the test harnesses used by the functional
and unit testers. Rather than starting from a belief that “We wrote it, therefore it will perform well," we
will be starting from a position of “Someone else wrote it and we need to validate their performance
claims and make sure that it actually works that well with our UI/data/configuration.”