The Need for Functional Security Testing

By: C. Warren Axelrod

Abstract. Despite extensive testing of application functionality and security, we see many instances of software, when attacked or during normal operation, performing adversely in ways that were not anticipated. In large part, this is due to software assurance staff not testing fully for “negative functionality,” that is, ensuring that applications do not do what they are not supposed to. There are many reasons for this, including the relative enormity of the task, the pressure to implement quickly, and the lack of qualified testers. In this article, we will examine these issues and suggest ways in which we can achieve some measure of assurance that applications will not behave inappropriately under a broad range of conditions.

Introduction
Traditionally, software testing has focused on making sure systems satisfy requirements. Such functional requirements and specifications are expected to depict accurately the functionality that prospective users actually want, but they do not always do so, particularly for aspects users may not be aware of or may never have been asked to consider.
In this article we examine the issues and challenges related to ensuring applications do not do what they are not supposed to do. Such testing, for which we use the term Functional Security Testing (FST), is often complex, extensive, and open-ended. And yet it is key to the secure and resilient operation of applications that they not misbehave.

The Evolution Of Testing
Programmers test applications that they are developing to ensure the applications run through to completion without generating errors. Programmers then usually engage in some rudimentary tests for correctness, such as ensuring that calculations correctly handle the types of data the programs process. In general, programmers seldom think “out of the box.” This attribute was, to a large extent, the root cause of the Y2K “bug,” where programmers frequently did not anticipate that their programs would still be running after Dec. 31, 1999 and so did not include century indicators, opting instead for two-digit year representations. While it is true that programmers were motivated to abbreviate the year field by strict limits on the amount of data that could be stored and transmitted, they failed to anticipate the future date at which their programs would fail.
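To make the two-digit year defect concrete, here is a minimal sketch (the function names are illustrative, not from the article) showing how a date comparison that worked for decades inverts once the century rolls over:

```python
# Minimal illustration of the Y2K defect: storing years as two digits
# makes date comparisons fail once the century boundary is crossed.

def is_expired_two_digit(expiry_yy: int, current_yy: int) -> bool:
    """Buggy: compares two-digit years directly, as many pre-2000 programs did."""
    return expiry_yy < current_yy

def is_expired_four_digit(expiry_year: int, current_year: int) -> bool:
    """Correct: four-digit years compare as intended across the century boundary."""
    return expiry_year < current_year

# A card expiring in 2001, checked in 1999:
print(is_expired_two_digit(1, 99))        # True  -- wrongly flagged as expired
print(is_expired_four_digit(2001, 1999))  # False -- correct
```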

During this early period, programs were thrown “over the transom” to the Quality Assurance or Software Assurance departments, where test engineers would attempt to match the functioning of the programs against the functional specifications developed by systems analysts. In general, such testers would limit their scope to ensuring the programs did what was intended, and not consider anomalous program behavior.

Over the past decade, there has been greater focus on what might be called technical security testing, where security includes confidentiality, integrity, and availability. The usual approach is to assess the adherence of systems (including applications, system software and hardware, networks, etc.) to secure architecture, design, coding (i.e., programming), and operational standards. Often such testing includes checking for common vulnerabilities and programming errors, such as those specified by the Open Web Application Security Project (OWASP) and SysAdmin, Audit, Network, Security (SANS), respectively.
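As one concrete illustration of the kind of common programming error such checklists target, the sketch below shows an injection flaw of the sort that appears on the OWASP list, alongside its conventional fix. The code is a hypothetical illustration, not an example from the article:

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Vulnerable: string concatenation lets crafted input rewrite the query,
    # the classic injection flaw on the OWASP list.
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query treats the input strictly as data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# The vulnerable version leaks every row when fed a crafted value:
print(find_user_vulnerable(conn, "x' OR '1'='1"))  # both users returned
print(find_user_safe(conn, "x' OR '1'='1"))        # []
```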

However, there are aspects of security testing that are different. For example, McGraw [1] refers to “anti-requirements,” which “provide insight into how a malicious user, attacker … can abuse [a] system.” McGraw differentiates anti-requirements from “security requirements” in that the security requirements “result in functionality that is built into a system to establish accepted behavior, [whereas] anti-requirements are established to determine what happens when this functionality goes away.” McGraw goes on to say “anti-requirements are often tied up in the lack of or failure of a security function.” Note that McGraw is referring to the adequacy or resiliency of security functions and not functions within applications. Merkow and Lakshmikanth [2] refer to security-related and resiliency-related testing as “nonfunctional requirements (NFR) testing.” NFR testing, which is used to determine the quality, security, and resiliency aspects of software, is based on the belief that nonfunctional requirements represent not what software is meant to do but how the software might do it. Merkow and Lakshmikanth [2] also state that “gaining confidence that a system does not do what it’s not supposed to do …” requires subjecting “… a system to brutal resilience testing.”
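A hedged sketch of what an anti-requirement in McGraw’s sense might look like as an executable test follows. The application, its `authorize` function, and the directory service are all hypothetical; the pattern, asserting that access is denied when the security function itself goes away, is the point:

```python
# A minimal sketch of an anti-requirement test: verify behavior when a
# security function fails. All names here are invented for illustration.

def authorize(user, directory_service):
    """Grant access only on an explicit 'yes' from the directory service."""
    try:
        return directory_service.is_authorized(user)
    except ConnectionError:
        # Anti-requirement: if the check cannot run, fail closed.
        return False

class DownedDirectory:
    def is_authorized(self, user):
        raise ConnectionError("directory service unreachable")

def test_fails_closed_when_security_function_is_unavailable():
    # Loss of the authorization function must deny access,
    # not silently grant it.
    assert authorize("alice", DownedDirectory()) is False

test_fails_closed_when_security_function_is_unavailable()
print("fail-closed anti-requirement holds")
```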

In his book [3], Anderson affirms the importance of resilience testing with his comment that “Failure recovery is often the most important aspect of security engineering, yet it is one of the most neglected.” He goes on to note that “… secure distributed systems tend to have been discussed as a side issue by experts on communications protocols and operating systems …” The author believes FST is another key area of testing that has received little attention from the application development and information security communities and is not specifically mentioned in [1], [2], or other publications. Using FST, applications are tested to ensure they do not allow harmful functional responses, which might have been initiated by legitimate or fraudulent users, to take place. It should be noted here that testing for responses that do not derive specifically from functions within applications, such as when a computer process corrupts data, is not included in the author’s definition of FST (see Introduction section).
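As a concrete, if hypothetical, illustration of the kind of harmful functional response FST is meant to catch: a funds-transfer routine that accepts a negative amount lets a legitimate user move money in the wrong direction. The sketch below (all names invented for illustration) pairs the guarded routine with the negative-functionality test FST would demand:

```python
# A minimal sketch of a negative-functionality check in the spirit of FST.
# The account model and transfer function are hypothetical.

class Account:
    def __init__(self, balance: float):
        self.balance = balance

def transfer(src: Account, dst: Account, amount: float) -> None:
    if amount <= 0:
        raise ValueError("transfer amount must be positive")
    if amount > src.balance:
        raise ValueError("insufficient funds")
    src.balance -= amount
    dst.balance += amount

def test_negative_amount_cannot_reverse_a_transfer():
    # Without the amount <= 0 guard, amount=-100 "works" and silently
    # drains the payee -- exactly the misbehavior FST probes for.
    src, dst = Account(500), Account(500)
    try:
        transfer(src, dst, -100)
    except ValueError:
        pass
    assert src.balance == 500 and dst.balance == 500

test_negative_amount_cannot_reverse_a_transfer()
print("negative-functionality check passed")
```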

It is important to differentiate among functional testing of applications, which attempts to ensure that the functionality of applications matches requirements; security testing, which addresses those aspects of systems that relate not to application functionality but to the confidentiality, integrity, and availability of applications and their infrastructure; and FST, which is designed to ferret out the malfunctioning of applications that might lead to security compromises.

In this article, we examine FST and how it relates to other forms of testing, look at why it might have received so little attention to date, and suggest what is needed to make it a more effective software assurance tool.

Categories Of Testing
Various types of testing are key to successful software development and operation, as discussed in [1], [2], and [3]. As described previously, software testers (or test engineers) most commonly check that computer programs operate in accordance with the design of an application and the consequent functional specifications, which in turn are meant to reflect users’ functional requirements. This form of testing is termed “functional requirements testing.” When applications are tested for functionality in isolation, rather than in an operational context, the activity is called “unit testing.” However, testers also need to ensure applications work correctly with the other applications with which they must interact. This form of functional requirements testing is known as “integration testing.” Testers must further check that individual programs continue to function appropriately in the various contexts to which they might be subjected. This further form of functional requirements testing is known as “regression testing,” which is done to ensure that changes in the functionality of an application do not have a negative impact on other components, subsystems, and systems.
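The distinctions among these categories can be made concrete with a toy example. The discount and tax functions below are hypothetical, invented purely to illustrate where unit, integration, and regression tests differ in scope:

```python
# Toy functions to illustrate the testing categories; names are invented.

def apply_discount(price: float, rate: float) -> float:
    return round(price * (1 - rate), 2)

def price_with_tax(price: float, rate: float, tax: float) -> float:
    return round(apply_discount(price, rate) * (1 + tax), 2)

def test_unit():
    # Unit test: one function in isolation, against its specification.
    assert apply_discount(100.0, 0.25) == 75.0

def test_integration():
    # Integration test: the discount logic working with the tax calculation.
    assert price_with_tax(100.0, 0.25, 0.08) == 81.0

def test_regression():
    # Regression test: re-run after any change to apply_discount to confirm
    # previously correct behavior elsewhere has not been broken.
    assert price_with_tax(100.0, 0.0, 0.08) == 108.0

for t in (test_unit, test_integration, test_regression):
    t()
print("all category examples pass")
```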


