Software QA FYI - SQAFYI

Software QA/Testing Technical FAQs


Should I take a course in manual testing?
Yes, you should consider taking a course in manual testing. Why? Because learning how to perform manual testing is an important part of a tester's education. Unless you have a significant personal reason not to take such a course, you do not want to skip an important part of an academic program.


To learn to use WinRunner, should I sign up for a course at a nearby educational institution?
Free or inexpensive education is often provided on the job by an employer, while you are being paid to do work that requires the use of WinRunner and many other software testing tools.
In lieu of a job, it is often a good idea to sign up for courses at a nearby educational institution. Classes, especially non-degree courses at community colleges, tend to be inexpensive.


Test Specifications
The test case specifications should be developed from the test plan and are the second phase of the test development life cycle. The test specification should explain "how" to implement the test cases described in the test plan.
Test Specification Items
Each test specification should contain the following items:
Case No.: The test case number should be a three-part identifier of the form c.s.t, where: c is the chapter number, s is the section number, and t is the test case number.
Title: is the title of the test.
ProgName: is the program name containing the test.
Author: is the person who wrote the test specification.
Date: is the date of the last revision to the test case.
Background: (Objectives, Assumptions, References, Success Criteria): Describes in words how to conduct the test.
Expected Error(s): Describes any errors expected.
Reference(s): Lists the reference documentation used to design the specification.
Data: (Tx Data, Predicted Rx Data): Describes the data flows between the Implementation Under Test (IUT) and the test engine.
Script: (Pseudo Code for Coding Tests): Pseudo code (or real code) used to conduct the test.
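The items above can be sketched as a simple record type. A minimal Python illustration follows; the class and field names are my own, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class TestSpecification:
    """One test case specification, mirroring the items listed above."""
    case_no: str          # "c.s.t" -- chapter.section.test
    title: str
    prog_name: str
    author: str
    date: str             # date of the last revision
    background: str       # objectives, assumptions, success criteria
    expected_errors: str
    references: str
    data: str             # Tx data / predicted Rx data flows
    script: str           # pseudo code (or real code) for the test

    def case_parts(self):
        """Split the c.s.t case number into (chapter, section, test)."""
        chapter, section, test = (int(p) for p in self.case_no.split("."))
        return chapter, section, test
```

Storing the case number as a single dotted string keeps the specification close to the written form while still allowing the parts to be recovered for sorting or reporting.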
Example Test Specification
Test Specification
Case No. 7.6.3 Title: Invalid Sequence Number (TC)
ProgName: UTEP221 Author: B.C.G. Date: 07/06/2000
Background: (Objectives, Assumptions, References, Success Criteria)

Validate that the IUT will reject a normal-flow PIU with a transmission header that has an invalid sequence number.
Expected Sense Code: $2001, Sequence Number Error
Reference - SNA Format and Protocols Appendix G/p. 380
Data: (Tx Data, Predicted Rx Data)
IUT
<-------- DATA FIS, OIC, DR1 SNF=20
<-------- DATA LIS, SNF=20
--------> -RSP $2001

Script: (Pseudo Code for Coding Tests)
SEND_PIU FIS, OIC, DR1 SNF=20
SEND_PIU LIS, SNF=20
R_RSP $2001
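As a purely hypothetical illustration of how such a pseudo-code script might be driven, here is a toy Python sketch. StubEngine, send_piu, and receive_rsp are invented stand-ins, not part of any real SNA test tool; the stub simply fakes the IUT's negative response:

```python
class StubEngine:
    """Toy stand-in for a test engine: records PIUs sent toward the IUT
    and fakes the IUT's negative response to the invalid sequence number."""
    def __init__(self):
        self.sent = []

    def send_piu(self, *flags, snf):
        # Record what would be transmitted to the Implementation Under Test.
        self.sent.append((flags, snf))

    def receive_rsp(self):
        # A real IUT would reject the repeated SNF=20; we fake that here.
        return "-RSP $2001"

def run_case_763(engine):
    """Mirror of the pseudo-code script for case 7.6.3."""
    engine.send_piu("FIS", "OIC", "DR1", snf=20)   # SEND_PIU FIS, OIC, DR1 SNF=20
    engine.send_piu("LIS", snf=20)                 # SEND_PIU LIS, SNF=20
    rsp = engine.receive_rsp()                     # R_RSP $2001
    assert rsp == "-RSP $2001", f"unexpected response: {rsp}"
    return rsp
```

The point of the sketch is only the shape of the translation: each pseudo-code line becomes one call against the engine, and the expected sense code becomes an assertion.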


Formal Technical Review
Formal technical reviews include walkthroughs, inspections, round-robin reviews, and other small-group technical assessments of software. A formal technical review is a planned and controlled meeting attended by the analysts, programmers, and other people involved in the software development. Its objectives and benefits are to:

  • Uncover errors in logic, function, or implementation in any representation of the software
  • Verify that the software under review meets the requirements
  • Ensure that the software has been represented according to predefined standards
  • Achieve software that is developed in a uniform manner
  • Make the project more manageable
  • Discover software defects early, so that errors are substantially reduced in the development and maintenance phases
  • Serve as a training ground, enabling junior members to observe different approaches in the software development phases (giving them a helicopter view of what others are doing when developing the software)
  • Allow for continuity and backup of the project, because a number of people become familiar with parts of the software that they might not otherwise have seen
  • Promote greater cohesion between different developers

Reluctance to implement Software Quality Assurance
Managers are reluctant to incur the extra upfront cost.
Such upfront costs are not budgeted in software development, so management may be unprepared to fork out the money.
Avoiding red tape (bureaucracy)
Red tape means the extra administrative activities that need to be performed; SQA involves a lot of paperwork. New procedures to determine that software quality is correctly implemented need to be developed, followed through, and verified by external auditing bodies. These requirements involve a lot of administrative paperwork.


Benefits of Software Quality Assurance to the organization
Higher reliability will result in greater customer satisfaction: since software development is essentially a business transaction between a customer and a developer, customers will naturally tend to patronize the services of the developer again if they are satisfied with the product.

Overall life cycle cost of software is reduced.
Software quality assurance is performed to ensure that software conforms to specified requirements and standards. The maintenance cost of the software is gradually reduced because the software requires less modification after SQA. Maintenance refers to the correction and modification of errors that may be discovered only after implementation of the program. Proper SQA procedures identify more errors before the software is released, resulting in an overall reduction of the life cycle cost.


Constraints of Software Quality Assurance
SQA is difficult to institute in small organizations, where the resources needed to perform the necessary activities are not present. A smaller organization tends not to have the required resources, such as manpower and capital, to assist in the SQA process.

Cost not budgeted
In addition, SQA requires the expenditure of dollars that are not otherwise explicitly budgeted to software engineering and software quality. The implementation of SQA involves immediate upfront costs, and the benefits of SQA tend to be more long-term than short-term. Hence, some organizations may be less willing to include the cost of implementing SQA into their budget.

SOFTWARE TESTING METRICS
In general, testers must rely on metrics collected during the analysis, design, and coding stages of development in order to design, develop, and conduct the necessary tests. These generally serve as indicators of the overall testing effort needed. High-level design metrics can also help predict the complexity associated with integration testing and the need for specialized testing software (e.g. stubs and drivers). Cyclomatic complexity may identify modules that will require extensive testing, as those with high cyclomatic complexity are more likely to be error prone.
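As a quick illustration of the cyclomatic complexity mentioned above, V(G) = E - N + 2P can be computed directly from a control-flow graph. The graph below is a made-up example of a function with one if/else branch and one loop:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity: V(G) = E - N + 2P."""
    return len(edges) - len(nodes) + 2 * components

# Hypothetical control-flow graph: one if/else branch and one loop.
nodes = ["entry", "if", "then", "else", "loop", "exit"]
edges = [("entry", "if"), ("if", "then"), ("if", "else"),
         ("then", "loop"), ("else", "loop"),
         ("loop", "loop"),   # back edge of the loop
         ("loop", "exit")]

# V(G) = 7 edges - 6 nodes + 2 = 3, so at least 3 linearly
# independent paths need to be exercised by tests.
```

V(G) gives the number of linearly independent paths through a module, which is why a high value signals a module that will need more test cases.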
Metrics collected from testing, on the other hand, usually comprise the number and type of errors, failures, bugs, and defects found. These can then serve as measures used to calculate the further testing effort required. They can also be used as a management tool to determine the extent of the project's success or failure and the correctness of the design. In any case, these should be collected, examined, and stored for future needs.
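A minimal sketch of collecting such testing metrics, assuming a hypothetical defect log of (identifier, type, phase-found) tuples:

```python
from collections import Counter

# Hypothetical defect log entries: (identifier, defect type, phase found).
defects = [
    ("D-101", "logic",     "unit test"),
    ("D-102", "interface", "integration test"),
    ("D-103", "logic",     "unit test"),
    ("D-104", "data",      "system test"),
]

# Tally defects by type and by the phase in which they were found.
by_type = Counter(d_type for _, d_type, _ in defects)
by_phase = Counter(phase for _, _, phase in defects)
```

Simple tallies like these are enough to feed the management questions the text mentions: which defect types dominate, and how many defects escape each testing phase.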


OBJECT ORIENTED TESTING METRICS
Testing metrics can be grouped into two categories: encapsulation and inheritance.
Encapsulation
Lack of cohesion in methods (LCOM) - The higher the value of LCOM, the more states have to be tested.
Percent public and protected (PAP) - This number indicates the percentage of class attributes that are public or protected, and thus the likelihood of side effects among classes.
Public access to data members (PAD) - This metric shows the number of classes that access other classes' attributes, and thus violations of encapsulation.
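As an illustration of the LCOM metric above, here is a sketch of the classic Chidamber-Kemerer formulation; the method names and attribute sets below are invented for the example:

```python
from itertools import combinations

def lcom(method_attrs):
    """Chidamber-Kemerer LCOM: count method pairs that share no instance
    attribute (P) minus pairs that share at least one (Q), floored at 0."""
    p = q = 0
    for a, b in combinations(method_attrs.values(), 2):
        if set(a) & set(b):
            q += 1   # pair is cohesive: the two methods touch a common attribute
        else:
            p += 1   # pair is non-cohesive: no shared attribute
    return max(p - q, 0)

# Hypothetical class: three methods and the attributes each one touches.
usage = {
    "read":  {"buffer"},
    "write": {"buffer", "pos"},
    "reset": {"pos"},
}
# Pairs: read/write share "buffer" (Q), write/reset share "pos" (Q),
# read/reset share nothing (P), so LCOM = max(1 - 2, 0) = 0.
```

A higher LCOM means more method pairs operate on disjoint state, i.e. more distinct states that tests must cover, which is the point the metric list above is making.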
Inheritance
Number of root classes (NOR) - A count of distinct class hierarchies.
Fan in (FIN) - FIN > 1 is an indication of multiple inheritance and should be avoided.
Number of children (NOC) and depth of the inheritance tree (DIT) - For each subclass, its superclass has to be re-tested.
The above metrics (and others) are different from those used in traditional software testing; however, the metrics collected from testing should be the same (i.e. number and type of errors, performance metrics, etc.).
