
Software QA/Testing Technical FAQs


Can you give me an example of reliability testing?
For example, suppose our products are defibrillators. From direct contact with customers during the requirements gathering phase, our sales team learns that a large hospital wants to purchase defibrillators with the assurance that 99 out of every 100 shocks will be delivered properly.
To these customers, the fact that our defibrillator can run for 250 hours without any failure is irrelevant. To test for reliability, we need to translate terminology that is meaningful to the customers into equivalent delivery units, such as the number of shocks. We therefore describe the customer's needs in a quantifiable manner, using the customer's own terminology. For example, our quantified reliability testing goal becomes: our defibrillator will be considered sufficiently reliable if 10 or fewer failures occur per 1,000 shocks.
Then, for example, we use a test/analyze/fix technique, coupling reliability testing with the removal of errors. When we identify a failed shock delivery, we send the software back to the developers for repair. The developers build a new version of the software, and we then deliver another 1,000 shocks (into dummy resistor loads). We track failure intensity (i.e., failures per 1,000 shocks) to guide our reliability testing, to determine the feasibility of the software release, and to determine whether the software meets our customers' reliability requirements.
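As a minimal sketch of that release criterion (hypothetical names; the original answer gives no code), the decision reduces to comparing the observed failure intensity against the goal of 10 or fewer failures per 1,000 shocks:

// Hypothetical 4Test sketch: does the observed failure intensity
// meet the quantified reliability goal?
BOOLEAN MeetsReliabilityGoal (INTEGER iFailures, INTEGER iShocks)
    REAL rFailuresPer1000 = (iFailures * 1000.0) / iShocks
    return rFailuresPer1000 <= 10.0

For example, 8 failures in 1,000 shocks meets the goal, while 12 failures in 1,000 shocks means another test/analyze/fix cycle is needed.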


Need a function to find all the field positions?
Example: given the string "abcd, efgh,ight", I want to break this string into fields wherever the delimiter (here, a comma) is found.


Answer1:
And return the delimited fields as a list of strings? That sounds like Perl's split function. You could build one of your own along these lines:
// Knocked this together in a few minutes. I am sure there is a much more
// efficient way of doing things, but this cobbles together several
// built-in functions.
LIST OF STRING Split (STRING sDelim, STRING sData)
    LIST OF STRING lsReturn
    STRING sSegment
    while MatchStr ("*{sDelim}*", sData)
        sSegment = GetField (sData, sDelim, 1)
        ListAppend (lsReturn, Trim (sSegment))
        // crude chunking: re-attach the delimiter, then take everything
        // after the first occurrence of "segment + delimiter"
        sSegment += sDelim
        sData = GetField (sData, sSegment, 2)
    if Len (sData) > 0
        ListAppend (lsReturn, Trim (sData))
    return lsReturn
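A quick usage sketch (assuming the Split function above; ListPrint is the built-in 4Test function that prints each list item on its own line):

testcase TestSplit ()
    LIST OF STRING lsFields = Split (",", "abcd, efgh,ight")
    ListPrint (lsFields)
    // prints:
    // abcd
    // efgh
    // ight

Because Split applies Trim to each segment, the leading space before "efgh" is removed.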


Answer2:
You could use something like this... I hope I am understanding the problem correctly.
testcase T1 ()
    STRING sTest = "hello, there I am happy"
    STRING sTest1 = GetField (sTest, ",", 2)
    Print (sTest1)

This prints "there I am happy".
GetField (sTest, ",", 1) would print "hello", and so on.

Answer3:
Below is a function that returns all the fields as a LIST OF STRING.
LIST OF STRING ConvertToList (STRING sStr, STRING sDelim)
    INTEGER iIndex = 1
    LIST OF STRING lsStr
    STRING sToken = GetField (sStr, sDelim, iIndex)

    // If the string starts with the delimiter, the first field is
    // empty; skip it so the while loop below does not stop early.
    if (iIndex == 1 && sToken == "")
        iIndex = iIndex + 1
        sToken = GetField (sStr, sDelim, iIndex)

    while (sToken != "")
        ListAppend (lsStr, sToken)
        iIndex = iIndex + 1
        sToken = GetField (sStr, sDelim, iIndex)
    return lsStr
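A quick usage sketch (again assuming the function above is in scope):

testcase TestConvertToList ()
    LIST OF STRING lsFields = ConvertToList ("abcd,efgh,ight", ",")
    ListPrint (lsFields)
    // prints:
    // abcd
    // efgh
    // ight

Note that, unlike the Split function in Answer1, ConvertToList does no trimming of its own, so whitespace around a field is kept exactly as GetField returns it.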


What is the difference between monkey testing and smoke testing?
Difference number 1: Monkey testing is random testing, while smoke testing is nonrandom: it deliberately exercises the entire system from end to end, with the goal of exposing any major problems.
Difference number 2: Monkey testing is performed by automated testing tools, while smoke testing is usually performed manually.
Difference number 3: Monkey testing is performed by "monkeys", while smoke testing is performed by skilled testers.
Difference number 4: "Smart monkeys" are valuable for load and stress testing, but they are too expensive to be worthwhile for smoke testing.
Difference number 5: "Dumb monkeys" are inexpensive to develop and can do some basic testing, but if used for smoke testing they would find few bugs.
Difference number 6: Monkey testing is not thorough, while smoke testing is thorough enough that, if the build passes, one can assume the program is stable enough to be tested more thoroughly.
Difference number 7: Monkey testing either does not evolve, or evolves very slowly. Smoke testing, on the other hand, evolves as the system evolves from something simple to something more thorough.
Difference number 8: Monkey testing takes "six monkeys" and a "million years" to run. Smoke testing, on the other hand, takes much less time to run, i.e. from a few seconds to a couple of hours.


Is it a good thing to share test cases with customers?

That's generally a good thing, but the question is: why do they want to see them?
One potential problem is that they may be considering a change of outsourcing firms and want to reuse your test cases elsewhere. If that can be prevented, do so.
Another problem is that they may want to micromanage your testing efforts. It's one thing for them to audit your work to prove to themselves that you're doing a good job; it's an entirely different matter if they intend to tell you that you have too little test coverage on module foo and far too much on module bar, and to please correct it.
Another issue may be that they are seeking litigation and need proof that you were negligent in some area of testing.
It's never a bad thing to have your customer wanting to be involved, unless you're a large company and this is a small (in terms of sales) customer.
What are your concerns about this? Can you give more information on your situation and the customer's?
