Software QA FYI - SQAFYI

Software QA/Testing Technical FAQs


Looking for a tool which can do bulk data inserts into various tables in a test database, and which works with DB2, SQL Server, and Oracle.


Answer1:
First, copy the existing data to an Excel file using the DTS import/export wizard in SQL Server 2000: export the contents of the table to an Excel file. In Excel, adjust the values constrained by integrity rules; for example, if the table has a primary key column, change its values using Excel's linear fill option, then save the file.
Now import the data from this Excel sheet back into the table.

Answer2:
Use Perl and its DBI module. You will also need the DBD modules for the specific databases you want to test with. In theory, you can reuse the same scripts and just change the DBD connections, or even create handles to all three RDBMSs simultaneously. Ruby and Python have similar facilities.
You just need access to the data files somewhere; then you can read the data and insert it into the database using the appropriate INSERT statements.
There are other tools, but since they cost money to purchase I have never bothered to investigate them.
Scripting is the most powerful (and cheapest) way to do it. A preferred method is Python and its ODBC module; that way you can use the same code and just change the data source for whichever database you are connecting to. The script could also generate random data if you have no source data to begin with.
You need the proper ODBC client drivers installed on the machine you run the script from. There is also a PyPerl distribution that lets you use the Perl DBI module from Python. It really comes down to which language you are comfortable scripting in.
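As a hedged sketch of this scripting approach, the example below uses Python's built-in sqlite3 module as a stand-in database so it runs anywhere; the table name, schema, and row generator are hypothetical. The same DB-API executemany pattern applies to a pyodbc connection against DB2, SQL Server, or Oracle (essentially only the connect line and parameter-marker style would change).

```python
import random
import sqlite3
import string

def random_rows(n):
    """Generate n rows of hypothetical test data: (id, name, amount)."""
    for i in range(1, n + 1):
        name = "".join(random.choices(string.ascii_lowercase, k=8))
        yield (i, name, round(random.uniform(1, 1000), 2))

def bulk_insert(conn, rows):
    """Insert all rows in one batch; '?' parameter markers keep the SQL portable."""
    conn.executemany(
        "INSERT INTO test_data (id, name, amount) VALUES (?, ?, ?)", rows
    )
    conn.commit()

# In-memory SQLite here; swap for e.g. pyodbc.connect(dsn) per target RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_data (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")
bulk_insert(conn, random_rows(1000))
print(conn.execute("SELECT COUNT(*) FROM test_data").fetchone()[0])  # prints 1000
```

Because the data generation and the insert are separate functions, the same script can be pointed at each database in turn, which is the reuse the answer describes.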


What makes a good QA/Test Manager?
A good QA/Test Manager:
* is familiar with the software development process;
* can maintain the enthusiasm of their team and promote a positive atmosphere;
* promotes teamwork to increase productivity;
* promotes cooperation between Software and Test/QA Engineers;
* has the people skills needed to promote improvements in QA processes;
* can withstand pressure and say *no* to other managers when quality is insufficient or QA processes are not being adhered to;
* can communicate with technical and non-technical people;
* can run meetings and keep them focused.


Need to shut down network connectivity mid-transaction. How can this be done programmatically via the Windows interface?

From the command line, IPCONFIG /RELEASE should do it. Or do it the old-fashioned way and remove the network cable from your machine. If you are using a wireless connection, ipconfig is the better option.
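A hedged sketch of driving this from a script on Windows: the helper below just builds the ipconfig invocation and runs it via the standard subprocess module. The dry_run flag is a hypothetical convenience so the command can be inspected (or tested) without actually dropping connectivity.

```python
import subprocess

def ipconfig_command(release=True):
    """Build the Windows ipconfig invocation to drop (or restore) connectivity."""
    return ["ipconfig", "/release" if release else "/renew"]

def toggle_network(release=True, dry_run=False):
    """Run ipconfig /release (or /renew); with dry_run, just return the command."""
    cmd = ipconfig_command(release)
    if dry_run:
        return cmd
    # Raises CalledProcessError if ipconfig exits non-zero.
    return subprocess.run(cmd, check=True)

print(toggle_network(dry_run=True))  # prints ['ipconfig', '/release']
```

Calling toggle_network() mid-transaction, then toggle_network(release=False) afterwards, lets a test script simulate the connectivity loss and recovery in a repeatable way.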


What can be done if requirements are changing continuously?
Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to...
* Ensure the code is well commented and well documented; this makes changes easier for the developers.
* Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
* In the project's initial schedule, allow extra time commensurate with probable changes.
* Move new requirements to a 'Phase 2' version of an application and use the original requirements for the 'Phase 1' version.
* Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements into future versions of the application.
* Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that's their job.
* Balance the effort put into setting up automated testing with the expected effort required to redo them to deal with changes.
* Design some flexibility into automated test scripts.
* Focus initial automated testing on application aspects that are most likely to remain unchanged.
* Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs.
* Design some flexibility into test cases; this is not easily done, so the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans.
* Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails.
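One hedged illustration of designing flexibility into automated tests: keep the expected behavior in a data table separate from the test logic, so a requirements change means editing data rather than rewriting code. The discount rules and names below are hypothetical.

```python
def discount(order_total):
    """Hypothetical function under test: tiered discount rules."""
    if order_total >= 100:
        return 0.10
    if order_total >= 50:
        return 0.05
    return 0.0

# The requirements live in data; when they change, only this table is edited.
CASES = [
    (20, 0.0),    # below any tier
    (50, 0.05),   # boundary of the 5% tier
    (150, 0.10),  # within the 10% tier
]

def run_cases(cases):
    """Data-driven check: returns (input, expected, actual, passed) per case."""
    return [(t, e, discount(t), discount(t) == e) for t, e in cases]

for total, expected, actual, ok in run_cases(CASES):
    print(f"total={total}: expected {expected}, got {actual}, "
          f"{'PASS' if ok else 'FAIL'}")
```

The same pattern scales to reading the case table from a spreadsheet or CSV, which keeps the test script itself stable even while requirements churn.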


What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine whether an application has significant unexpected or hidden functionality; if it does, that would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.
If it is not removed, design information will be needed to determine the added testing or regression-testing needs. Management should be made aware of any significant added risks resulting from the unexpected functionality. If the functionality affects only trivial areas, such as minor improvements in the user interface, it may not pose a significant risk.
