
Software Testing

Tutorial for Beginners

Collected by: Eng. Moataz Abd Elkarim



This tutorial is aimed at beginners. It bridges the gap between theoretical knowledge and real-world practice, helping you gain insight into Software Testing, its technical aspects, and the processes followed in a real working environment.

Who is this tutorial for?

- Fresh graduates who want to start a career in SQA & Testing
- Software engineers who want to switch to SQA & Testing
- Testers who are new to the Software Testing field
- Testers who want to prepare for interviews

What is Software Testing?


Definitions of Software Testing:

- It is the process of creating, implementing and evaluating tests.
- Testing measures software quality.
- Testing can find faults; when they are removed, software quality is improved.
- Testing is executing a program with the intent of finding an Error, Fault or Failure.
- IEEE terminology: an examination of the behavior of the program by executing it on sample data sets.

Why is Software Testing Important?

1. To discover defects.
2. To avoid users detecting problems.
3. To prove that the software has no faults.
4. To learn about the reliability of the software.
5. To avoid being sued by customers.
6. To ensure that the product works as the user expects.
7. To stay in business.
8. To detect defects early, which helps reduce the cost of fixing them.

Why start testing early?

Introduction: You have probably heard and read in blogs that "Testing should start early in the life cycle of development". In this chapter, we will discuss, very practically, why testing should start early.

Fact One

Let's start with the regular software development life cycle:

When the project is planned

First we've got a planning phase: needs are expressed, people are contacted, meetings are booked. Then the decision is made: we are going to do this project. After that, analysis is done, followed by code build. Now it's your turn: you can start testing. Do you think this is what is going to happen? Dream on. This is what's going to happen:

This is what actually happens when the project executes

Planning, analysis and code build will take more time than planned. That would not be a problem if the total project time were prolonged accordingly. Forget it; it is most likely that you will have to perform the tests in a few days. The deadline is not going to be moved at all: promises have been made to customers, and project managers are going to lose their bonuses if they deliver past the deadline.

Fact Two

The earlier you find a bug, the cheaper it is to fix it.

Price of Buggy Code

If you are able to find a bug during requirements determination, it is going to be 50 times cheaper (!!) than finding the same bug in testing. It will even be 100 times cheaper (!!) than finding the bug after going live.

Easy to understand: if you find the bug in the requirements definitions, all you have to do is change the text of the requirements. If you find the same bug in final testing, analysis and code build have already taken place, and much more effort has gone into building something that nobody wanted. Conclusion: start testing early! This is what you should do:

- Testing should be planned for each phase: make testing part of each phase of the software life cycle.
- Start test planning the moment the project starts.
- Start finding bugs the moment the requirements are defined.
- Keep doing that during the analysis and design phases.
- Make sure testing becomes part of the development process.
- Make sure all test preparation is done before you start final testing. If you have to start preparation then, your testing is going to be crap!

Want to know how to do this? Go to the Functional testing step by step page. (will be added later)

Test Design Techniques


- Black Box Testing
- White Box Testing (including its approaches)
- Gray Box Testing

Black Box Testing


What is Black Box Testing?

Black box testing tests the correctness of the functionality with the help of inputs and outputs; the user does not require knowledge of the software code. Black box testing is also called Functionality Testing. It attempts to find errors in the following categories:

- Incorrect or missing functions
- Interface errors
- Errors in data structures or external database access
- Behavior or performance based errors
- Initialization or termination errors

Approaches used in Black Box Testing

The following basic techniques are employed during black box testing:

- Equivalence Class
- Boundary Value Analysis
- Error Guessing

Equivalence Class:

- For each piece of the specification, generate one or more equivalence classes.
- Label the classes as Valid or Invalid.
- Generate one test case for each Invalid equivalence class.
- Generate test cases that cover as many Valid equivalence classes as possible.
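The four steps above can be sketched in code for a hypothetical month field whose valid range is 1..12 (the field and all names here are illustrative, not from the original text):

```python
# Equivalence partitioning for a hypothetical month field (valid range 1..12).

valid_classes = [range(1, 13)]                      # one valid class: 1..12
invalid_classes = [range(-10, 1), range(13, 25)]    # too low, too high

# Step 3: one test case for each invalid equivalence class
invalid_tests = [cls[0] for cls in invalid_classes]

# Step 4: one test case covering the valid equivalence class
valid_tests = [valid_classes[0][5]]

print(invalid_tests, valid_tests)  # [-10, 13] [6]
```

Any representative of a class would do; picking the first member of each class is just one convenient choice.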

An input condition for an equivalence class may be:

- A specific numeric value
- A range of values
- A set of related values
- A Boolean condition

Equivalence classes can be defined using the following guidelines:

- If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
- If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
- If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
- If an input condition is Boolean, one valid and one invalid class are defined.

Boundary Value Analysis

Generate test cases for the boundary values:

- Minimum Value, Minimum Value + 1, Minimum Value - 1
- Maximum Value, Maximum Value + 1, Maximum Value - 1

Error Guessing

Generating test cases based on the tester's intuition and experience of where the specification is likely to be violated.

White Box Testing

White box testing tests the internal program logic and is also called Structural Testing. The tester does require knowledge of the software code.

Purpose

- Testing all loops
- Testing basis paths
- Testing conditional statements
- Testing data structures
- Testing logic errors
- Testing incorrect assumptions

Structure = 1 Entry + 1 Exit, with certain constraints, conditions and loops. Logic errors and incorrect assumptions are most likely to be made while coding for special cases, so we need to ensure these execution paths are tested.

Approaches / Methods / Techniques for White Box Testing

Basis Path Testing (Cyclomatic Complexity, the McCabe method)

- Measures the logical complexity of a procedural design.
- Provides flow-graph notation to identify independent paths of processing.
- Once paths are identified, tests can be developed for loops and conditions.
- The process guarantees that every statement will be executed at least once.
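As a rough illustration of basis path testing: the toy function below has two decision points, so McCabe's metric gives V(G) = 2 + 1 = 3 independent paths, and one test per basis path executes every statement at least once (the function is invented for this example):

```python
def classify(x):
    """Toy function with two decision points, so V(G) = 2 + 1 = 3:
    three independent paths must each be exercised at least once."""
    if x < 0:            # decision 1
        label = "negative"
    elif x == 0:         # decision 2
        label = "zero"
    else:
        label = "positive"
    return label

# One test per basis path covers every statement:
assert classify(-5) == "negative"   # path taken at decision 1
assert classify(0) == "zero"        # path taken at decision 2
assert classify(7) == "positive"    # fall-through path
```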

Structure Testing:

- Condition Testing: all logical conditions contained in the program module should be tested.
- Data Flow Testing: selects test paths according to the locations of definitions and uses of variables.
- Loop Testing: simple loops, nested loops, concatenated loops, unstructured loops.
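Simple-loop testing is commonly illustrated by exercising the loop body zero times, once, twice, and many times; a minimal sketch (the function is invented for this example):

```python
def total(values):
    """Simple loop whose body we want to exercise 0, 1, 2 and many times."""
    result = 0
    for v in values:
        result += v
    return result

# Simple-loop test cases: skip the loop, one pass, two passes, many passes
assert total([]) == 0                    # zero iterations
assert total([5]) == 5                   # exactly one iteration
assert total([5, 7]) == 12               # two iterations
assert total(list(range(100))) == 4950   # a typical larger count
```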

Gray Box Testing:


It is a combination of both black box and white box testing. It is platform independent and language independent, and is used to test embedded systems. Both functionality and behavioral parts are tested. The tester should have knowledge of both the internals and externals of the function: if you know something about how the product works on the inside, you can test it better from the outside. Gray box testing is especially important for Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces. Unless you understand the architecture of the Net, your testing will be skin deep.

Equivalence Class Partitioning Simplified


WHAT IS EQUIVALENCE PARTITIONING?

Concepts: Equivalence partitioning is a method for deriving test cases. In this method, classes of input conditions called equivalence classes are identified such that each member of a class causes the same kind of processing and output to occur. The tester identifies the various equivalence classes for partitioning. A class is a set of input conditions that are likely to be handled the same way by the system. If the system were to handle one case in the class erroneously, it would handle all cases erroneously.
WHY LEARN EQUIVALENCE PARTITIONING?

Equivalence partitioning significantly reduces the number of test cases required to test a system reasonably. It is an attempt to get a good 'hit rate', to find the most errors with the smallest number of test cases.
DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING

To use equivalence partitioning, you will need to perform two steps:

1. Identify the equivalence classes
2. Design test cases

STEP 1: IDENTIFY EQUIVALENCE CLASSES

Take each input condition described in the specification and derive at least two equivalence classes for it. One class represents the set of cases which satisfy the condition (the valid class) and one represents cases which do not (the invalid class). Following are some general guidelines for identifying equivalence classes:

a) If the requirements state that a numeric value is input to the system and must be within a range of values, identify one valid class (inputs within the valid range) and two invalid equivalence classes (inputs which are too low, and inputs which are too high). For example, if an item in inventory (a numeric field) can have a quantity of +1 to +999, identify the following classes:

1. One valid class: QTY is greater than or equal to +1 and less than or equal to +999, written as (+1 <= QTY <= +999)
2. One invalid class: QTY is less than 1, written as (QTY < 1), i.e. 0, -1, -2, and so on
3. One invalid class: QTY is greater than 999, written as (QTY > 999), i.e. 1000, 1001, 1002, and so on

Invalid class: 0, -1, -2, -3, -4, ...
Valid class: 1, 2, 3, 4, 5, ... up to 999
Invalid class: 1000, 1001, 1002, 1003, 1004, ...

b) If the requirements state that the number of items input by the system at some point must lie within a certain range, specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few inputs and one invalid class where there are too many inputs. For example, if the specification states that a maximum of 4 purchase orders can be registered against any one product, the equivalence classes are:

- The valid class: the number of purchase orders is greater than or equal to 1 and less than or equal to 4, written as (1 <= no. of purchase orders <= 4)
- The invalid class (no. of purchase orders > 4)
- The invalid class (no. of purchase orders < 1)

c) If the requirements state that a particular input item must match one of a set of values and each case will be dealt with the same way, identify a valid class for values in the set and one invalid class representing values outside of the set.

For example, say the code accepts between 4 and 24 inputs, each a 3-digit integer:

- One partition: number of inputs
- Classes: x < 4, 4 <= x <= 24, 24 < x
- Chosen values: 3, 4, 5, 14, 23, 24, 25
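The inventory QTY example from guideline (a) can be sketched as a quick check. The validator below is hypothetical, written only to show one test value per equivalence class plus the boundary values of the range:

```python
def quantity_is_valid(qty):
    """Hypothetical validator for the inventory QTY field:
    valid when +1 <= QTY <= +999."""
    return 1 <= qty <= 999

# One representative value per equivalence class is enough:
assert not quantity_is_valid(0)      # invalid class: QTY < 1
assert quantity_is_valid(500)        # valid class: 1 <= QTY <= 999
assert not quantity_is_valid(1000)   # invalid class: QTY > 999

# Boundary value analysis adds the edges of the range and their neighbors:
boundaries = [0, 1, 2, 998, 999, 1000]
results = [quantity_is_valid(b) for b in boundaries]
print(results)  # [False, True, True, True, True, False]
```

Notice how the boundary values straddle each class border, catching off-by-one mistakes that a single mid-range value would miss.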


Test Design Techniques:


The purpose of test design techniques is to identify test conditions and test scenarios through which effective and efficient test cases can be written. Using test design techniques is a better approach than picking test cases out of the air. Test design techniques help in achieving high test coverage. In this post, we will discuss the following:

1. Black Box Test Design Techniques

- Specification Based
- Experience Based

2. White-box or Structural Test Design Techniques

1- Black-box testing techniques


These include specification-based and experience-based techniques. They use external descriptions of the software, including specifications, requirements, and design, to derive test cases. These tests can be functional or non-functional, though usually functional. The tester does not need any knowledge of the internal structure or code of the software under test.

Specification-based techniques:

- Equivalence partitioning (discussed earlier)
- Boundary value analysis
- Use case testing
- Decision tables
- Cause-effect graph
- State transition testing
- Classification tree method
- Pair-wise testing

From ISTQB Syllabus: Common features of specification-based techniques:

Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components. From these models test cases can be derived systematically.

Experience-based techniques:

- Error Guessing
- Exploratory Testing

The unscripted testing approaches below cover these in detail.

Unscripted Testing Approaches


Error Guessing

Why can one tester find more errors than another tester in the same piece of software? More often than not this comes down to a technique called Error Guessing. To be successful at Error Guessing, a certain level of knowledge and experience is required. A tester can then make an educated guess at where potential problems may arise. This could be based on the tester's experience with a previous iteration of the software, or just a level of knowledge in that area of technology. This test case design technique can be very effective at pinpointing potential problem areas in software. It is often used by creating a list of potential problem areas/scenarios, then producing a set of test cases from it. This approach can often find errors that would otherwise be missed by a more structured testing approach. As an example of how to use the Error Guessing method, imagine you had a software program that accepted a ten-digit customer code and was designed to accept only numerical data.

Here are some example test case ideas that could be considered as Error Guessing:

1. Input of a blank entry
2. Input of greater than ten digits
3. Input of a mixture of numbers and letters
4. Input of identical customer codes

What we are effectively trying to do when designing Error Guessing test cases is to think about what could have been missed during the software design. This testing approach should only be used to complement an existing formal test method, and should not be used on its own, as it cannot be considered a complete form of testing software.
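A minimal sketch of the ten-digit customer code example, with a hypothetical validator and the first three error-guessing ideas expressed as checks (idea 4, duplicate codes, would need stored state and is omitted here):

```python
def accept_customer_code(code):
    """Hypothetical validator: exactly ten characters, all numeric."""
    return len(code) == 10 and code.isdigit()

# Error-guessing test ideas from the list above:
assert not accept_customer_code("")             # 1. blank entry
assert not accept_customer_code("12345678901")  # 2. more than ten digits
assert not accept_customer_code("12345abcde")   # 3. numbers mixed with letters
assert accept_customer_code("1234567890")       # a well-formed code still passes
```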

Exploratory Testing
This type of testing is normally governed by time. It consists of using tests based on a test charter that contains test objectives. It is most effective when there are few or no specifications available. It should only really be used to assist with, or complement, a more formal approach. It can basically ensure that major functionality is working as expected without fully testing it.

Ad-hoc Testing
This type of testing is considered to be the most informal and, by many, the least effective. Ad-hoc testing is simply making up the tests as you go along. Often, it is used when there is only a very small amount of time to test something. A common mistake with ad-hoc testing is not documenting the tests performed and the test results. Even if this information is included, more often than not additional information is not logged, such as software versions, dates, test environment details, etc. Ad-hoc testing should only be used as a last resort, but if careful consideration is given to its usage then it can prove to be beneficial. If you have a very small window in which to test something, consider the following:

1. Take some time to think about what you want to achieve
2. Prioritize functional areas to test if under a strict amount of testing time
3. Allocate time to each functional area when you want to test the whole item
4. Log as much detail as possible about the item under test and its environment
5. Log as much as possible about the tests and the results

Random Testing
A tester normally selects test input data from what is termed an input domain in a structured manner. Random Testing is simply when the tester selects data from the input domain randomly. For random testing to be effective, there are some important open questions to be considered:

1. Is random data sufficient to prove the module meets its specification when tested?
2. Should random data only come from within the input domain?
3. How many values should be tested?

As you can tell, there is little structure involved in Random Testing. To avoid dealing with the above questions, a more structured black-box test design could be implemented instead. However, using a random approach could save valuable time and resources if used in the right circumstances. There has been much debate over the effectiveness of random testing techniques compared with the more structured techniques. Most experts agree that using random test data provides little chance of producing an effective test. There are many tools available today that are capable of selecting random test data from a specified data value range. This approach is especially useful when it comes to tests at the system level. You often find in the real world that Random Testing is used in association with other structured techniques to provide a compromise between targeted testing and testing everything.

From ISTQB* Syllabus: * International Software Testing Qualifications Board


Common features of experience-based techniques:

- The knowledge and experience of people are used to derive the test cases.
- Knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment is used.
- Knowledge about likely defects and their distribution is used.

White-box techniques
Also referred to as structure-based techniques, these are based on the internal structure of the component. The tester must have knowledge of the internal structure or code of the software under test. Structural or structure-based techniques include:

- Statement testing
- Condition testing
- LCSAJ (loop testing)
- Path testing
- Decision testing / branch testing

From ISTQB Syllabus: Common features of structure-based techniques:

Information about how the software is constructed is used to derive the test cases, for example, code and design. The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage.

Art of Test Case Writing


Objective and Importance of a Test Case

The basic objective of writing test cases is to ensure complete test coverage of the application. The most extensive effort in preparing to test software is writing test cases. Documented test cases:

- Give better reliability in estimating the test effort
- Improve productivity during test execution by reducing the understanding time during execution

Writing effective test cases is a skill, and it can be achieved through experience and in-depth study of the application for which the test cases are being written. Documenting the test cases prior to test execution ensures that the tester does the homework and is prepared for the attack on the Application Under Test. Breaking down the test requirements into test scenarios and test cases helps testers avoid missing certain test conditions.

What is a Test Case?

It is the smallest unit of testing. A test case is a detailed procedure that fully tests a feature or an aspect of a feature. Whereas the test plan describes what to test, a test case describes how to perform a particular test. A test case has components that describe an input, an action or event, and an expected response, to determine whether a feature of an application is working correctly. Test cases must be written by a team member who thoroughly understands the function being tested.

Elements of a Test Case

Every test case must have the following details (the anatomy of a test case):

- Test Case ID
- Requirement # / Section
- Objective: what is to be verified?
- Assumptions & Prerequisites
- Steps to be executed
- Test data (if any): variables and their values
- Expected result
- Status: Pass or Fail, with details of the Defect ID and proofs (output files, screenshots - optional)
- Comments

Any CMMi company would have defined templates and standards to be adhered to while writing test cases.
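The anatomy of a test case can be captured in a simple record type; the field names below are illustrative only, not a formal CMMi template:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One record per test case, mirroring the elements listed above.
    Field names are illustrative, not a formal standard."""
    case_id: str
    requirement: str
    objective: str
    prerequisites: list
    steps: list
    test_data: dict
    expected_result: str
    status: str = "Not Run"   # later set to Pass/Fail with defect details
    comments: str = ""

login_case = TestCase(
    case_id="TC-001",
    requirement="REQ-4.2",
    objective="Verify login with a valid user",
    prerequisites=["User account exists"],
    steps=["Open login page", "Enter credentials", "Click OK"],
    test_data={"user": "demo", "password": "secret"},
    expected_result="The application displays the account summary page",
)
print(login_case.case_id, login_case.status)  # TC-001 Not Run
```

Keeping test cases as structured records rather than free text makes it easy to report counts of Pass/Fail per requirement later.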

Language to be used in Test Cases:


1. Use simple and easy-to-understand language.
2. Use the active voice while writing test cases. For example:
- Click on the OK button
- Enter the data in screen1
- Choose option1
- Navigate to the Account Summary page
3. Use words like Verify / Validate to start any sentence in the test case description (especially for checking the GUI). For example:
- Validate the fields available in the _________ screen/tab.
4. Use words like is/are and the present tense for expected results. For example:
- The application displays the account information screen
- An error message is displayed on entering special characters

Fault, Error and Failure:


Fault: a condition that causes the software to fail to perform its required function.
Error: the difference between the actual output and the expected output.
Failure: the inability of a system or component to perform a required function according to its specification.

IEEE Definitions

Failure: external behavior is incorrect.
Fault: a discrepancy in the code that causes a failure.
Error: the human mistake that caused the fault.

Note:

Error is the terminology of the developer; Bug is the terminology of the tester.
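The IEEE distinction can be made concrete with a toy example: a human error while coding introduces a fault in the code, and executing that fault produces a failure, i.e. incorrect external behavior (the function below is invented for this illustration):

```python
def average(numbers):
    """A human error while coding left a fault here: the divisor
    should be len(numbers), not len(numbers) + 1."""
    return sum(numbers) / (len(numbers) + 1)   # fault (defect in the code)

# Executing the faulty code produces a failure: wrong external behavior.
actual = average([2, 4, 6])      # returns 3.0, but 4.0 was expected
expected = 4.0
assert actual != expected        # the failure is the observable deviation
```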


V-model is the basis of structured testing


You will find that this is a great model!


The left side shows the classic software life cycle, and the right side shows the verification and validation for each phase.

Analyze User requirements


End users express their wish for a solution to one or more problems they have. In testing, you have to start preparing your user tests at this moment! You should hold test preparation sessions with your acceptance testers. Ask them what cases they want to test. It might help you to find good test cases if you interview end users about the everyday cases they work on. Ask them about the difficulties they meet in their everyday work now. Give feedback about the results of this preparation (hand over the list of real-life cases and the questions) to the analyst team. Or even better, invite the analyst team to the test preparation sessions. They will learn a lot!

System requirements

One or more analysts interview end users and other parties to find out what is really wanted. They write down what they found out, and usually this is reviewed by the development/technical team, end users and third parties. In testing you can start now by breaking the analyses down into 'features to test'. One 'feature to test' can have only two answers: 'pass' or 'fail'. One analysis document will have a number of features to test. Later this will be extremely useful in your quality reporting! Look for inconsistencies and things you don't understand in the analysis documents. There's a good chance that if you don't understand it, neither will the developers. Feed your questions and remarks back to the analyst team. This is a second review delivered by testing, in order to find bugs as early as possible!

Let's discuss the left side of the V-model:

- Global and detailed design: development translates the analysis documents into a technical design.
- Code / Build: developers program and build the application.

Note: in the classic waterfall software life cycle, testing would come at the end of the life cycle. The V-model is a little different: we have already added some testing review to it.

The right side shows the different testing levels:

- Component & component integration testing: these are the tests development performs to make sure that all the issues of the technical and functional analysis are implemented properly.
- Component testing (unit testing): every time a developer finishes a part of the application, he should test it to see whether it works properly.
- Component integration testing: once a set of application parts is finished, a member of the development team should test to verify whether the different parts do what they have to do. Once these tests pass successfully, system testing can start.
- System and system integration testing: in this testing level we check whether the features to test, distilled from the analysis documents, are realized properly. Best results will be achieved when these tests are performed by professional testers.
- System testing: in this testing level each part (use case, screen description) is tested separately.
- System integration testing: different parts of the application are now tested together to examine the quality of the application. This is an important (but sometimes difficult) step. Typical things to test: navigation between different screens, background processes started in one screen giving a certain output (a PDF, a database update, consistency in the GUI, ...). System integration testing also involves testing the interfaces with other systems. E.g., if you have a web shop, you will probably have to test whether the integrated online payment service works. These interface tests are usually not easy to realize, because you will have to make arrangements with parties outside the project group.

- Acceptance testing: here real users (the people who will have to work with it) validate whether this application is what they really wanted. This comic explains why end users need to accept the application:

This is what the client actually needs :-(

During the project, a lot of interpretation has to be done. The analyst team has to translate the wishes of the customer into text. Development has to translate that into program code. Testers have to interpret the analysis to make the features-to-test list. Tell somebody a phrase. Make him tell this phrase to another person. And this person to another one... Do this 20 times. You'll be surprised how much the phrase has changed! This is exactly the same phenomenon you see in software development! Let the end users test the application with the real cases you listed in the test preparation sessions. Ask them to use real-life cases! And, instead of getting angry, listen when they tell you that the application is not doing what it should do. They are the people who will suffer the application's shortcomings for the next couple of years. They are your customer!

V Model to W Model | W Model in SDLC Simplified


We have already discussed that the V-model is the basis of structured testing. However, there are a few problems with the V-model. The V-model represents a one-to-one relationship between the documents on the left hand side and the test activities on the right. This is not always correct: system testing depends not only on the functional requirements but also on the technical design and architecture. A couple of testing activities are not explained in the V-model at all. This is a major omission, and the V-model does not support the broader view of testing as a continuous, major activity throughout the software development lifecycle. Paul Herzlich introduced the W-model. The W-model covers those testing activities which are skipped in the V-model, and illustrates that testing starts from day one of project initiation. In the picture below, the 1st V shows all the phases of the SDLC and the 2nd V validates each phase. In the 1st V, every activity is shadowed by a test activity. The purpose of the test activity is specifically to determine whether the objectives of that activity have been met and the deliverable meets its requirements. The W-model presents a standard development lifecycle with every development stage mirrored by a test activity. On the left hand side, typically, the deliverable of a development activity (for example, write requirements) is accompanied by a test activity ('test the requirements'), and so on.

Fig 1: W Model

Fig 2: Each phase is verified/validated. Dotted arrow shows that every phase in brown is validated/tested through every phase in sky blue.

Now, in the above figure,

Point 1 refers to building the test plan and test strategy.
Point 2 refers to scenario identification.
Points 3 and 4 refer to test case preparation from the specification document and the design documents.
Point 5 refers to reviewing the test cases and updating them as per the review comments.

The five points above cover static testing. Point 6 refers to the various testing methodologies (unit/integration testing, path testing, equivalence partitioning, boundary value analysis, specification-based testing, security testing, usability testing, performance testing). After this come the regression test cycles and then user acceptance testing.

Conclusion: the V model shows only the dynamic test cycles, whereas the W model gives a broader view of testing. With the W model, the connection between the various test stages and the basis for each test is clear, which is not the case with the V model.
A fuller comparison of the W model with other SDLC models is available in the accompanying PDF document.

The Testing Mindset


A professional tester approaches a product with the mindset that the product is already broken: it has bugs, and it is the tester's job to find them. Testers assume the application under test is inherently defective, and their job is to bring those defects to light. This approach is required in testing. Designers and developers approach software with an optimism based on the assumption that the changes they make are the correct solution to a particular problem. But they are just that: assumptions. Until proven, they are no more correct than guesses. Developers often overlook fundamental ambiguities in specification documents in order to complete the project, or they fail to recognize them when they see them. Those ambiguities are then built into the code and become bugs when compared against the end user's needs. By taking a skeptical approach, the tester offers a balance. A good professional tester:

Takes nothing at face value.
Always asks the question "why?".
Seeks to drive out certainty where there is none.
Seeks to illuminate the darker parts of the project with the light of inquiry.

Sometimes this attitude can cause friction with the development team. But developers can be testers too! If they can accept and adopt this state of mind for a certain portion of the project, they can deliver excellent quality and reduce the cost of the project. Recognizing the need for this testing mindset is the first step towards a successful test approach and strategy.

Concept of Complete Testing | Exhaustive testing is impossible


It is not unusual to find people making claims such as "I have exhaustively tested the program." Complete, or exhaustive, testing means there are no undiscovered faults at the end of the test phase: all problems must be known by the end of complete testing. For most systems, complete testing is nearly impossible, for the following reasons:

- The domain of possible inputs of a program is too large to be completely used in testing a system. There are both valid and invalid inputs, and the program may have a large number of states. There may also be timing constraints on the inputs: an input may be valid at a certain time and invalid at other times. An input value that is valid but not properly timed is called an inopportune input.
- The design issues may be too complex to test completely. The design may include implicit design decisions and assumptions; for example, a programmer may use a global variable or a static variable to control program execution.
- It may not be possible to create all possible execution environments of the system. This becomes more significant when the behaviour of the software system depends on the real, outside world, such as weather, temperature, altitude, pressure, and so on.

[From the book Software Testing and Quality Assurance: Theory and Practice by Kshirasagar Naik and Priyadarshi Tripathy]
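To see why the input domain alone defeats exhaustive testing, a rough back-of-the-envelope calculation helps. The field sizes below are hypothetical, chosen only to show the scale of the problem:

```python
# Hypothetical example: how large is the input domain of a single
# 10-character field restricted to lowercase letters?
one_field = 26 ** 10
print(one_field)            # 141167095653376 possible values

# Two such fields tested in every combination:
two_fields = one_field ** 2
print(two_fields)

# Even at 1000 test executions per second, exhausting one field alone
# would take millennia:
seconds = one_field / 1000
years = seconds / (60 * 60 * 24 * 365)
print(round(years))         # 4476 years
```

Real systems have far more inputs than one short text field, which is why the test effort must be planned around sampling the domain rather than covering it.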


Testing Limitations

You cannot test a program completely. We can only test against the system requirements:
- We may not detect errors in the requirements themselves.
- Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.

Exhaustive (total) testing is impossible in the present scenario. Time and budget constraints normally require very careful planning of the testing effort, and a compromise between thoroughness and budget. Test results are used to make business decisions about release dates.

- Even if you do find the last bug, you'll never know it.
- You will run out of time before you run out of test cases.
- You cannot test every path.
- You cannot test every valid input.
- You cannot test every invalid input.
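Because exhaustive testing is impossible, testers fall back on techniques such as equivalence partitioning and boundary value analysis to pick a handful of representative values. A minimal sketch, using a hypothetical numeric field whose valid range is 10 to 100:

```python
def boundary_values(low, high):
    """Classic boundary value picks: just outside, on, and just inside
    each boundary, plus a nominal mid-range value."""
    return [low - 1, low, low + 1, (low + high) // 2, high - 1, high, high + 1]

def is_valid(value, low=10, high=100):
    """The rule under test (hypothetical): value must lie in the stated range."""
    return low <= value <= high

# Seven values stand in for the whole input domain:
for v in boundary_values(10, 100):
    print(v, "->", "valid" if is_valid(v) else "invalid")
# 9 and 101 fall in the invalid partitions; 10, 11, 55, 99 and 100 are valid.
```

The point is not the tiny validator but the selection: two invalid partitions and one valid partition are each represented, with extra attention at the boundaries where off-by-one defects cluster.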

See also "The Impossibility of Complete Testing" by Dr. Cem Kaner (PDF document).

How and When Testing Starts


For the betterment, reliability and performance of an information system, it is always better to involve the testing team right from the beginning of the requirement analysis phase. The active involvement of the testing team gives the testers a clear vision of the functionality of the system, from which we can expect a better-quality, error-free product.

Once the development team lead analyzes the requirements, he prepares the System Requirement Specification and the Requirement Traceability Matrix. He then schedules a meeting with the testing team (the test lead and the testers chosen for that project) and explains the project, the overall schedule of modules, the deliverables and the versions. The involvement of the testing team starts here. The test lead prepares the test strategy and test plan, which is the schedule for the entire testing process; in it he plans when each phase of testing (unit testing, integration testing, system testing, user acceptance testing) will take place. Organizations generally follow the V model for their development and testing.

After analyzing the requirements, the development team prepares the System Requirement Specification, Requirement Traceability Matrix, Software Project Plan, Software Configuration Management Plan, Software Measurements/Metrics Plan and Software Quality Assurance Plan, and moves to the next phase of the software life cycle, i.e., design. Here they prepare some important documents: the Detailed Design Document, the updated Requirement Traceability Matrix, the Unit Test Cases document (prepared by the developers if there are no separate white-box testers), the Integration Test Cases document, the System Test Plan document, and review and SQA audit reports for all test cases.

After preparation of the test plan, the test lead distributes the work to the individual testers (white-box and black-box). The testers' work starts at this stage: based on the Software Requirement Specification / Functional Requirement Document, they prepare test cases using a standard template or an automation tool, then send them to the test lead for review. Once the test lead approves them, the testers prepare the test environment (test bed), which is used specifically for testing and typically replicates the client-side system setup. Now we are ready for testing.

While the testing team works on the test strategy, test plan and test cases, the development team works in parallel on their individual modules. Three or four days before the first release, they give an interim release to the testing team, who deploy the software on a test machine, and the actual testing starts. The testing team handles configuration management of builds. The testing team then tests against the prepared test cases and reports bugs in a bug report template or an automation tool (depending on the organization), tracking each bug by changing its status at every stage. Once Cycle 1 testing is done, they submit the bug report to the test lead, who discusses the issues with the development team lead; the developers then work on those bugs and fix them. After all the bugs are fixed, the next build is released. Cycle 2 testing starts at this stage: we run all the test cases again and check whether all the bugs reported in Cycle 1 have been fixed. Here we also do regression testing, i.e., we check whether the changes in the code have any side effects on the already-tested code. We repeat the same process until the delivery date. Generally we document the information for four cycles in the test case document.
At the time of release there should not be any high-severity or high-priority bugs. Of course the product may still have some minor bugs, which will be fixed in the next iteration or release (generally called deferred bugs). At the end of delivery, the test lead and the individual testers prepare reports. Sometimes the testers also participate in code reviews, which is static testing: they check the code against a checklist of historical logical errors, indentation and proper commenting. The testing team is also responsible for keeping track of change management, in order to deliver a high-quality, bug-free product.
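The process above says testers track each bug "by changing its status at every stage", but does not name the stages. As an illustration only, a typical (assumed) bug-status workflow can be modelled and enforced like this:

```python
from enum import Enum

class Status(Enum):
    NEW = "new"
    ASSIGNED = "assigned"
    FIXED = "fixed"
    RETESTED = "retested"
    CLOSED = "closed"
    REOPENED = "reopened"
    DEFERRED = "deferred"

# Hypothetical workflow: which status changes the process allows.
ALLOWED = {
    Status.NEW:      {Status.ASSIGNED, Status.DEFERRED},
    Status.ASSIGNED: {Status.FIXED, Status.DEFERRED},
    Status.FIXED:    {Status.RETESTED},
    Status.RETESTED: {Status.CLOSED, Status.REOPENED},
    Status.REOPENED: {Status.ASSIGNED},
    Status.DEFERRED: {Status.ASSIGNED},
    Status.CLOSED:   set(),
}

def move(current, target):
    """Reject status changes the workflow does not allow."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target

# A bug travelling the happy path from report to closure:
s = Status.NEW
for nxt in (Status.ASSIGNED, Status.FIXED, Status.RETESTED, Status.CLOSED):
    s = move(s, nxt)
print(s.name)  # CLOSED
```

Real bug trackers ship their own (configurable) workflows; the value of writing it down is that illegal shortcuts, such as closing a bug that was never retested, are rejected rather than silently recorded.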

Requirement Specification document Review Guidelines and Checklists


To prepare effective test cases, testers and QA engineers should review the software specification documents carefully and raise as many queries as they can. The purpose of a Software Requirement Specification review is to uncover problems hidden within the specification document; this is part of defect prevention, since such problems invariably lead to incorrect implementation. The following guidelines for a detailed specification review are suggested:

1. Always review the specification document with the entire testing team, and discuss each point with team members.
2. While reviewing the specification document, look carefully for vague or fuzzy terms like "ordinarily", "most", "mostly", "some", "sometimes", "often" and "usually", and ask for clarification.
3. It often happens that list values are given but not completed. Look for terms like "etc.", "and so forth", "and so on", "such as", and be sure all the items and list values are understood.
4. When you are doing a spec review, make sure stated ranges don't contain unstated or implicit assumptions. For example: "The range of the Number field is from 10 to 100." But is it decimal? Ask for clarification.
5. Also take care with vague terms like "skipped", "eliminated", "handled", "rejected" and "processed"; these can be interpreted in many ways.
6. Watch for unclear pronouns, as in "The ABC module communicates with the XYZ module and its value is changed to 1." Whose value, that of the ABC module or of the XYZ module?
7. Whenever a scenario or condition is defined in a paragraph, draw a picture of it in order to understand it and work out the expected result. If a paragraph is too long, break it into multiple steps; it will be easier to understand.
8. If the specification describes a scenario that involves calculations, work through the calculations with at least two examples.

9. If any point of the specs is not clear, get your queries resolved by the business analyst or product manager as soon as possible.
10. If a mentioned scenario is complex, try to break it into points.
11. If there is any open issue in the specs (under discussion, sometimes to be resolved by the client), keep track of it.
12. Always go through the revision history carefully.
13. After the specs are signed off and finalized, if any change comes, identify the impacted areas.
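Guideline 4's example ("the range of the Number field is from 10 to 100 - but is it decimal?") shows why such queries matter: the answer changes the test oracle. Both validators below are hypothetical readings of that one sentence:

```python
def valid_integers_only(text):
    """Reading 1 (assumed): the field accepts whole numbers only."""
    try:
        return 10 <= int(text) <= 100
    except ValueError:
        return False

def valid_decimals_allowed(text):
    """Reading 2 (assumed): the field accepts decimal values too."""
    try:
        return 10 <= float(text) <= 100
    except ValueError:
        return False

# The input "10.5" passes one reading and fails the other:
print(valid_integers_only("10.5"))    # False
print(valid_decimals_allowed("10.5")) # True
```

Until the ambiguity is resolved, a tester cannot say whether "10.5" should be accepted or rejected, so the expected result of the test case cannot be written down.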

Role of a tester in defect prevention and defect detection


Some testers (especially beginners) are often confused by the question "What is the role of a tester in defect prevention and defect detection?". In this post we will discuss the tester's role in these phases: how testers can prevent more defects in the defect prevention phase, and how they can detect more bugs in the defect detection phase.

Defect prevention: In defect prevention, developers play an important role, with activities like code reviews, static code analysis, unit testing, etc. Testers are also involved in defect prevention, by reviewing specification documents. Studying a specification document is an art: while studying it, testers raise various queries, and it often happens that the requirement document gets changed or updated as a result. Developers often overlook fundamental ambiguities in specification documents in order to complete the project, or fail to recognize them when they see them; those ambiguities are then built into the code and become bugs when compared against the end user's needs. This is how testers help in defect prevention. (See the specification review guidelines above.)

Defect detection: In defect detection, the tester's role includes implementing the most appropriate testing approach/strategy, preparing and executing effective test cases, and conducting the necessary tests, such as exploratory testing, functional testing, etc. To increase the defect detection rate, the tester should have a complete understanding of the application. Ad hoc/exploratory testing should go in parallel with test case execution, as a lot of bugs can be found that way.


Traceability Matrix from Software Testing perspective


The traceability matrix is used across all software development life cycle phases:
1. Risk analysis phase
2. Requirements analysis and specification phase
3. Design analysis and specification phase
4. Source code analysis, unit testing and integration testing phase
5. Validation: system testing, functional testing phase

In this topic we will discuss:

What is a traceability matrix from the software testing perspective (point 5 above)?
Types of traceability matrix.
Disadvantages of not using a traceability matrix.
Benefits of using a traceability matrix in testing.
A step-by-step process for creating an effective traceability matrix from requirements.
Sample traceability matrix formats, from a basic version to an advanced one.

In simple words, a requirements traceability matrix is a document that traces and maps user requirements (requirement IDs from the requirement specification document) to test case IDs. Its purpose is to make sure that all the requirements are covered by test cases, so that no functionality can be missed during testing. This document is also prepared to satisfy clients that the coverage is complete end to end; it consists of the requirement/baseline document reference number, the test case/condition, and the defect/bug ID. Using this document, a person can trace a requirement from its defect ID.

Note: we can turn this into a test case coverage checklist document by adding a few more columns. We will discuss this in later posts.

Types of traceability matrix:
- Forward traceability: mapping of requirements to test cases.
- Backward traceability: mapping of test cases to requirements.
- Bi-directional traceability: a good traceability matrix has references from test cases to the baseline documentation and vice versa.

Why is bi-directional traceability required? Bi-directional traceability contains both forward and backward traceability. Through the backward traceability matrix, we can see which requirements each test case maps to. This helps us identify test cases that do not trace to any coverage item, in which case the test case is not required and should be removed (or perhaps a specification item, such as a requirement or two, should be added!). Backward traceability is also very helpful if you want to identify how many requirements a particular test case covers. Through forward traceability we can check which test cases cover each requirement, and whether every requirement is covered by test cases at all. The forward traceability matrix ensures we are building the right product; the backward traceability matrix ensures we are building the product right. The traceability matrix answers the following questions for any software project:

How is it feasible to ensure, for each phase of the SDLC, that I have correctly accounted for all the customer's needs? How can I certify that the final software product meets the customer's needs? Only with a traceability matrix can we make sure the requirements are captured in the test cases.

Disadvantages of not using a traceability matrix (some possible, and observed, impacts):

No traceability, or incomplete traceability, results in:
1. Poor or unknown test coverage, and more defects found in production.
2. Bugs missed in earlier test cycles that surface in later ones, followed by a lot of discussions and arguments with other teams and managers before release.
3. Difficult project planning and tracking, and misunderstandings between different teams over project dependencies, delays, etc.

Benefits of using a traceability matrix:

- Makes it evident to the client that the software is being developed as per the requirements.
- Ensures that all requirements are included in the test cases.
- Ensures that developers are not creating features that no one has requested.
- Makes it easy to identify missing functionality.
- If there is a change request for a requirement, we can easily find out which test cases need to be updated.
- Without it, the completed system may have extra functionality that was never specified in the design specification, resulting in wasted manpower, time and effort.

Steps to create a traceability matrix:
1. Use Excel to create the traceability matrix.
2. Define the following columns: base specification/requirement ID (if any), requirement ID, requirement description, TC 001, TC 002, TC 003, and so on.
3. Identify all the testable requirements at a granular level from the requirement documents (SRS, FRS, and so on). Typical requirements to capture are: use cases (with all their flows), error messages, business rules, functional rules.
4. Identify all the test scenarios and test flows.
5. Map requirement IDs to the test cases. Assume (as per the table below) that test case TC 001 is one flow/scenario covering requirements SR-1.1 and SR-1.2, so mark "x" for those requirements. From the table you can then conclude: requirement SR-1.1 is covered by TC 001; SR-1.2 is covered by TC 001; SR-1.5 is covered by TC 001 and TC 003. (Now it is easy to identify which test cases need to be updated if there is a change request.)

TC 001 covers SR-1.1 and SR-1.2 (so we can easily identify which requirements each test case covers); TC 002 covers SR-1.3; and so on.

Requirement ID | Requirement description                           | TC 001 | TC 002 | TC 003
SR-1.1         | User should be able to do this                    |   x    |        |
SR-1.2         | User should be able to do that                    |   x    |        |
SR-1.3         | On clicking this, following message should appear |        |   x    |
SR-1.4         |                                                   |        |        |
SR-1.5         |                                                   |   x    |        |   x
SR-1.6         |                                                   |        |        |
SR-1.7         |                                                   |        |        |

This is a very basic traceability matrix format. You can make it more effective by adding further columns: ID, associated ID, technical assumption(s) and/or customer need(s), functional requirement, status, architectural/design document, technical specification, system component(s), software module(s), test case number, tested in, implemented in, verification, and additional comments.
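The same matrix can also be kept as plain data, from which forward and backward traceability fall out mechanically. This is an illustrative sketch, not a prescribed tool; the requirement and test case IDs mirror the sample table above, and the helper functions are invented for the example:

```python
# Forward traceability: requirement -> test cases that cover it.
coverage = {
    "SR-1.1": ["TC 001"],
    "SR-1.2": ["TC 001"],
    "SR-1.3": ["TC 002"],
    "SR-1.5": ["TC 001", "TC 003"],
}

def backward(coverage):
    """Backward traceability: invert the map to test case -> requirements."""
    result = {}
    for req, tcs in coverage.items():
        for tc in tcs:
            result.setdefault(tc, []).append(req)
    return result

def uncovered(coverage, all_requirements):
    """Requirements with no test case: gaps in forward traceability."""
    return [r for r in all_requirements if not coverage.get(r)]

print(backward(coverage)["TC 001"])               # ['SR-1.1', 'SR-1.2', 'SR-1.5']
print(uncovered(coverage, ["SR-1.1", "SR-1.4"]))  # ['SR-1.4']
```

The two directions answer the two questions from the text: `coverage` shows which test cases to update when a requirement changes, and `backward` exposes test cases that trace to no requirement at all.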

A sample Excel worksheet is available as an accompanying download.

Functional Requirements and Use Cases


Functional Requirements
Functional requirements capture the intended behavior of the system. This behavior may be expressed as services, tasks or functions the system is required to perform. In product development, it is useful to distinguish between the baseline functionality necessary for any system to compete in that product domain, and features that differentiate the system from competitors' products, and from variants in your company's own product line/family. Features may be additional functionality, or may differ from the basic functionality along some quality attribute (such as performance or memory utilization).

One strategy for quickly penetrating a market is to produce the core, or stripped-down, basic product, adding features to variants of the product to be released shortly thereafter. This release strategy is obviously also beneficial in information systems development, staging core functionality for early releases and adding features over the course of several subsequent releases.

In many industries, companies produce product lines with different cost/feature variations per product in the line, and product families that include a number of product lines targeted at somewhat different markets or usage situations. What makes these product lines part of a family are some common elements of functionality and identity. A platform-based development approach leverages this commonality, utilizing a set of reusable assets across the family.

These strategies have important implications for software architecture. In particular, it is not just the functional requirements of the first product or release that must be supported by the architecture. The functional requirements of early (nearly concurrent) releases need to be explicitly taken into account. Later releases are accommodated through architectural qualities such as extensibility and flexibility; these are expressed as non-functional requirements.
Use cases have quickly become a widespread practice for capturing functional requirements. This is especially true in the object-oriented community where they originated, but their applicability is not limited to object-oriented systems.

Use Cases
A use case defines a goal-oriented set of interactions between external actors and the system under consideration. Actors are parties outside the system that interact with the system (UML 1999, pp. 2.113-2.123). An actor may be a class of users, roles users can play, or other systems. Cockburn (1997) distinguishes between primary and secondary actors. A primary actor is one having a goal requiring the assistance of the system. A secondary actor is one from which the system needs assistance.

A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. It describes the sequence of interactions between actors and the system necessary to deliver the service that satisfies the goal. It also includes possible variants of this sequence, e.g., alternative sequences that may also satisfy the goal, as well as sequences that may lead to failure to complete the service because of exceptional behavior, error handling, etc. The system is treated as a "black box", and the interactions with the system, including system responses, are described as perceived from outside the system. Thus, use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.

Generally, use case steps are written in an easy-to-understand structured narrative using the vocabulary of the domain. This is engaging for users, who can easily follow and validate the use cases, and the accessibility encourages users to be actively involved in defining the requirements.

Scenarios
A scenario is an instance of a use case, and represents a single path through the use case. Thus, one may construct a scenario for the main flow through the use case, and other scenarios for each possible variation of flow through the use case (e.g., triggered by options, error conditions, security breaches, etc.). Scenarios may be depicted using sequence diagrams.
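Since a scenario is a single path through a use case, each scenario maps naturally onto one test. The "withdraw cash" use case below, with its main, alternative and exception flows, is invented purely for illustration:

```python
def withdraw(balance, amount, pin_ok):
    """Hypothetical system under test: returns (new_balance, message)."""
    if not pin_ok:
        return balance, "pin rejected"          # exception flow
    if amount > balance:
        return balance, "insufficient funds"    # alternative flow
    return balance - amount, "dispensed"        # main success scenario

# One row per scenario, i.e. per path through the use case:
scenarios = [
    # (name,                  balance, amount, pin_ok, expected message)
    ("main success",          100,     40,     True,   "dispensed"),
    ("alternative: overdraw", 100,     200,    True,   "insufficient funds"),
    ("exception: bad PIN",    100,     40,     False,  "pin rejected"),
]

for name, balance, amount, pin_ok, expected in scenarios:
    _, message = withdraw(balance, amount, pin_ok)
    assert message == expected, name
print("all scenarios pass")
```

Writing one test per scenario keeps the link between the use case and the test suite explicit: when a new variant of the flow is added to the use case, a new row is added to the scenario table.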

Role of Use Cases in Architecting


See the "Functional Requirements and Use Cases" white paper by Ruth Malan and Dana Bredemeyer, for the role of use cases in the architecting process. This paper also contains an updated version of Derek Coleman's use case template (Coleman, 1998) and a full use case bibliography.

Recommended Reading on Use Cases


Booch, G., I. Jacobson and J. Rumbaugh, The Unified Modeling Language User Guide. Addison-Wesley, 1999, pp. 219-241.
Christerson, Magnus, "From Use Cases to Components", Rose Architect, 5/99. http://www.rosearchitect.com/cgi-bin/viewprint.pl
Cockburn, Alistair, "Structuring Use Cases with Goals", Journal of Object-Oriented Programming, Sep-Oct 1997 and Nov-Dec 1997. Also available on http://members.aol.com/acockburn/papers/usecases.htm
Cockburn, Alistair, "Basic Use Case Template", Oct. 1998. Available on http://members.aol.com/acockburn/papers/uctempla.htm
Coleman, Derek, "A Use Case Template: Draft for discussion", Fusion Newsletter, April 1998. http://www.hpl.hp.com/fusion/md_newsletters.html
Constantine, Larry, "What Do Users Want? Engineering usability into software". http://www.foruse.com
Malan, R. and D. Bredemeyer, "Functional Requirements and Use Cases" (functreq.pdf, 39k), June 1999.
Malan, R. and D. Bredemeyer, "Use Case Action Guide" (Use_Case_Template.pdf, 25kb), April 2000.
Pols, Andy, "Use Case Rules of Thumb: Guidelines and lessons learned", Fusion Newsletter, Feb. 1997. (This used to be available at http://www.hpl.hp.com/fusion/md_newsletters.html.)
Sehlhorst, Scott, "The Impact of Change on Use Cases", July 24, 2006. Scott's Tyner Blain blog covers requirements-related topics.
UML Specification. http://www.rational.com/uml/index.jtmpl. We have referenced V1.3 Alpha R5, March 1999 in this paper.
Wiegers, Karl, "Listening to the Customer's Voice". http://www.processimpact.com/articles/usecase.html

Practical interview questions on Software Testing Part 1


1. On what basis do we assign priority and severity to a bug? Give an example of high priority with low severity, and of high severity with low priority.
Priority is usually assigned by the team leader or business analyst, while severity is assigned by the reporter of the bug. For example: high severity - hardware bugs, application crashes; low severity - user-interface bugs; high priority - an error message not appearing on time, calculation bugs; low priority - wrong alignment, etc.

2. What do you mean by reproducing a bug? If a bug is not reproducible, what is the next step?
If you find a defect - for example, you click a button and the corresponding action does not happen - that is a bug. If the developer is unable to observe this behaviour, he will ask us to reproduce the bug. In another scenario, if the client reports a defect in production, we have to reproduce it in the test environment. If the developer cannot reproduce the bug, it is assigned back to the reporter, or a meeting (formal or informal, such as a walkthrough) is arranged in order to reproduce it. Sometimes bugs are inconsistent; in that case we can mark the bug as inconsistent and temporarily close it with the status "working fine now".

3. What is the responsibility of a tester when a bug arrives at the time of testing? Explain.
First check the status of the bug, then check whether the bug is valid or not, then forward it to the team leader and, after confirmation, forward it to the developer concerned. If we cannot reproduce it, it is not reproducible; in that case we do further testing around it, and if it still does not appear we close it.

4. How can we design test cases from requirements? Do the requirements represent the exact functionality of the AUT?
Of course, the requirements should represent the exact functionality of the AUT (application under test). First, analyse the requirements very thoroughly in terms of functionality. Then choose a suitable test-case design technique for writing the test cases - black-box techniques such as specification-based test cases, functional test cases, Equivalence Class Partitioning (ECP), Boundary Value Analysis (BVA), error guessing and cause-effect graphing.
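Two of the techniques above, ECP and BVA, can be illustrated with a small sketch. The `is_valid_age` function and the 18-60 range below are hypothetical stand-ins for a real validation rule under test, not part of any particular application:

```python
# Sketch: deriving test inputs with Equivalence Class Partitioning (ECP)
# and Boundary Value Analysis (BVA) for a hypothetical "age" field that
# must accept values from 18 to 60 inclusive.

def is_valid_age(age):
    """Hypothetical function under test: accepts ages 18-60 inclusive."""
    return 18 <= age <= 60

# ECP: pick one representative value from each equivalence class.
ecp_inputs = {
    "below range (invalid)": 10,
    "in range (valid)": 35,
    "above range (invalid)": 70,
}

# BVA: test at and immediately around each boundary.
bva_inputs = [17, 18, 19, 59, 60, 61]

for label, value in ecp_inputs.items():
    print(f"ECP {label}: {value} -> {is_valid_age(value)}")
for value in bva_inputs:
    print(f"BVA {value} -> {is_valid_age(value)}")
```

ECP keeps the test set small (one value per class), while BVA targets the off-by-one errors that cluster at the edges of a range.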

With these concepts you should design test cases that have the capability of finding defects.

5. How do you launch test cases in Quality Centre (TestDirector), and where are they saved?
You create the test cases in the Test Plan tab and link them to the requirements in the Requirements tab. Once the test cases are ready, you change their status to "ready", go to the Test Lab tab, create a test set, add the test cases to it, and run them from there. For automation, you create a new automated test in the Test Plan tab, launch the tool, create the script and save it; you can then run it from the Test Lab the same way as the manual test cases. The test cases are stored in the Test Plan tab - more precisely, in the Quality Centre database (TestDirector is now referred to as Quality Centre).

6. How is traceability of a bug followed?
Bug traceability can be followed in several ways:
1. Mapping the functional requirement scenarios (FS doc) - test case IDs - failed test cases (bugs).
2. Mapping between requirements (RS doc) - test case IDs - failed test cases.
3. Mapping between the test plan (TP doc) - test case IDs - failed test cases.
4. Mapping between business requirements (BR doc) - test case IDs - failed test cases.
5. Mapping between high-level design (design doc) - test case IDs - failed test cases.
Usually the traceability matrix is a mapping between the client requirements, functional specification, test plan and test cases.

7. What is the difference between a use case, a test case and a test plan?
Use case: prepared by the business analyst in the Functional Requirement Specification (FRS); it describes the steps given by the customer.
Test cases: prepared by the test engineer based on the use cases from the FRS, to check the functionality of an application thoroughly.
Test plan: prepared by the team lead; it states the scope of testing, what to test and what not to test, scheduling, what to test using automation, etc.
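The traceability described in question 6 amounts to a chain of mappings from requirements through test cases to defects. A minimal sketch, with entirely hypothetical requirement, test-case and bug IDs:

```python
# Sketch: a minimal requirements -> test cases -> defects traceability
# matrix. All IDs are hypothetical.

requirement_to_tests = {
    "REQ-01": ["TC-001", "TC-002"],
    "REQ-02": ["TC-003"],
}

test_results = {
    "TC-001": "pass",
    "TC-002": "fail",
    "TC-003": "pass",
}

# Failed test cases map to the defects logged against them.
test_to_defects = {
    "TC-002": ["BUG-101"],
}

def defects_for_requirement(req_id):
    """Trace a requirement through its test cases to any logged defects."""
    defects = []
    for tc in requirement_to_tests.get(req_id, []):
        defects.extend(test_to_defects.get(tc, []))
    return defects

print(defects_for_requirement("REQ-01"))  # -> ['BUG-101']
```

The same lookup answers both directions: which requirements are affected by a defect, and which defects block a requirement.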

Mercury Quality Center Interview Questions


1. What is meant by the Test Lab in Quality Centre?
The Test Lab is the part of Quality Centre where we execute our tests over different cycles, creating a test tree for each one. We add tests to these test trees from the tests placed under the Test Plan in the project; internally, Quality Centre refers to these tests while running them in the Test Lab.

2. Can you map defects directly to requirements (not through test cases) in Quality Centre?
The following method is the one most likely to be used in this case:
- Create your requirements structure.
- Create the test-case structure and the test cases.
- Map the test cases to the application requirements.
- Run the tests and report bugs from your test cases in the Test Lab module.
The database structure in Quality Centre maps test cases to defects only if you have created the bug from a test-case run. It may be possible to update the mapping using some code in the bug script module (from the Customize Project function), but as far as I know it is not possible to map defects directly to requirements.

3. How do you run reports from Quality Centre?
1. Open the Quality Centre project.
2. Display the Requirements module.
3. Choose Analysis > Reports > Standard Requirements Report.

4. Can we upload test cases from an Excel sheet into Quality Centre?
Yes. Go to the Add-ins menu in Quality Centre, find the Excel add-in and install it on your machine. Now open Excel; you will find the new menu option "Export to Quality Centre". The rest of the procedure is self-explanatory.
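As a complement to question 4: test cases bound for the Excel add-in are maintained as tabular data, so it helps to generate or review them as a spreadsheet first. The sketch below writes such a sheet as CSV; the column names are illustrative only, not the add-in's required field mapping:

```python
# Sketch: writing test cases to a CSV sheet for review in Excel before
# uploading with the Quality Centre Excel add-in. Column names are
# hypothetical, not the add-in's mandated mapping.
import csv

test_cases = [
    {"Test Name": "Login_Valid", "Step Name": "Step 1",
     "Description": "Enter a valid user/password and click Login",
     "Expected": "Home page is displayed"},
    {"Test Name": "Login_Invalid", "Step Name": "Step 1",
     "Description": "Enter an invalid password and click Login",
     "Expected": "An error message is displayed"},
]

fieldnames = ["Test Name", "Step Name", "Description", "Expected"]
with open("test_cases.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(test_cases)
```

In the add-in itself you would then map each column to the corresponding Quality Centre test field before the upload.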

5. Can we export a file from Quality Centre to an Excel sheet? If yes, how?
Requirements tab: right-click on the main requirement, click Export and save as a Word, Excel or other template. This saves all the child requirements as well.
Test Plan tab: only individual tests can be exported; no parent-child export is possible. Select a test script, click the Design Steps tab, right-click anywhere on the open window, click Export and save.
Test Lab tab: select a child group, click the Execution Grid if it is not already selected, and right-click anywhere. The default save option is Excel, but it can also be saved in document and other formats, with an "all" or "selected" option.
Defects tab: right-click anywhere on the window, export all or selected defects, and save as an Excel sheet or document.

6. How many tabs are there in Quality Centre? Explain.
There are four tabs:
1. Requirements: to track the customer requirements.
2. Test Plan: to design the test cases and store the test scripts.
3. Test Lab: to execute the test cases and track the results.
4. Defects: to log defects and track them.

7. How do you map requirements to test cases in Quality Centre?
1. In the Requirements tab, select Coverage View.
2. Select a requirement by clicking on a parent, child or grandchild.
3. On the right-hand side (in the Coverage View window) another window appears with two tabs: (a) Tests Coverage and (b) Details. The Tests Coverage tab is selected by default; otherwise click on it.
4. Click the Select Tests button; a new window appears on the right-hand side with a list of all tests. You can select any test case you want to map to your requirement.

8. How is Quality Centre used in a real-time project?
Once the preparation of test cases is complete:
1. Export the test cases into Quality Centre (this takes a total of eight steps).
2. The test cases are loaded into the Test Plan module.
3. Once execution starts, we move the test cases from the Test Plan tab to the Test Lab module.
4. In the Test Lab we execute the test cases and mark each as pass, fail or incomplete. We generate graphs in the Test Lab for the daily report and send it to the onsite team (or wherever it needs to be delivered).
5. If we find any defects, we raise them in the Defects module, attaching a screenshot to each defect.
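The daily-report step in question 8 boils down to counting pass/fail/incomplete statuses across the execution grid. A minimal sketch, with hypothetical test-case IDs and results:

```python
# Sketch: summarising test-lab execution results into counts for a
# daily status report. The execution data here is hypothetical.
from collections import Counter

execution_grid = {
    "TC-001": "pass",
    "TC-002": "fail",
    "TC-003": "incomplete",
    "TC-004": "pass",
}

summary = Counter(execution_grid.values())
total = len(execution_grid)
for status in ("pass", "fail", "incomplete"):
    count = summary.get(status, 0)
    # e.g. "pass       2 (50%)"
    print(f"{status:<10} {count} ({100 * count / total:.0f}%)")
```

Quality Centre draws these graphs for you in the Test Lab, but the underlying numbers are exactly this kind of status tally.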

9. What is the difference between WebInspect and QAInspect?
QAInspect finds and prioritizes security vulnerabilities in an entire web application, or in specific usage scenarios during testing, and presents detailed information and remediation advice for each vulnerability. WebInspect ensures the security of your most critical information by identifying known and unknown vulnerabilities within the web application. With WebInspect, auditors, compliance officers and security experts can perform security assessments on any web-enabled application or web service, including the industry-leading application platforms.

10. How can we add requirements to test cases in Quality Centre?
Use the Add Requirements option. Two kinds of requirements are available in TestDirector:
1. Parent requirements: the titles of the requirements; they cover the high-level functions.
2. Child requirements: the subtitles of the requirements; they cover the low-level functions.
