
XYZ Test strategy

Document Version 1.0



Copyright 2012 Sapient Corporation | Confidential

Table of Contents

Glossary and Abbreviations
1 Project Overview
2 Purpose
3 Scope
4 Test Strategy
    4.1 Smoke Testing
    4.2 Functional Testing
    4.3 Cross Browser Testing
    4.4 Usability Testing
    4.5 Accessibility Testing
    4.6 System Integration Testing (SIT)
    4.7 Performance Testing
    4.8 Database Testing
    4.9 Security Testing
5 MDM Testing Approach
    5.1 Important Aspects of Testing an MDM Application
    5.2 Size and Complexity of the Project
6 Defect Management Process
    6.1 Defect Status Meetings
    6.2 Defect Priority Guidelines
    6.3 Defect Reporting Workflow
    6.4 Different Stages of a Defect
7 Test Environment
8 Test Tools
9 Suspension/Resumption Criteria
10 Test Deliverables
11 Assumptions
12 Communication Approach
    12.1 Status Reporting
    12.2 Defect Review Session
13 Roles and Responsibilities
14 Client Responsibilities
15 References
16 Approvals


Glossary and Abbreviations



No.  Term  Description
1.   MDM   Master Data Management


1 Project Overview
The project provides an internal website to access the distributed data held in various standalone applications and to present it as consolidated information. This is achieved by creating and managing a central repository of data and accessing it through the UI.


2 Purpose
The purpose of this document is to provide a test strategy for XYZ Corporation based on different implementations. Specifically, this document:
- Defines a test strategy for MDM
- Describes the different testing types that would be relevant based on different criteria
- Describes the testing approach
- Defines and identifies roles and responsibilities in the testing process


3 Scope
Sapient is responsible for testing the MDM application, whether it is developed by Sapient itself or by another vendor. The major testing scope items are:
- Testing the customizations and configurations made in the MDM
- Testing the non-customizable, non-configurable requirements based on the project needs
- Integration within and with third party applications
- Testing the non-functional requirements as agreed between Sapient and the client

In general, the scope of testing types includes:
- Smoke Testing
- Functional Testing
- Cross Browser Testing
- Usability Testing
- Accessibility Testing
- Regression Testing
- System Integration Testing
- Localization Testing
- Performance Testing
- Third Party Integration Testing
- Database Testing
- Security Testing


4 Test Strategy
As mentioned in the scope above, the following testing types should be considered while testing an MDM implementation:
- Smoke Testing
- Functional Testing
- Cross Browser Testing
- Usability Testing
- Accessibility Testing
- Regression Testing
- System Integration Testing
- Localization Testing
- Performance Testing
- Third Party Integration Testing
- Database Testing
- Security Testing

4.1 Smoke Testing

Smoke testing is designed to quickly exercise the application and uncover any potential issues that could lead to failure of major functionality. The testing team will have the information on which functionalities are being released in a particular iteration. Only when smoke testing completes successfully does the testing team pick up the rest of the functionality.

Owner: Testing team
Activity: Verify stability of the application build
Environment: QA Environment
Entry Criteria: Build is ready
Exit Criteria: All smoke testing scenarios have passed

Smoke testing approach and activities:
- The smoke testing suite will consist of all high priority scenarios for the requirements delivered in a given iteration
- Before the QA environment is formally accepted for testing by the testing team, the high priority scenarios identified above are executed
- Any build that does not meet the exit criteria for smoke testing will be rejected, and testing will continue on the previous base-lined build until a new successful build is accepted by the testing team
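For illustration, a minimal automated smoke check is sketched below using Selenium WebDriver and JUnit (the tools listed in section 8). The QA URL, element ids and credentials are placeholders, not part of the actual MDM build, and would need to be replaced with project-specific values.

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import static org.junit.Assert.assertTrue;

public class MdmSmokeTest {

    private WebDriver driver;

    @Before
    public void setUp() {
        // Browser and base URL are placeholders for the QA environment
        driver = new FirefoxDriver();
        driver.get("http://qa.mdm.example.com/login");
    }

    @Test
    public void loginPageLoads() {
        // High priority scenario: the build is reachable and the login form renders
        assertTrue(driver.getTitle().contains("MDM"));
        assertTrue(driver.findElement(By.id("username")).isDisplayed());
    }

    @Test
    public void userCanReachDashboard() {
        // High priority scenario: a known test user can log in and see consolidated data
        driver.findElement(By.id("username")).sendKeys("qa_user");
        driver.findElement(By.id("password")).sendKeys("qa_password");
        driver.findElement(By.id("loginButton")).click();
        assertTrue(driver.findElement(By.id("dashboard")).isDisplayed());
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```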

4.2 Functional Testing

The testing team will create functional test cases and ensure that these test cases are mapped to all the test scenarios, which in turn are mapped to the requirements document. (Recommendation: to save time, use the high level scenarios created by the QA Capability team as the base and then create the residual test scenarios.) Functional testing focuses on verification of all the functionality described in the use cases.

Owner: Testing team
Activity: Execution of functional test cases
Environment: QA Environment
Entry Criteria: All smoke testing scripts are passed
Exit Criteria: No open P1 and P2 defects

Functional Testing approach and activities:


- The testing team will create functional test scripts, which will be mapped to test scenarios and subsequently to requirements
- Test scenarios should be validated with all the stakeholders
- Test scripts should be peer reviewed
- Use stubs for testing 3rd party integration points (see the sketch below)
- The functional test scripts will be executed and results shared with stakeholders
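As an illustration of the stub recommendation above, the sketch below stands up a local HTTP stub that returns a canned response for a 3rd party integration point. The endpoint path, port and payload are hypothetical; the real interface contract would come from the integration specification.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

public class ThirdPartyStub {

    public static void main(String[] args) throws IOException {
        // Local stub standing in for a 3rd party address-validation service (hypothetical endpoint)
        HttpServer server = HttpServer.create(new InetSocketAddress(8089), 0);
        server.createContext("/addressService/validate", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                // Canned response so functional tests do not depend on the real vendor being available
                byte[] body = "{\"status\":\"VALID\"}".getBytes("UTF-8");
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                OutputStream out = exchange.getResponseBody();
                out.write(body);
                out.close();
            }
        });
        server.start();
        System.out.println("Stub listening on http://localhost:8089/addressService/validate");
    }
}
```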

4.3 Cross Browser Testing

Cross browser testing should be done to confirm that the functionality and the look and feel of pages are consistent across browsers. Browsers that can be used for testing are: Internet Explorer 9.0, Internet Explorer 8.0, Internet Explorer 7.0, Firefox 3.6, Chrome 7.0, Safari 5.0 and Opera 10, with Flash 10.0.

Owner: Testing team
Activity: Execute test scripts based on the closed scope (recommended: execute high and medium priority test scripts)
Environment: Testing Environment
Entry Criteria: There are no open P1 and P2 defects on the main browser
Exit Criteria: There are no open P1 and P2 defects on any of the browsers; there are no P1 defects from a functionality and business point of view
Scope: Page layout and GUI should be consistent across all browsers and adhere to the wireframe documents, with no high priority defects from a business point of view

Cross browser testing approach: Open the main browser and the browser to be tested at the same time and execute the scripts in parallel. This helps in executing the scripts faster.
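A possible way to run the same script across the in-scope browsers is sketched below with Selenium WebDriver and a parameterized JUnit runner. The URL and element id are illustrative, and each browser driver must be installed on the test machine.

```java
import java.util.Arrays;
import java.util.Collection;

import org.junit.After;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

import static org.junit.Assert.assertTrue;

@RunWith(Parameterized.class)
public class CrossBrowserSmokeTest {

    private final WebDriver driver;

    @Parameters
    public static Collection<Object[]> browsers() {
        // One run of every test per browser in scope for the project
        return Arrays.asList(new Object[][] { { "firefox" }, { "ie" }, { "chrome" } });
    }

    public CrossBrowserSmokeTest(String browser) {
        if ("ie".equals(browser)) {
            driver = new InternetExplorerDriver();
        } else if ("chrome".equals(browser)) {
            driver = new ChromeDriver();
        } else {
            driver = new FirefoxDriver();
        }
    }

    @Test
    public void homePageRendersConsistently() {
        // Hypothetical QA URL; the same high priority check runs once per browser
        driver.get("http://qa.mdm.example.com/");
        assertTrue(driver.getTitle().contains("MDM"));
        assertTrue(driver.findElement(By.id("header")).isDisplayed());
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```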

4.4 Usability Testing

Usability testing is primarily done to ensure that the end user has a good experience while traversing the website. Important aspects to be tested in an MDM application are:
- Whether the user can effectively accomplish the desired tasks
- The effort needed to efficiently accomplish the desired tasks
- Whether users return to the application

In this type of testing, the application is tested for user interface items like colour, text, font, alignment of buttons, alignment of fields, status of buttons, size of the fields or buttons etc. This is a visual comparison of the application pages against the Wireframes/graphic designs. This testing will be done for all the requirements that involve a GUI component change or new development.


Owner: Testing Team
Activity: Testing of the user interface as per the wireframes and designs
Environment: Testing Environment
Entry Criteria: Smoke testing scripts are 100% passed
Exit Criteria: Page layout and GUI are consistent across all browsers and adhere to the wireframe documents, with no high priority (P1) defects from a business point of view

4.5 Accessibility Testing

The testing team will carry out accessibility testing to ensure the application meets a minimum level of accessibility through markup, scripting or other technologies that interact with, or enable access through, user agents, including assistive technologies. Accessibility testing will follow the guidelines specified in W3C Web Content Accessibility Guidelines 2.0, Conformance Level A.

Owner: Testing team
Activity: Test the application for user accessibility
Environment: Testing Environment
Entry Criteria: The application is successfully tested for functional requirements
Exit Criteria: The application meets the minimum accessibility criteria specified in W3C Web Content Accessibility Guidelines 2.0, Conformance Level A, with no open P1 defects

This will be performed by ensuring that:
- Text alternatives are provided for any non-text content so that it can be changed into other forms people need, such as large print, Braille, speech, symbols or simpler language
- Content can be presented in different ways (for example, a simpler layout) without losing information or structure
- All functionality of the application is available from the keyboard
- Users are given enough time to read and use content wherever a time-based action is involved
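As one example of automating the first check (text alternatives), the sketch below walks a page with Selenium WebDriver and fails if any image lacks alt text. The QA URL is a placeholder, and full WCAG 2.0 Level A coverage still requires manual review.

```java
import java.util.List;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

import static org.junit.Assert.assertFalse;

public class AccessibilityAltTextTest {

    @Test
    public void allImagesHaveTextAlternatives() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://qa.mdm.example.com/");   // hypothetical QA URL
            List<WebElement> images = driver.findElements(By.tagName("img"));
            for (WebElement image : images) {
                String alt = image.getAttribute("alt");
                // WCAG 2.0 Level A: non-text content needs a text alternative
                assertFalse("Image without alt text: " + image.getAttribute("src"),
                        alt == null || alt.trim().isEmpty());
            }
        } finally {
            driver.quit();
        }
    }
}
```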

4.6 System Integration Testing (SIT)

The primary purpose of SIT is to execute end-to-end test scenarios, along with all the test cases that have been added over the course of the different iterations. The following strategy will be followed:
- The testing team will use the test cases created for the testing iterations and will also create end-to-end scenarios meant for testing the integrated code
- Test data will be provided by the team as agreed upon

Owner: Testing Team
Activity: Interface testing for all interfaces where there is an interaction with Sapient-responsible systems, verifying whether the response to a request comes back properly
Environment: SIT environment
Entry Criteria: Smoke testing scripts are 100% passed; modules have undergone unit testing and passed functional testing; interfaces and interactions between the various systems are operational
Exit Criteria: There are no P1 defects; defects identified during SIT have either been fixed or accepted as known issues by the client

4.7 Performance Testing

To be covered by the respective teams in consultation with the performance testing team.

4.8 Database Testing

The testing team will create database test cases and ensure that these test cases are mapped to all the test scenarios that pick values from the database. (Recommendation: to save time, use the high level scenarios created by the QA Capability team as the base and then create the database test scenarios based on project need.) In database testing, the test engineer should test data integrity, data access, query retrieval, modification, update and deletion.

Owner: Testing team
Activity: Execution of database test cases
Environment: QA Environment
Entry Criteria: All smoke testing scripts are passed
Exit Criteria: No open P1 and P2 defects

Database testing approach and activities:
- The testing team will create database test scenarios and scripts
- Test scenarios should be validated with all the stakeholders
- Test scripts should be peer reviewed
- The database test scripts will be executed and results shared with stakeholders
- Activities include: (a) data validity testing, (b) data integrity testing, (c) testing of procedures, triggers and functions
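A possible data integrity check is sketched below using plain JDBC and JUnit. The connection URL, credentials and table/column names are illustrative only; the actual queries would be derived from the MDM data model.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class MdmDatabaseIntegrityTest {

    // Connection details are placeholders; the JDBC driver must be on the classpath
    private static final String JDBC_URL = "jdbc:oracle:thin:@qa-db-host:1521:MDMQA";

    @Test
    public void noOrphanAddressRecords() throws Exception {
        Connection connection = DriverManager.getConnection(JDBC_URL, "qa_user", "qa_password");
        try {
            Statement statement = connection.createStatement();
            // Data integrity check: every address must reference an existing customer master record
            ResultSet rs = statement.executeQuery(
                    "SELECT COUNT(*) FROM MDM_ADDRESS a " +
                    "WHERE NOT EXISTS (SELECT 1 FROM MDM_CUSTOMER c WHERE c.CUSTOMER_ID = a.CUSTOMER_ID)");
            rs.next();
            assertEquals("Orphan address rows found", 0, rs.getInt(1));
        } finally {
            connection.close();
        }
    }
}
```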

4.9 Security Testing

Security testing is a process to determine that an information system protects data and maintains functionality as intended. The basic security concepts that need to be covered by security testing are confidentiality, integrity, authentication, availability, authorization and non-repudiation, exercised using techniques such as parameter tampering, cookie poisoning, stealth commanding and forceful browsing.


- Authentication: Testing the authentication schema means understanding how the authentication process works and using that information to circumvent the authentication mechanism. Fundamentally, authentication allows a receiver to have confidence that the information it receives originated from a specific known source.
- Authorization: Determining that a requester is allowed to receive a service or perform an operation.
- Confidentiality: A security measure that protects against disclosure of data or information to parties other than the intended recipient.
- Integrity: Ensuring that the intended receiver receives information or data that has not been altered in transmission.
- Non-repudiation: Interchange of authentication information with some form of provable time stamp, e.g. with a session id.

The strategy should include the following:
- The testing team should validate the details/scope for security testing
- The testing team should check for session maintenance
- The testing team should validate security in the various modules using the testing techniques mentioned above
- The testing team should validate the entry and exit criteria
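By way of illustration, the sketch below exercises one authorization/parameter-tampering case: a request for a record id that the logged-in test user should not be able to view must be refused. The URL, record id and session cookie value are placeholders, not part of the actual application.

```java
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class ParameterTamperingTest {

    @Test
    public void tamperedRecordIdIsRejected() throws Exception {
        // Hypothetical URL: a record id the logged-in test user is NOT authorized to view
        URL url = new URL("http://qa.mdm.example.com/customer/view?id=99999");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        // Session cookie of a low-privilege test user (placeholder value)
        connection.setRequestProperty("Cookie", "JSESSIONID=low-privilege-session");
        int status = connection.getResponseCode();
        // Authorization check: the server must refuse the request, not leak another user's data
        assertTrue("Expected 401/403 for tampered id, got " + status,
                status == HttpURLConnection.HTTP_UNAUTHORIZED
                        || status == HttpURLConnection.HTTP_FORBIDDEN);
    }
}
```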


5 MDM testing approach


Consider MDM as made up of a front end (the human-computer interface), a back end (the centralised database) and some middleware (the interfaces that integrate the database with the standalone applications). In order to fully test the system, it is important to test the different aspects both in isolation and in combination.

5.1 Important aspects of testing an MDM application

Browser compatibility: Though relatively simple to do, it pays to spend enough time testing in this area. Decide on the lowest level of compatibility and test that the system does indeed work without problems on the earliest supported as well as the latest browser versions. Even with the same release version, browsers behave differently on different platforms and when used with different language options. Testing should cover at least the main platforms (Unix, Windows, Mac, and Linux) and the expected language options.

Session management: Most applications and web servers configure sessions so that they expire after a set time. Attempting to access a session object that has expired causes an error, and this must be handled within the code. Testing of session expiration is often overlooked, largely because under normal operational circumstances session expiration is unlikely to occur.
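A sketch of an automated session-expiration check is shown below; it simulates expiry by deleting the session cookie and then verifies that a protected page redirects to the login page rather than failing with a raw error. URLs and element ids are placeholders.

```java
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import static org.junit.Assert.assertTrue;

public class SessionExpiryTest {

    @Test
    public void expiredSessionRedirectsToLogin() {
        WebDriver driver = new FirefoxDriver();
        try {
            // Log in with a test account (ids, credentials and URL are placeholders)
            driver.get("http://qa.mdm.example.com/login");
            driver.findElement(By.id("username")).sendKeys("qa_user");
            driver.findElement(By.id("password")).sendKeys("qa_password");
            driver.findElement(By.id("loginButton")).click();

            // Simulate an expired session by dropping the session cookie
            driver.manage().deleteAllCookies();

            // Accessing a protected page must be handled gracefully, not throw a raw error
            driver.get("http://qa.mdm.example.com/customer/search");
            assertTrue(driver.getCurrentUrl().contains("login"));
        } finally {
            driver.quit();
        }
    }
}
```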

Usability: Site navigation is crucial for attracting customers and retaining them. Sophisticated web sites, such as those for travel booking, need to pay particular attention to navigation issues. Large entity catalogs are central to many trading systems, and clients should be able to quickly browse and search through them. Developers can define tests to measure the effectiveness of entity navigation mechanisms; for example, you could test that a search on particular keywords brings up the correct entities.

Availability: Before going live, predicted business usage patterns should indicate maximum stress levels. You should test system availability against the maximum stress levels plus a safety margin for a defined period of time.

Internationalization: Does the site offer the option to view non-English pages? If the choice of language is based on browser preferences, does it work on all the desired browsers? Many older browsers do not support language customization, and all of the above aspects need to be tested. Test that words are correctly displayed and that sentences are grammatically correct; use a native speaker to verify that this is the case.

System integration: The data interface defines the format of data exchanged by front- and back-end systems. Tools such as XML (Extensible Markup Language) alleviate data interface problems by providing document type definitions. The processing between front- and back-end systems may be time dependent; for example, a back-end system could need to process a data transmission from the front-end system immediately or within a defined period. Tests should ascertain whether a system actually observes timeliness constraints and whether it activates data transmissions at the correct time.
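As an illustration of verifying the data interface, the sketch below validates a sample outbound feed against an agreed XSD using the standard javax.xml.validation API. The file names are placeholders for the actual interface definition and extract.

```java
import java.io.File;

import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

import org.junit.Test;

public class DataInterfaceSchemaTest {

    @Test
    public void outboundCustomerFeedMatchesAgreedSchema() throws Exception {
        // File names are illustrative; the XSD represents the agreed data interface definition
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("customer-feed.xsd"));
        Validator validator = schema.newValidator();
        // validate() throws an exception if the extracted feed violates the interface contract
        validator.validate(new StreamSource(new File("customer-feed-sample.xml")));
    }
}
```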


One system must often update information in another system. Verify that batch programs and remote procedures perform the necessary update operations without side effects.

5.2 Size and complexity of the project

The size and complexity of an MDM implementation would drive the testing strategy for the project.

Low complexity MDM implementation
A low complexity MDM implementation would contain only the basic features of MDM. Integration with 3rd party vendors would be very limited and there would be no integration with multiple vendors. Any changes to the application would be straightforward. Testing of such systems would include:
- Testing of the interface
- Testing of integration with standalone applications

Test data management is simpler in these implementations. Since the teams are smaller and the content on the sites is not heavy, the tester will have to manage a smaller amount of test data.

Medium complexity MDM implementation
A medium complexity MDM implementation would contain all the basic features as well as some advanced features. There would be multiple touch points with other vendors. The information in these systems is not hardcoded, and backend systems are used for publishing data to the website. Testing of such systems would include everything mentioned under the low complexity implementation plus the items below:
- Testing of the data flow from the backend
- Testing of information flowing between 3rd party vendors

Test data management would require pre-planning and co-ordination with other vendors. There should be a communication channel through which the data coming into the system and going out of it is coordinated with 3rd party vendors.

Large complexity MDM implementation
A large complexity MDM implementation would contain all the advanced features and would involve multiple integrations with different 3rd party systems. The implementation would be complex and would require a large testing effort. The data from one system would flow to multiple systems. Testing of such systems would include everything mentioned in the low and medium complexity implementations plus the items below:
- Security and integrity of data
- Security of personal information
- Internationalization
- Performance

Test data management would require proper planning and co-ordination with other vendors. The ownership of test data needs to be defined, and a central repository needs to be in place from which everyone can pick up data and use it.

Approach for testing a large complexity MDM implementation:

Test global and test distributed: An MDM system is global in spirit and structure. The different underlying systems may be on different continents, but they appear to integrate seamlessly over large, distributed and non-homogeneous


networks and other communication channels. The testing team has to validate that changes in one system do not impact other systems.

Consider user profile: The user profile varies in terms of role (Admin or Normal user). While testing the application, ensure that all the access rights of each user profile are covered.


6 Defect Management Process


A defect is a flaw in any aspect of a system, including the requirements, the design or the code, that contributes, or may potentially contribute, to the occurrence of one or more failures. The defect management process is essentially a workflow that defines how defects are captured, fixed, retested and closed. Defects that are discovered must be logged, tracked and managed to resolution in order to ensure they are not propagated to the production environment.

6.1 Defect Status Meetings

In order to ensure defects are on a path to resolution, daily sessions (TAR sessions) will be conducted to review defects with the Client during the SIT and UAT phases. These sessions are critical to the success of testing, so key lead resources must attend. During each session, the outstanding issues (New, Open and Reopen) are reviewed by the priorities identified below. Some issues may require breakout sessions with other teams in order to resolve; these defects will be discussed briefly in the meeting, with follow-up meetings held immediately afterwards. The duration of the meeting will vary based upon the number of open defects and the speed at which defects are being worked.

6.2 Defect Priority Guidelines

Priority levels will be assigned to defects as below:
Priority: P1
Definition: Showstopper / Critical - testing cannot continue until the issue is resolved; prevents execution of major (core) functionality, has no workaround, and has major implications for the business.
Examples: After clicking the Search button the application hangs; clicking a link leads to a system exception error; submission of data leads to a system exception or error page; from a usability and accessibility perspective, flickering of the screen cannot be paused or stopped.

Priority: P2
Definition: High - major functionality produces wrong results; a workaround exists; the resulting system has reduced usability for the end user.
Examples: Search by specific text doesn't work; clicking on search from the Home page opens the wrong search page; the system responds incorrectly to invalid data or error handling is not implemented; navigation within fields results in an error message pop-up saying the value entered is incorrect; from a usability and accessibility perspective, moving content cannot be frozen.

Priority: P3
Definition: Medium / Significant - minor functionality produces wrong results or is missed.
Examples: Clicking on a link takes the user to the wrong location on the same page; a comments field is not getting updated in the database; field validation is missing; scroll bar not working; incorrect labels, instructional text, headings or sub-headings; from a usability and accessibility perspective, the tab order through links, forms and objects is not logical.

Priority: P4
Definition: Low / Minor - a defect that does not affect the functionality of the system; only minor cosmetic issues.
Examples: Spelling or grammatical mistakes on web pages; font type or colour used is not according to the specified format.


6.3 Defect Reporting Workflow

[Flowchart: defect reporting workflow - from logging a new defect with steps to reproduce and a screenshot, through TAR review, assignment to development, fix and retest, to closure or reopen. The process is described step by step below.]


The diagram above captures the defect tracking process:
- The tester enters all defects into the defect tracking system and assigns them to the QA TL in New status. A defect will contain a title/summary, a description with steps to reproduce and a screenshot or other relevant information, status, priority, module name and the release in which it was discovered.
- The defect is then discussed in the TAR session. If the defect is acknowledged to be invalid as per the current requirement, it is marked as Reject-Invalid Defect. If the defect is acknowledged to be valid, it is assigned to the Development TL, who assigns it to the concerned developer.
- When a developer acknowledges the defect, he changes the status to Open and starts working on it.
- If the developer finds a defect to be a duplicate of an existing open defect, they assign it back to the testers mentioning the exact duplicate defect id. The testers re-verify whether it is actually a duplicate; if yes, the defect is marked as Reject Duplicate, otherwise it is re-assigned to the developers with a proper comment in Open status.
- If the developer cannot reproduce the defect, he changes the status to Reject-Cant be Reproduced and assigns it back to the tester. The tester tries to reproduce it again; if it is reproducible, the tester assigns it back to the developer with an appropriate comment in Open state, otherwise it is Closed.
- If the defect is reproducible, the developer fixes it, changes its status to Fixed and keeps it with him until the next build is provided to QA.
- Before every release, developers set the status to Ready to Retest, mention the build in which the defect has been fixed and assign it back to the QA TL, who assigns it to the tester.
- Once the defect fixes are released, the tester re-tests to verify the defect. If the defect has been fixed, the status is changed to Closed; if not, it is assigned back to the developer with status Reopen.
- If the review team or developer requires further information about the issue, they change the status to More Info Required and assign it back to the testers. The testers provide the required information and assign the defect back to the developer in Open status.
- Before every release, if there are defects already known to the developers, they are logged with status Known Issue. Similarly, when a defect is raised and the review team acknowledges it as a known limitation of the system but decides against fixing it and releases with the defect, the status is set to Known Issue.
- If testing has any suggestion about the functionality or UI which may result in a requirement change, a Suggestion is logged. Suggestion is a category of defect, and these artifacts are not counted in the QA defect report.

The QA team will work only on those defects that are assigned to QA team members and are in Retest, Cannot Reproduce, More Info Required or Duplicate status.

6.4 Different stages of a defect

1. New - Indicates a new defect is logged in the defect tracking tool along with the severity of the defect. All new defects are assigned to the QA TL.
2. Open - A defect is given Open status when it is assigned to a developer for a fix, with a priority assigned against it.
3. Fixed - The developer, after fixing the defect, changes its status to Fixed and does not assign it to anyone.
4. Ready to Retest - After a defect is fixed by the developer, it is released to the QA environment for retest once the new build containing the fix is deployed. The defect status is then changed to Ready to Retest and assigned to the QA TL, who in turn assigns it to testers for retest.


5. Reopen - On retesting the defect, if the tester finds that the defect is not fixed, its status is changed to Reopen and it is assigned back to the developer along with comments.
6. Reject-Duplicate - A defect found to be a duplicate of a defect already logged in the defect tracking tool is rejected by the developer with status Reject-Duplicate, along with the duplicate defect ID, and assigned to the QA TL.
7. Reject-Cant be Reproduced - If a defect assigned to a developer for a fix cannot be reproduced by the developer, it is rejected with status Reject-Cant be Reproduced along with comments and assigned to the QA TL. The QA TL discusses the defect with the tester and developer; if the defect can be reproduced, it is assigned back to the developer with added comments and a screenshot with status Reopen, otherwise it remains in Reject-Cant be Reproduced status.
8. Reject-Invalid Defect - A defect can be labelled invalid when it is not a requirement, or when it was logged due to misinterpretation of a requirement.
9. Closed - Defects that are retested and working fine are closed by the tester.
10. More Info Required - If the review team or developer requires further information about the issue, the status is changed to More Info Required.
11. Known Issue - A known limitation of the system, or a defect already known to the developers before a release.
12. Suggestion - If testing has any suggestion about the functionality or UI which may result in a requirement change, a Suggestion is logged.


7 Test Environment
The following test environments would be made available for testing:
- Sapient Test QA Environment
- Client Test Environment
- Staging Environment
- Production Environment

In normal scenarios, testing will be done only in the QA environment. If the QA environment is inaccessible due to technical issues, testing will be done on the Dev environment. We would be using the following client environment setup for our testing:

Software/OS     Version
Windows XP      SP-4
Linux
MAC


8 Test Tools
The following table lists the test tools required for this project.

Requirement             Tool       Vendor        Version
Test Management Tool    Selenium   Open Source   2.19
Defect Tracking         Bugzilla   Open Source   4.2
Unit Testing            JUnit      Open Source   4.10


9 Suspension/Resumption Criteria
Testing will be suspended if:
- The test environment and backup environment are unavailable
- Smoke testing for the build fails
- An incorrect version of code is deployed
- Functionality is unstable, i.e. too many non-reproducible defects are encountered

Testing will be resumed when:
- A stable test environment is available with a stable build and the correct version of code
- Smoke testing for the build has passed
- Showstoppers are fixed


10 Test Deliverables
Described in the table below are testing deliverables, responsibilities and details for the project.

S. No. 1 - Test Strategy Document: A Test Strategy document will be created once and will outline the scope and types of testing, the testing approach and key activities, the entry and exit criteria, and the defect reporting workflow.
S. No. 2 - Test Plan: This document contains the planned testing activities.


11 Assumptions
- MDM would be accessed through LAN or VPN
- Proper permissions have been taken to access the real-time data from standalone applications
- All the legal formalities have been carried out


12 Communication approach
12.1 Status Reporting

The following status reports will be generated only during the UAT phase to track testing and provide visibility into the status of testing, outstanding issues and risks:
- Test execution plan and status (module-wise / overall)
- Active, resolved and closed defects
- Defects by severity and priority
- % of test cases passed vs. failed vs. remaining to run
- After UAT starts, a defect triaging report for issues logged by the UAT team

12.2 Defect Review Session

A defect review session will be held during test script execution. Its task is to review testing activities and to prioritize and assign any defects raised during UAT. These TAR sessions will be held every day during the UAT phase, or as agreed with the Client. Expected participants are Sapient and Client stakeholders.


13 Roles and Responsibilities


Role: Test Lead (Sapient)
Responsibilities:
- Responsible for drafting and executing the testing strategy as a whole
- Works with the development team to ensure bugs are fixed in a timely manner
- Reviews and ensures test scripts/cases are as per agreed standards
- Creates/updates test scripts/cases
- Assists the tester(s) in understanding the application and writing effective test cases
- Coordinates test activities
- Participates in TAR sessions/defect prioritization meetings during SIT/UAT
- Prepares and publishes test status reports at the end of each testing cycle/iteration
- Risk and issue escalation and tracking
- Assists the CO test lead in SIT planning
- Provides test data created by the Hybris system

Role: Test Lead (Client)
Responsibilities:
- Plans and co-ordinates test activities for SIT/UAT testing in the Client test environment
- Ensures testers have access to the System Integration Testing test environment
- Provides test data required from Client internal systems and external systems
- Participates in TAR sessions/defect prioritization meetings during SIT/UAT
- Responsible for security (Cybercom for external security testing), migration (migration imports performed by Sapient would be verified by Sapient, whereas overall migration testing would be done by the Client) and user acceptance testing
- Carries out System Integration testing to confirm that the Client's internal and external systems work correctly after integration

Role: Tester(s) (Sapient)
Responsibilities:
- Understands the requirements/stories and the application
- Creates and updates test cases
- Executes test scripts/cases as per the plan
- Captures defects in the defect tracking tool
- Retests fixed defects and closes them


14 Client Responsibilities
Following are the responsibilities:
- Access for the testing team to their environments, i.e. Client Test Environment, Staging Environment and Production Environment
- Test data required from standalone applications to be provided by the Client
- UAT test plan for the testing to be provided by the Client team
- Performance test data to be provided
- User acceptance testing to be carried out by the Client QA team
- Resources to be used for UAT
- Sign-off process for high level scenarios from the Client
- Sign-off process for test cases from the Client
- Sign-off process for each iteration


15 References
- Specifications of all standalone applications
- SRS


16 Approvals
Approval of Test Strategy document: By signing this, I confirm my approval of the Test Strategy.

Name      Role           Signature    Date
Vikas     Test Manager
Shachi    Test Lead
XYZ       Client

