
TEST PLAN: A test plan is a document describing the scope, approach, resources, and schedule of intended testing activities.

It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

TEST PLAN COMPONENTS: Broadly, a test plan may contain the following sections: test plan identifier, introduction, test items, features to be tested, features not to be tested, approach, item pass/fail criteria, suspension criteria and resumption requirements, test deliverables, testing tasks, environmental needs, responsibilities, staffing and training needs, schedule, risks and contingencies, approvals.

TEST PLAN COMPONENTS:
Test Plan Identifier: a unique number generated to identify the test plan.
Introduction: states the purpose and level of the plan, includes references to other documents/items, and identifies the scope of the plan in relation to the s/w project plan.
Test Items: a list of items/functions that need to be tested; a technical description of the features, not in user terms.
Features To Be Tested: a listing of features to be tested from the user's perspective of the system functionality; sets a level of risk for each feature.
Features Not To Be Tested: a listing of what is not to be tested, from the user's perspective and the configuration management view; could be phased-out, low-risk or non-functional items.
Approach: the overall strategy of the test plan; identification of rules and processes; tools required, metrics collection, h/w, s/w, regression levels.
Item Pass/Fail Criteria: the completion criteria for the plan; could be an individual test-case-level criterion for a unit-level plan, or general functional requirements for higher-level plans. E.g. specific coverage of test cases, a specified percentage and level of acceptable defects. (A small illustrative check appears at the end of the test plan notes below.)

Suspension Criteria and Resumption Requirements: specify conditions to pause a test series; identify the acceptable level of defects for testing to proceed.
Test Deliverables: items to be delivered as part of the test plan.
Testing Tasks: a multiphase process identifies the parts of the application that the overall plan does not address; a multiparty process identifies the features/functions that the sectional test plan does not cover; third party identifies test tasks belonging to both the internal groups and the external groups.
Environmental Needs: specific h/w and s/w requirements are highlighted.
Responsibilities: identifies the person responsible for each and every task/deliverable that is part of the plan.
Staffing and Training Needs: identifies the training and resource needs for the application/system and test tools, if any.
Schedule: the timelines and milestones are created on the basis of the estimates provided.
Risks and Contingencies: specify the identifiable and potential risks in the project; highlight the mitigation steps wherever possible.
Approvals: determine the approval process and the various stakeholders who sign the plan off as complete.

TEST PLAN BENEFITS: Forms a contract b/w the testers and the project team. Avoids random testing and missed features. Optimizes resources.

TEST PLAN KEY TASK: One of the key tasks is developing the test plan in detail: Test Cycles, Test Scenarios, Test Cases, Testing Types (integration or system testing; non-functional quality attributes such as performance, reliability, usability etc.), Testing Methods (black box testing, GUI testing, system flow, db, etc.) and the strategy of testing. The method of analysing the test results is also to be documented. Indicate the test coverage, if any, and identify the tools for defect tracking. Automated & Manual tests: specify the tests which are automated and the tests that will be carried out manually.
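As a purely illustrative reading of the item pass/fail criteria above (and of the later "when to stop testing" discussion), the minimal Java sketch below checks hypothetical exit thresholds. The 95% coverage, 90% pass rate and zero-critical-defects figures, as well as the class and method names, are assumptions for the example, not values prescribed by any real plan.

    // Minimal sketch of an item pass/fail (exit criteria) check.
    // The threshold values are illustrative assumptions only.
    public class ExitCriteriaCheck {

        public static boolean criteriaMet(int executed, int planned, int passed, int openCriticalDefects) {
            double coverage = (double) executed / planned;   // share of planned test cases executed
            double passRate = (double) passed / executed;    // share of executed cases that passed
            // Example criteria: 95% of planned cases executed, 90% pass rate, no open critical defects.
            return coverage >= 0.95 && passRate >= 0.90 && openCriticalDefects == 0;
        }

        public static void main(String[] args) {
            System.out.println(criteriaMet(98, 100, 93, 0));  // true: coverage 0.98, pass rate ~0.95
            System.out.println(criteriaMet(98, 100, 93, 2));  // false: critical defects still open
        }
    }

A real plan would substitute the coverage, pass-rate and defect thresholds agreed with the stakeholders.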

TEST CASE: A test case is a set of conditions, variables and inputs that are developed for a particular goal, objective or feature. It might take more than one test case to determine the true functionality of the application being tested. Every requirement or objective to be achieved needs at least one test case. There are broadly two types of test cases: formal and informal.

FORMAL TEST CASE: Test cases written based on the application requirements; at least two test cases per requirement, one positive and one negative; characterized by a known i/p and by an expected o/p, which is worked out before the test is executed. Typically comprises three parts: information, activity and results.

FORMAL TEST CASE COMPONENTS:
Information: consists of general information about the test case; incorporates an identifier, the test case creator, the test case version, the name of the test case, and the purpose or a brief description of the test case.
Activity: consists of the actual test case activities; contains information about the test case environment, activities to be done at test case initialization, activities to be done after the test case is performed, the step-by-step actions to be done while testing, and the input data that is to be supplied for testing.
Results: the outcomes of a performed test case; consists of information about the expected results and the actual results.

INFORMAL TEST CASE: Test cases written for applications without formal requirements; based on the accepted normal operation of programs of a similar class. An example is scenario testing, where hypothetical stories are used to think through a complex problem or system.

TEST CASE LEVELS: There are levels into which each test case falls, in order to avoid duplication of effort.
Level 1: at this level you write basic test cases from the available specification and user documentation.
Level 2: this is the practical stage, in which writing test cases depends on the actual functional and system flow of the application.
Level 3: this is the stage in which you group some test cases and write a test procedure. A test procedure is nothing but a group of small test cases, a maximum of 10.
Level 4: automation of the project. This minimises human interaction with the system, so QA can focus on currently updated functionality to test rather than staying busy with regression testing.

TEST CASE DESIGN: Test cases should be designed and written by someone who understands the function or technology being tested. A test case should include the following information: purpose of the test; software and hardware requirements (if any); specific setup or configuration requirements; a description of how to perform the tests; expected results or success criteria for the test.

TEST CASE STRUCTURE: test case ID, test case name, test case description (what is to be verified?), test data (variables and their values), steps to be executed, actual result, pass/fail, comments. (A minimal JUnit sketch of this structure follows the TEST BED notes below.) TEST CASE TEMPLATE (TABLE)

TEST REPORT / TEST EXECUTION: The most important phase of the testing life cycle is the test execution phase. This is when the various tests are performed on the application/product to verify whether it is working as expected.

TEST EXECUTION: The planned test cases should be scheduled day-wise and each test case should be assigned to one tester. The tests should be executed in the relevant environment and the results logged in the Test Log document. If any test does not pass, a defect log should be prepared and passed on to the developers. Testing of the failed test cases should be done again after the defect is fixed.

TEST EXECUTION ASPECTS: Test execution has the following aspects: the concept and process of test bed set-up; the process of recording test results and creating logs; the decision-making process on when to stop testing; the significance and procedure of proactive escalation; defect management.

TEST BED: An execution environment configured for testing. It may consist of specific h/w, OS, network topology, configuration of the product under test, other application or system s/w, and so on. Test data can be created in two ways: data mining from a production data cut, or creation of new data in the environment.
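To make the test case structure above concrete, here is a minimal sketch in the JUnit 3 style used later in these notes. The Calculator class, the add() method and the TC_CALC_001 identifier are invented for illustration; a real test case would exercise the actual unit under test.

    import junit.framework.TestCase;

    // Illustrative test case following the structure described above.
    public class TestCalculatorAdd extends TestCase {

        // Test case ID:   TC_CALC_001 (hypothetical)
        // Test case name: Add two positive integers
        // Description:    Verify that add() returns the arithmetic sum of its inputs
        public void testAddTwoPositiveIntegers() {
            // Test data: variables and their values
            int a = 2;
            int b = 3;
            int expected = 5;

            // Steps to be executed
            Calculator calc = new Calculator();
            int actual = calc.add(a, b);

            // Expected result vs actual result; pass/fail is decided by the assertion
            assertEquals("2 + 3 should equal 5", expected, actual);
        }

        // Minimal stand-in for the unit under test, included only so the sketch compiles.
        private static class Calculator {
            int add(int x, int y) { return x + y; }
        }
    }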

EXECUTION TEST: Test execution involves executing the test cases to verify whether the functionality is working as expected. Types of execution: planned and ad hoc.

WHEN TO STOP TESTING: In reality it is impossible to discover all the bugs in an application, so the focus is on discovering the bugs which hamper the smooth functioning of the application. Several factors are involved in deciding that testing is adequate/complete: the test manager has confidence that the system will behave as expected in production; the quality goals defined have been achieved; the percentage of coverage achieved by the executed tests should be taken into consideration; the number of open defects and their severity levels; the risk associated with moving the application into production, as well as the risk of not moving forward, must be taken into consideration.

TEST RESULTS: A test problem is a condition that exists within the s/w system that needs to be addressed. Documenting a test problem carefully and completely is the first step in correcting the problem. Test results can be recorded in a simple Word/Excel document. Test results can also be recorded in tools like Test Director (TD). Stakeholders who have an interest in the results: end users, developers, the s/w project manager, IT quality assurance.

PROACTIVE ESCALATION - SIGNIFICANCE AND PROCEDURE: Escalation does not mean finding fault with the developer. Escalation of foreseen risks with the s/w or the environment can save a lot of time when testing an application. Procedure: status reports; mails in case of immediate escalation.

DEFECT: A defect is a flaw in a s/w system which causes the system to perform in an unintended or unanticipated manner.

DEFECT LOGGING: If a test case fails during execution, it needs to be marked as failed in the defect reporting tool and a defect has to be reported/logged for it.

DEFECT CLASSIFICATION: When we add a new defect we assign the severity of the defect, which reflects its impact on the product under test. The severity classification is as given below: critical, high,

medium, low.

DEFECT STATUS: Stages of a bug life cycle:
New: when QA files a new bug.
Deferred: if the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, then the project manager can set the bug status as deferred.
Assigned: the 'Assigned to' field is set by the project lead or manager, who assigns the bug to a developer.
Resolved/Fixed: when the developer makes the necessary code changes and verifies the changes, he/she can set the bug status to fixed and the bug is passed to the testing team.
Could Not Reproduce: if the developer is not able to reproduce the bug by following the steps given in the bug report by QA, the developer can mark the bug as 'CNR'. QA then needs to check whether the bug still reproduces and can assign it back to the developer with detailed reproduction steps.
Need More Information: if the developer is not clear about the reproduction steps provided by QA, he/she can mark the bug as 'need more information'. In this case QA needs to add detailed reproduction steps and assign the bug back to development for a fix.
Reopen: if QA is not satisfied with the fix and the bug is still reproducible even after the fix, QA can mark it as reopen so that the developer can take appropriate action.
Closed: if the bug is verified by the QA team, the fix is OK and the problem is solved, then QA can mark the bug as closed.
Rejected/Invalid: sometimes the developer or team lead can mark the bug as rejected or invalid if the system is working according to specification and the bug is just due to some misinterpretation.

DEFECT/BUG REPORT: A defect report needs to do more than just describe the bug. An effective bug report gives a higher chance of defect resolution. The basic items in a defect report are as follows: version, product, data, steps to reproduce the defect, description, supporting documentation.

DEFECT/BUG REPORTING QUALITIES: Clearly specified bug number: assign a unique number to each reported bug. Reproducible: clearly mention the steps to reproduce the bug. Be specific: summarize the problem in a minimum of words, yet in an effective way.

DEFECT/BUG REPORT TEMPLATE: A simple bug report template may contain the following:
Reporter: your name and email address.
Product: in which product you found this bug.
Version: the product version, if any.
Component: the major sub-module of the product.
Platform: mention the h/w platform where you found this bug; the various platforms are 'PC', 'Mac', 'HP', 'Sun' etc.
Operating system: mention all OS where you found the bug, e.g. Windows, Linux, Unix, Sun OS, Mac OS. Mention the different OS versions also, if applicable, like Windows NT, Windows 2000, Windows XP etc.
Priority: when should the bug be fixed? Priority is generally set from P1 to P5, with P1 meaning 'fix the bug with highest priority' and P5 meaning 'fix when time permits'.
Severity: describes the impact of the bug. Types of severity: Blocker: no further testing work can be done. Critical: application crash, loss of data. Major: major loss of function. Minor: minor loss of function. Trivial: some UI enhancements. Enhancement: request for a new feature or some enhancement to an existing one.

Status: when you log the bug in any bug tracking system, by default the bug status is 'New'. Later on the bug goes through various stages like Fixed, Verified, Reopen, Won't Fix etc.
Assign To: if you know which developer is responsible for the particular module in which the bug occurred, then you can specify the email address of that developer. Else keep it blank; this will assign the bug to the module owner, or the manager will assign the bug to a developer.
URL: the page URL on which the bug occurred.
Summary: a brief summary of the bug, mostly in 60 words or fewer. Make sure your summary reflects what the problem is and where it is.
Description: a detailed description of the bug. Use the following fields in the description: Reproduce steps: clearly mention the steps to reproduce the bug. Expected result: how the application should behave on the above-mentioned steps. Actual result: what the actual result is on running the above steps, i.e. the bug behaviour.
These are the important parts of a bug report. Report type can be added as one more field to describe the bug type. The report types are typically: coding error, design error, new suggestion, documentation issue, hardware problem.

DEFECT/BUG REPORT TIPS: Bonus tips for writing a good bug report: report the problem immediately; reproduce the bug three times before writing the bug report; test the same bug occurrence on other similar modules; write a good bug summary; read the bug report before hitting the Submit button; do not use abusive language.

DEFECT TRACKING: As you monitor the progress of defect repair you update the information in the defect reporting tool. If a defect is detected in an application:
1. You initially report the defect; by default it is assigned the status new.
2. A quality assurance or project manager reviews the defect, determines a repair priority, changes its status to open and assigns it to a member of the development team.
3. A developer repairs the defect and assigns it the status fixed.
4. You retest the application, making sure that the defect does not recur. The quality assurance or project manager determines that the defect is actually repaired and assigns it the status closed.
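The sketch below models the defect statuses listed above as a Java enum with a simple transition check. The allowed transitions are one plausible reading of the life cycle described in these notes, not the rules of any particular defect tracking tool.

    import java.util.EnumSet;
    import java.util.Set;

    // Illustrative defect life cycle: which statuses a defect may move to next.
    public class DefectLifeCycle {

        enum Status { NEW, DEFERRED, ASSIGNED, FIXED, COULD_NOT_REPRODUCE, NEED_MORE_INFORMATION, REOPEN, CLOSED, REJECTED }

        static Set<Status> nextStates(Status current) {
            switch (current) {
                case NEW:      return EnumSet.of(Status.ASSIGNED, Status.DEFERRED, Status.REJECTED);
                case ASSIGNED: return EnumSet.of(Status.FIXED, Status.COULD_NOT_REPRODUCE, Status.NEED_MORE_INFORMATION, Status.DEFERRED);
                case FIXED:    return EnumSet.of(Status.CLOSED, Status.REOPEN);  // QA verifies the fix
                case REOPEN:
                case COULD_NOT_REPRODUCE:
                case NEED_MORE_INFORMATION:
                               return EnumSet.of(Status.ASSIGNED);                // QA adds detail and reassigns
                default:       return EnumSet.noneOf(Status.class);               // DEFERRED, CLOSED, REJECTED treated as terminal here
            }
        }

        public static void main(String[] args) {
            System.out.println(nextStates(Status.NEW));    // statuses a newly filed defect may move to
            System.out.println(nextStates(Status.FIXED));  // QA either closes or reopens a fixed defect
        }
    }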

DEFECT CAUSE ANALYSIS: Defects occur because human beings are fallible and there is time pressure, complex code, complexity of infrastructure, changed technologies, and/or many system interactions. Defect causal analysis offers a simple, low-cost method for systematically improving the quality of the s/w produced by a team, project, or organization.

DEFECT ANALYSIS & PREVENTION: Defect analysis is the process of analyzing a defect to determine its root cause. Defect prevention is the process of addressing root causes of defects to prevent their future occurrence.

DAY 12: LEVELS OF TESTING: The key to successful test strategies is picking the right level of testing at each stage of a project. A decision has to be made about the level of coverage required. Level of test: defined by a given environment. An environment is a collection of people, h/w, s/w, interfaces, data, etc. The different levels of testing are: unit testing, integration testing, system testing, acceptance testing. (chart)

UNIT TESTING: Primarily carried out by the developers themselves. Deals with the functional correctness and completeness of individual program units. A unit is the smallest testable piece of s/w that can be compiled, linked and loaded, e.g. functions/procedures, classes, interfaces. Test cases are written after coding. White box testing methods are employed by the developer during unit testing. Disadvantage: test cases are written to suit the programmer's implementation (not necessarily the specification), so it is better to use buddy testing. Buddy testing is a team approach to coding and testing: one programmer codes, the other tests, and vice versa.

Test cases are written by the tester (before coding starts). Better than the single-worker approach: objectivity, cross-training, and the test cases model the program specification/requirements.

UNIT TESTING PROCESS: During unit testing the developer will: first unit test coding standards, libraries, subroutines and critical modules; log the runs; document observations in the prescribed formats; back up the test data; carry out regression testing after fixing the defects.

UNIT TESTING STRATEGIES: A unit testing strategy can be developed based on any of the following: Top-Down Approach, Bottom-Up Approach, Isolation Approach.

UNIT TESTING TOP-DOWN: Control programs are tested first and then individual modules are handled. Disadvantages: only test stubs are used, which are complicated; errors in lower-level critical modules are traced late; designing the test cases requires structural knowledge of when the unit under test calls the other units.

UNIT TESTING BOTTOM-UP: Individual modules are tested and then integrated. Disadvantages: the sequence of units to be tested is constrained by the hierarchy; testing cannot overlap the development process, as units developed last are tested first; interface errors are encountered late in the testing process; changes made to higher units increase retesting and life cycle maintenance costs.

UNIT TESTING ISOLATION: Testing is done on small units and modules of the application. Disadvantage: does not provide any integration of units.

UNIT TESTING TOOLS (DIA)

UNIT TESTING TOOLS - JUnit: Code snippet of the minimum framework for getting a test started.
Line 1: import junit.framework.*;
Line 2:
Line 3: public class TestSimple extends TestCase {
Line 4:
Line 5:     public TestSimple(String name) {
Line 6:         super(name);
Line 7:     }
Line 8:

Line 9:     public void testAdd() {
Line 10:        assertEquals(2, 1 + 1);
Line 11:    }
Line 12:
Line 13: }
Line 1: the import statement includes and brings in the necessary JUnit classes. Line 3: the class definition; each class that contains tests has to extend TestCase. Line 5: the constructor of the class takes a String parameter which is passed to the base class, that is, TestCase. Line 9: all methods with names beginning with test are automatically executed by JUnit.

UNIT TESTING TOOLS - NUnit: Code snippet of the minimum framework for getting a test started.
Line 1: using NUnit.Framework;
Line 2: [TestFixture]
Line 3: public class TestSimple {
Line 4:     [Test]
Line 5:     public void LargestOf3() {
Line 6:         Assert.AreEqual(9, Cmp.Largest(new int[] {8, 9, 7}));
Line 7:         Assert.AreEqual(75, Cmp.Largest(new int[] {75, 4, 25}));
Line 8:         Assert.AreEqual(64, Cmp.Largest(new int[] {1, 64, 38}));
Line 9:     }
Line 10: }
Line 1: the using statement includes and brings in the necessary NUnit classes. Line 3: each class that contains tests has to be annotated with [TestFixture]. Line 5: the class contains individual methods annotated with [Test]. Lines 7 & 8: multiple Assert statements can be added in a single method.

INTEGRATION TESTING: Tests for correct interaction between system units - systems built by merging existing libraries, and modules coded by different people. Integration testing can expose problems with the interfaces among program components before trouble occurs in real-world program execution. There are two major forms of integration testing, viz. bottom-up integration testing (early testing is aimed at proving the feasibility and practicality of a particular module; clustering of various modules can be done; use of drivers) and top-down integration testing (control programs are tested first; modules are integrated one at a time; use of stubs). Software components may be integrated in an iterative way or all together (big bang).

INTEGRATION TESTING: Who does integration testing and when is it done? Done by developers/testers. Test cases are written when the detailed specification is ready. Testing continues throughout the project.

Where is it done? On the programmer's workbench. Why is it done? To discover inconsistencies in the combination of units. Involves the below forms of testing:
Regression Testing: a change of behaviour due to modification or addition is called 'regression'. Used to bring changes from worst to least.
Incremental Integration Testing: checks for bugs which are encountered when a module has been integrated into the existing system.
Smoke Testing: a battery of tests which checks the basic functionality of a program. If it fails, the program is not sent for further testing.

SYSTEM TESTING: Deals with testing the whole program system for its intended purpose. Finds disparities between implementation and specification. Usually where most resources are utilized. Follows black box testing techniques.

SYSTEM TESTING COVERAGE: System testing is an investigatory testing phase, where the focus is not only the design, but also the behaviour and even the believed expectations of the customer. System testing helps to: find errors in the overall system behaviour; establish confidence in system functionality; validate non-functional system requirements. Who performs system testing and when is it done? Done by the test team; test cases are written when the high-level design specification is ready. Where is it done? On a system test machine, usually in a simulated environment, e.g. VMware. Involves the below forms of testing:
Recovery Testing: the system is forced to fail and it is checked how well the system recovers from the failure.
Security Testing: checks the capability of the system to defend itself from hostile attacks on programs and data.
Load & Stress Testing: the system is tested for maximum load and the extreme stress points are figured out.
Performance Testing: used to determine the processing speed.
Reliability Testing: used to determine product reliability.
Installation Testing: installation and uninstallation are checked out on the target platform.

ACCEPTANCE TESTING: Demonstrates satisfaction of the user. Building the confidence of the client and user is the role of the acceptance test phase. Users are an essential part of this process. Usually merged with system testing; done by the test team and the customer; done in a simulated environment/real environment. Involves the below forms of testing:

UAT: ensures that the project satisfies the customer requirements.
Alpha Testing: the test done by the client at the developer's site.
Beta Testing: the test done by the end user at the client's site.
Long Term Testing: checks for fault occurrence during long-term usage of the product.
Compatibility Testing: determines how well the product handles product transition.

TESTING FORMS EXPLAINED: Some more types of testing are handled here: design testing, smoke testing, regression testing, retesting, performance testing.

DESIGN TESTING: S/w design is the stage of s/w development that transforms a specification into a structure suitable for implementation. Employs various verification and validation techniques to ensure that the modelled design meets the desired specifications. Fusion is the primary design method, as it provides comprehensive coverage from analysis through to implementation; it emphasizes process issues, including checklists for verifying the completion of each phase, and it is a method that integrates and extends earlier object-oriented methods.

SMOKE TESTING: An acceptance test prior to introducing a new build to the main testing process. Usually performed when a bug fix or change request is executed. Smoke tests are designed to confirm that changes in the code function as expected and do not destabilize an entire build. Smoke tests can be broadly categorized as functional tests or unit tests. Performed by developers or testers: developers - done before the release of the product/build; testers - done before moving to the next type of testing. Also called a Build Verification Test or Rattle Test.

SMOKE TESTING GUIDELINES: The following are some best practices for effective smoke testing:
Work with the developer to understand the changes in the code: how the change affects functionality and how the change affects the interdependencies of various components.
Conduct a code review before smoke testing: focuses on any changes in the code; the most effective and efficient method to validate code quality and guard against code defects and faults of commission.
Install private binaries on a clean debug build: the test must run in a clean test environment using the debug binaries for the files being tested.
Create daily builds: requires the team members to work together and encourages the developers to stay in sync.

Delayed iterations of builds may cause products with multiple dependencies to get out of sync. A process of building daily and smoke testing any changed or new binaries ensures high quality.
Web and load testing: here, smoke testing is short and light; it validates that everything is configured and running as expected before running your tests for performance or stress testing.

REGRESSION TESTING: A regression test re-runs previous tests against the changed s/w to ensure that the changes made in the current s/w do not affect the functionality of the existing s/w. A small test program is built from a subset of tests for each integration of new, modified, or fixed s/w. It tests the application as a whole for the modification in any module or functionality. It is difficult to cover the entire system in regression testing, so typically automation tools are used for these testing types.

REGRESSION TESTING METHODS:
Selecting Existing Test Cases: identify tests or test scripts that have already been run once; re-run these tests on a regular basis to ensure that any change to the s/w has not affected existing areas of the s/w; make use of the Templates and Sets functionality. Process: identify existing test scripts that should be re-run; create templates from test scripts; select and group templates into sets; select and deploy regression test sets; track the regression test results.
Incremental Regression Testing: the focus of the earlier regression testing approach is to achieve the required coverage of the modified program with minimal re-work; the focus of this approach is to verify that the behaviour of the modified program is unchanged, except where required by the modifications. Approach: identify test cases in the existing test suite on which the original and modified programs may produce different o/p.
Execution Slice Technique: key idea: if a test case does not execute any modified statement, it need not be re-run. Definition: an execution slice is the set of statements executed by a test case in a program. Approach: compute the execution slice of each test case, then re-run only those test cases whose execution slices contain a modified statement (a minimal selection sketch in code appears after the RETESTING notes below).
Dynamic Slice Technique: key idea: a test case whose o/p is not affected by a modification need not be re-run. Definition: a dynamic slice on the o/p of a program is the set of statements that are executed by a test case and that affect the o/p of the program for that test case. Approach: compute the dynamic slice on the o/p of each test case, then re-run only those test cases

whose dynamic slices contain a modified statement.
Relevant Slice Technique: key idea: a test case whose o/p is neither affected nor potentially affected by a modification need not be re-run. Definition: a relevant slice on the o/p of a program is the set of statements that are executed by a test case and that affect, or could potentially affect, the o/p of the program for that test case. Approach: compute the relevant slice on the o/p of each test case, then re-run only those test cases whose relevant slices contain a modified statement.
Test Case Prioritization: definition: the test case prioritization problem is to order the tests in a test suite so that faults can be revealed as early as possible during testing. Key idea: the test cases that are more likely to reveal faults should be run before test cases that are less likely to reveal faults.
Test case prioritization - how? Earlier approaches order test cases based on their total/additional coverage of testing requirements; this does not account for whether the exercised testing requirements actually influence the program o/p. A test case may be given higher priority than it should if it exercises many testing requirements but only a few of them can actually affect the o/p. Key observation: a modification that may affect the program o/p for a test case should affect some computation in the relevant slice of the program o/p for that test case. Key idea: take relevant slicing information into account!

RETESTING: Tests performed to ensure that the test cases which failed in the last execution pass after the defects behind those failures are fixed. Also known as confirmation testing.

RETESTING STLC PHASE: As a part of the STLC process: the defect is logged in the bug tracking tool; the developer fixes it and provides an official testable build; the failed test cases are re-run to confirm the defect fix.

RETESTING POINTERS: Pointers a tester needs to follow while retesting: execute the test case in exactly the same way as it was executed the first time; use the same i/p data and test environment; if the test case passes, this ensures that the defect got fixed and the tester can close that defect.
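The sketch referred to above illustrates the execution slice selection idea from the regression testing methods: re-run only those test cases whose execution slice (the set of statements they executed) contains a statement changed in the current modification. The test IDs and statement numbers are hypothetical.

    import java.util.*;

    // Regression test selection based on execution slices (illustrative only).
    public class RegressionSelection {

        public static List<String> selectTests(Map<String, Set<Integer>> executionSlices, Set<Integer> modifiedStatements) {
            List<String> toRerun = new ArrayList<>();
            for (Map.Entry<String, Set<Integer>> e : executionSlices.entrySet()) {
                // If the test executed at least one modified statement, it must be re-run.
                if (!Collections.disjoint(e.getValue(), modifiedStatements)) {
                    toRerun.add(e.getKey());
                }
            }
            return toRerun;
        }

        public static void main(String[] args) {
            Map<String, Set<Integer>> slices = new LinkedHashMap<>();
            slices.put("TC1", new HashSet<>(Arrays.asList(10, 11, 12)));
            slices.put("TC2", new HashSet<>(Arrays.asList(20, 21)));
            slices.put("TC3", new HashSet<>(Arrays.asList(11, 30)));

            Set<Integer> modified = new HashSet<>(Arrays.asList(11));  // statement 11 was changed
            System.out.println(selectTests(slices, modified));          // prints [TC1, TC3]
        }
    }

The dynamic and relevant slice techniques refine the same selection step by replacing the execution slice with a smaller (dynamic) or safer (relevant) set of statements per test case.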

DAY 13: TESTING TYPES: The various forms of testing can be broadly classified as:
Functional testing: testing based on verification of the functional specifications of the s/w component under test. Defines: what a system is supposed to do.
Non-functional testing: testing based on verification of the non-functional specifications of the s/w component. Defines: how a system is supposed to be.

FUNCTIONAL TESTING: Validating that an application or web site conforms to its specification and correctly performs all its required functions. It entails a series of tests which perform a feature-by-feature validation of behaviour, using a wide range of normal and erroneous i/p data. Testing can be performed on an automated or manual basis using black box or white box methodologies.

FUNCTIONAL TESTING TYPES: Types of testing covered: unit testing, smoke testing, sanity testing, integration testing (top-down and bottom-up), system testing, regression testing, pre user acceptance testing, white box & black box testing.

NON-FUNCTIONAL TESTING: Non-functional requirements specify the system's quality characteristics or quality attributes. Non-functional testing is designed to evaluate the readiness of a system according to several criteria not covered by functional testing. It provides confidence that the system functions in a secure, timely and usable manner and operates under likely error conditions. Non-functional testing is also known as technical testing or technical requirements testing.

NON-FUNCTIONAL TESTING TYPES: Types of testing covered: performance testing, load testing, stress testing, usability testing, scalability testing, reliability testing, recovery testing, security testing, data integrity testing, interoperability testing, installation testing, compatibility testing.

SANITY TESTING: A narrow regression test that focuses on one or a few areas of functionality.

Sanity testing is usually narrow and deep. It is used to determine that a small section of the application is still working after a minor change.

SANITY TESTING V/S SMOKE TESTING:
SMOKE: 1. Smoke testing is a shallow and wide approach whereby all areas of the application are tested without going too deep. 2. A smoke test is scripted, using either a written set of tests or an automated test. 3. It is designed to touch every part of the application in a cursory way. It's shallow and wide. 4. It is conducted to ensure that the most crucial functions of a program are working, without bothering with finer details. 5. Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing.
SANITY: 1. Sanity testing is a narrow regression test that focuses on one or a few areas of functionality. It is usually narrow and deep. 2. A sanity test is usually unscripted. 3. A sanity test is used to determine that a small section of the application is still working after a minor change. 4. It is cursory testing. This level of testing is a subset of regression testing. 5. Sanity testing is used to verify whether requirements are met or not, checking all features breadth-first.

PERFORMANCE TESTING: Performance testing is a process of testing an application in production-like environments with the intent of realizing the QoS (Quality of Service) requirements. Performance measures: processing speed, response time, efficiency. It is designed to test the run-time performance of s/w. It occurs throughout all steps in the testing process (test levels).

PERFORMANCE TESTING WHY? To measure and monitor the performance of business infrastructure consisting of shared resources. To protect IT investments by predicting scalability and performance. To increase uptime of mission-critical systems. To identify performance bottlenecks in the application. To predict the peak processing ability of the application before deployment. To aid in capacity projection. To analyze the effect of h/w and/or s/w changes.

PERFORMANCE TESTING BENEFITS: Identification of performance bottlenecks before production deployment. Helps collect metrics that allow tuning and capacity planning of the IT infrastructure.

PERFORMANCE TEST MODELS: Classified as per client requirements:
Production Release: all architectural tiers are performance tested for all possible

performance bottlenecks before the application goes live in the production environment.
System Upgrade: verify the performance of the application after a s/w / h/w upgrade and compare it with the pre-upgrade application performance. s/w upgrade: app server/web server/db version upgrade, OS patch/service pack upgrade etc. h/w upgrade: increase in RAM / # of processors, n/w bandwidth etc.
Production Issue: simulate a performance issue observed in the production environment and conduct a Root Cause Analysis.
Growth Projection: subjecting the application to higher loads in line with future user growth projections, to predict the application behaviour in advance.
Classified as per system architecture:
OLTP: simulate real-time user concurrency to validate RT and throughput for an Online Transaction Processing system.
Batch Programs: validate the scalability of a batch application by testing for various batch windows.
Component Based: determine the performance of a particular component of the system architecture by conducting isolated testing, e.g. TIBCO, MQSeries, Crystal Reports, Documentum etc.

PERFORMANCE TESTING TERMINOLOGY:
Work Load: the number of user requests or batch operations.
Throughput: the number of requests processed in unit time; the amount of data transferred from one place to another and processed in a specified amount of time; can be transactions per second or bytes per unit time.
Arrival Rate: the request arrival rate is equal to the system throughput.
Response Time: elapsed time from submitting a request to the time the response is returned.
Queue Time: average number of customers in a queue at a particular time.
Residence Time: the time spent by a customer at a service center.
Service Demand: total average time spent by a request of a given type in obtaining service from a given resource, same as the average queue waiting time.
Open System: processing is independent of the type of incoming jobs.
Closed System: processing depends on the behaviour of incoming jobs.
Little's Law: average queue length = system throughput x response time. (A worked example appears after the process notes below.)

PERFORMANCE TESTING PROCESS: (DIA) EXPLAINED:
Test Requirement Analysis: determine performance requirements from a response time and resource utilization perspective.
Performance Test Strategy and Planning: identify the load generation and monitoring approach.
Performance Test Design and Development: prepare load generation scripts and set up the test environment.
Performance Test Execution and Result Generation: execute tests and capture results.
Performance Test Results Analysis: analyze results and determine bottlenecks.
Post-Tuning Testing and Result Analysis: verify that the performance fixes have enhanced performance.
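The worked example referred to above applies Little's Law as stated in the terminology: the average number of requests in the system equals throughput multiplied by response time. The numbers are illustrative only.

    // Worked example of Little's Law (illustrative numbers).
    public class LittlesLaw {
        public static void main(String[] args) {
            double throughput = 50.0;    // requests processed per second
            double responseTime = 0.2;   // average response time in seconds
            double avgInSystem = throughput * responseTime;
            System.out.println("Average requests in the system: " + avgInSystem);  // 10.0

            // Rearranged form: if 100 virtual users are continuously active and throughput
            // stays at 50 req/s, the average response time must be 100 / 50 = 2 seconds.
            double users = 100.0;
            System.out.println("Implied response time: " + (users / throughput) + " s");
        }
    }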

PERFORMANCE TESTING PARAMETERS:
Basic measures: % CPU utilization, memory utilization, I/O usage, % network utilization, % disk time, response time, throughput (# of transactions per second). Other application & DB server measures: connection pooling, # of sessions (active/inactive), thread pool size, hit ratio (buffer, cache, latch...), locks, top waits.
Advanced measures: Memory - committed bytes, virtual memory, memory pages/sec, memory page faults/sec, memory cache faults/sec, physical memory available (bytes). CPU utilization - processor % interrupts/sec, process thread count. Disk I/O - disk time, disk queue length. Network - n/w interface output queue length, n/w interface bytes total/sec, current bandwidth.

PERFORMANCE TESTING TYPES: The various forms of performance testing can be listed as: load testing, stress testing, endurance testing, volume testing, reliability testing, scalability testing, availability testing.

LOAD TESTING: What is it? It simulates a real-time user load on the application. The simulated user load is the normal load that the application is subjected to, or the load expected on the application when it goes live. The system is subjected to multiple virtual users and then monitored for performance. The main objective of load testing is to determine the response time of the s/w for critical transactions and make sure they are within the specified limit. (A thread-based sketch of this kind of simulation appears after the stress testing environment notes below.)
Benefit: load testing the application prior to production ensures the application will be stable, and any performance issues can be addressed in the pre-production phase.
Purpose of a load test: Quantification of risk: through formal testing, determine the likelihood that system performance will meet the formally stated performance expectations of stakeholders, such as response time requirements under given levels of load. Load testing does not mitigate risk directly but, through identification and quantification of risks, presents tuning opportunities and an analysis of

options that mitigate risk.
Determination of Minimum Configuration: through formal testing, determine the minimum configuration that will allow the system to meet the formally stated performance expectations of stakeholders, so that extraneous h/w, s/w and the associated cost of ownership can be minimized. This is a Business Technology Optimization (BTO) type of test.

LOAD TEST CRITERIA: Criteria for determining the business functions/processes to be included in a load test. Basis for inclusion in a load test:
1. High-frequency transactions: the most frequently used transactions have the potential to impact the performance of all other transactions if they are inefficient.
2. Mission-critical transactions: the important transactions that facilitate the core objectives of the system should be included; failure under load of these transactions has the greatest impact.
3. Read transactions: at least one READ ONLY transaction should be included, so that the performance of such transactions can be differentiated from other complex transactions.
4. Update transactions: at least one update transaction should be included, so that the performance of such transactions can be differentiated from other transactions.

LOAD TESTING PERFORMANCE CAUSES: Possible causes for slow system performance under load include, but are not limited to, the following: application server(s) or s/w; database server(s); n/w latency, congestion, etc.; client-side processing; load balancing b/w multiple servers.

STRESS TESTING INTRODUCTION: What is it? It subjects the application to a gradually increasing load until the application reaches its saturation point (in terms of response time, throughput, etc.) or its break point. Parameters affecting system performance, such as system configuration, processing capacity, n/w bandwidth, disk space, etc., are taken into consideration while performing the test. The final component of stress testing is determining how well or how fast a system can recover after an adverse event.
Benefit: stress testing ensures that an application which is tested for the expected load can take spikes in the load, like an increase in the rate of transactions, and studies their impact on the system resources. It helps tune and configure the system optimally.

STRESS TESTING ENVIRONMENT: When conducting a stress test, an adverse environment is deliberately created and maintained. Actions may include: running several resource-intensive applications on a single computer at the same time; attempting to hack into a computer and use it as a zombie to spread spam; flooding a server with useless e-mail messages; making numerous concurrent attempts to access a single web site; attempting to infect a system with viruses, Trojans, spyware or other malware.
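The sketch referred to earlier makes the load/stress ramp-up idea concrete: it simulates an increasing number of concurrent virtual users against a transaction and records the average response time at each step. The callTransaction() body is a stand-in; in practice the request would hit the real application, usually through a dedicated load testing tool, and the user counts and sleep time here are arbitrary assumptions.

    import java.util.concurrent.*;

    // Rough, self-contained load/stress ramp sketch (illustrative only).
    public class SimpleLoadTest {

        static void callTransaction() throws InterruptedException {
            Thread.sleep(50);  // placeholder for the real request under test
        }

        static double measureAverageMillis(int virtualUsers, int requestsPerUser) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
            CompletionService<Long> results = new ExecutorCompletionService<>(pool);
            int total = virtualUsers * requestsPerUser;
            for (int i = 0; i < total; i++) {
                results.submit(() -> {
                    long start = System.nanoTime();
                    callTransaction();
                    return (System.nanoTime() - start) / 1_000_000;  // elapsed milliseconds
                });
            }
            long sum = 0;
            for (int i = 0; i < total; i++) {
                sum += results.take().get();
            }
            pool.shutdown();
            return (double) sum / total;
        }

        public static void main(String[] args) throws Exception {
            // Ramp the load up step by step, as a stress test would, and watch the response time trend.
            for (int users = 10; users <= 50; users += 10) {
                System.out.println(users + " virtual users -> avg " + measureAverageMillis(users, 20) + " ms");
            }
        }
    }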

STRESS TESTING EXAMPLES: 1. Stress test of the CPU: run the s/w application at 100% load for some days, which ensures that the s/w runs properly under heavy usage conditions. 2. S/w with a minimum memory requirement of 512 MB RAM: the s/w application is tested on a machine which has 512 MB memory under extensive load to find out the system/s/w behaviour.

STRESS TESTING V/S LOAD TESTING:
STRESS TESTING: 1. Stress testing focuses on identified transactions, pushing them to a level that breaks the transactions or the system. 2. Transactions are selectively heavily stressed while the database may not experience much load. 3. It is the loading of concurrent users over and beyond the level the system can handle.
LOAD TESTING: 1. Load testing examines the entire environment and database, while measuring the response time. 2. The database experiences a heavy load while some transactions may not be stressed. 3. It is the loading of the normal, real-time user load which is expected in production.

ENDURANCE TESTING: What is it? It is executed with the expected user load sustained over a longer period of time, with normal ramp-up and ramp-down times. It checks the reaction of the subject under test in a simulated environment for a given duration and a given threshold. Also known as soak testing or longevity testing.
Benefit: endurance testing helps uncover performance bottlenecks, like memory leaks, which become visible only when the system is subjected to normal load for a prolonged time.

VOLUME TESTING: What is it? It is typically like load testing, except that a large volume of data is populated in the database. Volume testing confirms that any values that may become large over time (logs, data files, etc.) can be accommodated by the program and will not cause it to stop working or degrade its operation. Volume testing seeks to verify the physical and logical limits of a system's capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization's business processing.
Benefit: to study the application behaviour when the database is populated with production-like data, and to find the impact on the application response time and overall database health.

RELIABILITY TESTING INTRODUCTION: What is it? Verifies the probability of failure-free s/w operation for a specified period of time in a specified environment. Involves design steps - stress, test/analyze/fix - and continuously increasing stress testing techniques. Obtains raw failure time data for product life analysis.
Benefit:

Helps determine product reliability and whether the s/w meets the customer's reliability requirements.

RELIABILITY TESTING PURPOSE: The specific purposes of reliability tests are as follows: product reliability assurance; evaluating new designs, components, processes, etc.; investigating test methods; accident/failure countermeasures; determining failure distribution; collecting reliability data; reliability control.

RELIABILITY TESTING METRICS: Tracks failure intensity (failures per transaction, failures per hour), which helps in: guiding the test process; determining the feasibility of the s/w release; determining whether the s/w meets the customer's reliability requirements.

SCALABILITY TESTING: What is it? It involves loading the system with an increasing load (simulating the expected business growth of the application down the years). Determines the maximum transactions per second (Max TPS). Measures the system's capability to scale up or scale out in terms of the user load supported, the number of transactions, the data volume etc.
Benefit: to determine how effectively the system can scale to accommodate the increasing load, and for system capacity planning and procurement of more resources down the line.

SCALABILITY TESTING ATTRIBUTES: Performance attributes have to be determined at every scaling point. The attributes are as listed below: response time; throughput; screen transition time {session time, reboot time, printing time, transaction time, task execution time}; hits per second, requests per second, transactions per second; performance measurements with number of users; performance measurement under load; CPU usage, memory usage {memory leakage, thread leakage}, bottlenecks {memory, cache, process, processor, disk and network}; n/w usage {bytes, packets, segments, frames received and sent per sec, bytes total/sec, current bandwidth, connection failures, connections active, failures at the n/w interface level and protocol level}; web server {requests and responses per second, services succeeded and failed, server problems if any}.

AVAILABILITY TESTING: What is it? It means running an application for a planned period of time, collecting failure events and repair times. It focuses on aspects like the up-time of the application and IT infrastructure.

Benefit: to ensure the service level agreement (SLA) is met, by comparing the achieved availability percentage to the originally desired percentage.

AVAILABILITY TESTING FAILURE EVENTS: Failure events associated with availability testing: inadequate change management; lack of ongoing monitoring and analysis; operation errors; weak code; lack of a QA process; interaction with external applications; different operating conditions; unusual events; h/w failures.

AVAILABILITY TESTING CONCEPTS:
Test the change control process: the change control process is a large source of downtime-causing errors.
Test catastrophic failure: validate the catastrophic recovery procedures; create an outage of a catastrophic nature and test the recovery process. This validates the correctness of the recovery procedure and provides a measure of confidence in a well-prepared recovery response team.
Test the failover technologies: validate the implemented failover technologies.
Test the monitoring technology: analyze the Windows Management Instrumentation (WMI) data using the intended monitoring reports and ensure that resource consumption data and the test outages are closely reviewed. Ensure that the necessary failure, availability, and trend analysis data is being captured.
Test the help desk procedures: for critical applications, the help desk must be trained and ready to handle customer inquiries and failure scenarios. Test the escalation process.
Test for resource conflicts: test for conflicts by evaluating an application's interactions with other system processes. Should be conducted in a production-like environment where transient workloads cause multiple applications to compete for resource allocation.

DAY 14: NON-FUNCTIONAL TESTING TYPES: Other non-functional testing types to be touched upon are: usability testing, recovery testing, security testing, installation/uninstallation testing, compatibility & migration testing.

USABILITY TESTING INTRODUCTION:

Determines how well the user will be able to understand and interact with the system; user-friendliness checklists. The interface is developed keeping in mind the educational background and intelligence of the user. System navigation is verified here. O/p and error messages are checked to see whether they are meaningful and simple. It is a black-box testing technique.

USABILITY TESTING GOALS: The four main goals of usability testing can be identified as:
Performance: how much time, and how many steps, are required for people to complete basic tasks?
Accuracy: how many mistakes did people make? Were they fatal, or recoverable with the right information?
Recall: how much does the person remember afterwards, or after periods of non-use?
Emotional Response: how does the person feel about the tasks completed? Is the person confident, stressed? Would the user recommend this system to a friend?

USABILITY TESTING METHODS:
Hallway testing: testers are random people indicative of a cross-section of end users. Particularly effective in the early stages of a new design, when designers are looking for critical problems.
Remote usability testing: the user and the evaluators are separated over space and time. Remote testing can be either synchronous or asynchronous. Synchronous involves real-time one-on-one communication between the evaluator and the user (video conferencing, WebEx). Asynchronous involves the evaluator and user working separately. Disadvantages: synchronous remote testing may lack the immediacy and sense of presence desired to support a collaborative testing process; approaches may need to be sensitive to the cultures involved across locations; reduced control over the testing environment; distractions and interruptions experienced by the participants in their native environment.
Expert review: relies on bringing in experts with experience in the field to evaluate the usability of a product.
Automated expert review: provides expert review through the use of programs with given rules for good design and heuristics; quick and consistent.
Use an iterative design approach: develop and test prototypes through an iterative design approach to create the most useful and usable application.
Solicit test participants' comments: solicit usability participants' comments either during or after the performance of the tasks.

Evaluate the application before and after making changes: conduct 'before and after' studies when revising an application, to determine changes in usability.
Prioritize tasks: give high priority to usability issues preventing 'easy' tasks from being easy.

USABILITY TESTING STEPS:
Distinguish between frequency and severity: distinguish between frequency and severity when reporting on usability issues and problems.
Select the right number of participants: select the right number of participants when using different usability techniques. Using too few may reduce the usability of an application; using too many wastes valuable resources.
Use the appropriate prototyping technology: create prototypes using the most appropriate technology for the phase of the design, the required fidelity of the prototype, and the skill of the person creating the prototype.
Use inspection evaluation results cautiously: inspection evaluations include heuristic evaluations, expert reviews, and cognitive walkthroughs. Inspection evaluation should be used cautiously because it appears to detect far more potential problems than actually exist.
Recognize the 'evaluator effect': multiple evaluators evaluating the same interface detect markedly different sets of problems.
Apply automatic evaluation methods: use appropriate automatic evaluation methods to conduct initial evaluations of web sites.
Use cognitive walkthroughs cautiously: cognitive walkthroughs are often conducted to resolve obvious problems before conducting performance tests. They appear to detect far more potential problems than actually exist, when compared with performance usability testing results.
Choosing laboratory vs remote testing: testers can use either laboratory or remote usability testing because they both elicit similar results.
Use severity ratings cautiously: prioritize design problems found either by inspection evaluation or by expert reviews.

RECOVERY TESTING INTRODUCTION: The activity of testing how well an application is able to recover from crashes, h/w failures and other catastrophic problems. The system is deliberately forced to fail and has to return to the actual point/page in the application. If the system recovers by itself, the mechanism for re-initialization and checkpointing is validated for correctness. The type or extent of recovery is specified in the requirement specifications.

RECOVERY TESTING EXAMPLES:
Loss of database integrity: while an application is running, suddenly restart the computer, and afterwards check the validity of the application's data integrity.
Loss of communication: while an application is receiving data from a n/w, unplug the connecting cable. After some time, plug the cable back in and analyze the application's ability to continue receiving data from

the point at which the n/w connection disappeared.
Evaluate adequacy of backup data: restart the system while a browser has a definite number of sessions open. Afterwards, check that the browser is able to recover all of them.

RECOVERY TESTING USAGE: Recovery testing includes reverting to a point where the integrity of the system is known, then reprocessing up to the point of failure. The time taken to recover depends upon: the number of restart points; the volume of the application; the training and skill of the people conducting the recovery activities; the tools available for recovery.

RECOVERY TESTING OBJECTIVES: To ensure operations can be continued after a disaster. Recovery testing verifies the recovery process and the effectiveness of the recovery process: adequate backup data is preserved and kept in a secure location; recovery procedures are documented; recovery personnel have been assigned and trained; recovery tools have been developed and are available.

RECOVERY TESTING HOW TO USE: Procedures, methods, tools and techniques are assessed to evaluate their adequacy. After the system is developed, a failure can be introduced into the system to check whether the system can recover. A simulated disaster is usually performed on one aspect of the application system. When there are multiple failures, then, instead of taking care of all of them at once, recovery testing should be carried out in a structured fashion.

RECOVERY TESTING PARTICIPANTS: Participants: system analysts, testing professionals, management personnel. This type of testing is used when the user says that the continuity of the system is needed in order for the system to perform or function properly. The user should then estimate the losses and the time span over which to carry out recovery testing.

SECURITY TESTING INTRODUCTION: Security testing aims to determine that a system protects data and maintains functionality as intended. The six basic security concepts that need to be covered during security testing are: confidentiality, integrity, authentication, authorization, availability and non-repudiation.

SECURITY TESTING CONCEPTS:

The six basic security concepts that need to be covered during security testing are:
Confidentiality: a security measure which protects against the disclosure of information to parties other than the intended recipient.
Integrity: a measure intended to allow the receiver to determine that the information it is being provided is correct.
Authentication: this might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labelling claims it to be, or assuring that a computer program is a trusted one.
Authorization: the process of determining that a requester is allowed to receive a service or perform an operation. Access control is an example of authorization.
Availability: assuring that information and communication services will be ready when expected. Information must be kept available to authorized persons when they need it.
Non-Repudiation: a measure intended to prevent the later denial of message transfer b/w sender and receiver. Non-repudiation is a way to guarantee that the sender of a message cannot later deny having sent the message and that the recipient cannot deny having received the message.

SECURITY TESTING TERMINOLOGY: Common terms used for the delivery of security testing:
Discovery: the purpose of this stage is to identify the systems within scope and the services in use. Version detection may highlight deprecated versions of s/w/firmware and thus indicate potential vulnerabilities.
Vulnerability Scan: identifies known security issues by using automated tools to match conditions with known vulnerabilities. The reported risk level is set automatically by the tool, with no manual verification or interpretation by the test vendor.
Vulnerability Assessment: uses discovery and vulnerability scanning to identify security vulnerabilities and places the findings into the context of the environment under test.
Security Assessment: builds upon vulnerability assessment by adding manual verification to confirm exposure, but does not include the exploitation of vulnerabilities to gain further access. Verification could be in the form of authorized access to a system to confirm system settings, and involves examining logs, system responses, error messages, codes, etc. It works to gain broad coverage of the system under test, but not the depth of exposure that a specific vulnerability could lead to.
Penetration Test: a penetration test simulates an attack by a malicious party. Each test is approached using a consistent and complete methodology, in a way that allows the testers to use their problem-solving abilities, the o/p from a range of tools and their own knowledge of networking and systems to find vulnerabilities that

Security Audit: driven by an audit/risk function to look at a specific control or compliance issue. Characterized by a narrow scope, this type of engagement may make use of any of the earlier approaches discussed (vulnerability assessment, security assessment, penetration test).
Security Review: verification that industry or internal security standards have been applied to system components or the product. This activity is completed through gap analysis and utilizes build/code reviews or reviews of design documents and architecture diagrams. It does not utilize any of the earlier approaches (vulnerability assessment, security assessment, penetration test, security audit).
SECURITY TESTING HACKER PROFILE:
A security tester has to understand the various hacker profiles in order to be effective. Skill set required: proficient with computers and programming; in-depth knowledge of target platforms such as Windows/Unix; familiar with vulnerability research and with a mastery of various hacking techniques.
Black Hat: also known as crackers; individuals with extraordinary computing skills who resort to malicious and destructive activities, typically hacking for financial gain.
White Hat: also known as security analysts; individuals with hacker skills who use them for defensive purposes.
Gray Hat: individuals who work offensively and defensively at various times.
INSTALLATION/UNINSTALLATION TESTING:
Installation testing is often the most under-tested area in testing. It is performed to ensure that all installed features and options function properly, and to verify that all necessary components of the application are indeed installed. The installation process also establishes whether the system can be installed or uninstalled on different platforms. It covers full, partial, or upgrade install/uninstall processes on different h/w and s/w environments.
INSTALLATION/UNINSTALLATION TESTING TIPS:
Below are some tips for doing installation testing:
Check whether, while installing the application, the product checks for the dependent patches/s/w.
Check whether the installer offers a default installation path; it should also allow installation to a location other than the default path.
Installation should start automatically when the CD is inserted / the setup is executed.
If the product is a new version of an older product, the previous version should not be allowed to install over the newer version.
The installer should provide Remove/Repair options.

Check whether the product can be installed over a n/w.
Try to install the application/s/w on different platforms/OS.
Try to install on a system having less memory/RAM/HDD than that required by the product.
Uninstallation: check that all the registry keys, files, dlls, shortcuts and ActiveX components are removed from the system after uninstalling the product.
INSTALLATION (EXAMPLE DIAGRAM)
INSTALLATION TESTING EXAMPLE:
Use Flow Diagrams: flow diagrams simplify the task. The flow diagram depicted earlier is for a basic installation testing test case.
Install the Full Application: if a compact basic version of the application was previously installed, use the same path for the full version.
Files on Disk: if different files are being written to disk during installation, use the flow diagram in reverse order to test uninstallation of all the installed files.
Automated Testing: use flow diagrams to automate the testing effort; convert the diagrams into automated scripts.
Disk Space Requirement: test the installer scripts used for checking the required disk space. Verify whether more disk space is utilized during installation than stated; if yes, flag this as an error. Test the disk space required on different file system formats. Set up a dedicated system only for creating disk images.
Distributed Testing Environment: create a master machine which will drive different slave machines on the n/w. Start installation simultaneously on different machines from the master system. A distributed environment saves time and helps manage tests effectively.
Registry Changes: use s/w freely available in the market to verify registry changes on successful installation. Verify the registry against your expected change list after installation.
Break the Installation Process: verify the behavior of the system and whether it recovers to its original state without any issues.
Disk Space Checking: a crucial check in the installation testing scenario. Check disk space availability before and after installation, using both manual and automated methods (a minimal scripted check is sketched below). Check system behavior under low disk space conditions during installation.
Test for Uninstallation: before each new iteration of installation, make sure that all the files written to disk are removed after uninstallation. Check the reboot option after uninstallation, both rebooting manually and forcing the system not to reboot.
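The disk-space check described above can be scripted. The following minimal sketch measures free space before and after running a hypothetical silent installer and flags the run if it consumes noticeably more than the documented requirement; the installer command, install drive and the 500 MB figure are assumptions for illustration.

import shutil
import subprocess

INSTALLER_CMD = ["./setup.exe", "/S"]   # hypothetical silent-install command
STATED_REQUIREMENT_MB = 500             # assumed figure from the product documentation
INSTALL_DRIVE = "/"                     # drive/partition the product installs to

def free_mb(path):
    # shutil.disk_usage reports total/used/free bytes for the given path.
    return shutil.disk_usage(path).free // (1024 * 1024)

before = free_mb(INSTALL_DRIVE)
subprocess.run(INSTALLER_CMD, check=True)   # run the installer under test
after = free_mb(INSTALL_DRIVE)

consumed = before - after
print(f"installation consumed approximately {consumed} MB")
# Flag as an error if noticeably more space is used than documented.
assert consumed <= STATED_REQUIREMENT_MB * 1.1, "installer exceeded the stated disk requirement"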

COMPATIBILITY & MIGRATION TESTING:
Compatibility Testing: compatibility testing is conducted on the application to evaluate the application's computing environment factors such as OS, DB, bandwidth, etc. It tests how well a system performs in a particular environment that includes h/w, n/w, OS and other s/w. Compatibility testing can be automated using automation tools or performed manually, and is a part of non-functional s/w testing.
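As one concrete case of environment compatibility, browser checks are commonly parameterized so that the same functional expectation runs on every supported browser. The sketch below assumes Selenium WebDriver with Firefox and Chrome drivers installed; the application URL and page title are placeholders for the system under test.

import pytest
from selenium import webdriver

APP_URL = "https://app.example.com/login"   # hypothetical application under test

BROWSERS = {
    "firefox": webdriver.Firefox,
    "chrome": webdriver.Chrome,
}

@pytest.mark.parametrize("name", BROWSERS)
def test_login_page_renders(name):
    driver = BROWSERS[name]()               # launch the browser under test
    try:
        driver.get(APP_URL)
        # The functional expectation stays the same regardless of browser.
        assert "Login" in driver.title
    finally:
        driver.quit()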

COMPUTING ENVIRONMENT ELEMENTS:
The various elements of the computing environment for compatibility testing are:
Hardware: evaluation of the performance of the system/application/website on a certain h/w platform, e.g. various combinations of chipsets (such as Intel, Macintosh, GeForce), motherboards, etc.
Browser: evaluation of the performance of the system/website/application on a certain type of browser, e.g. compatibility with browsers like IE, Firefox, etc.
Network: evaluation of the performance of the system/application/website on a n/w with varying parameters such as bandwidth, variance, etc., set up to replicate the actual operating environment.
Peripherals: evaluation of the performance of the system/application in connection with various systems/peripheral devices connected directly or via the network, e.g. printers, fax machines, telephone lines, etc.
Compatibility Between Versions: evaluation of the performance of the system/application in connection with its own predecessor/successor versions (backward and forward compatibility), e.g. Windows 98 was developed with backward compatibility for Windows 95.
Software: evaluation of the performance of the system/application in connection with other software, e.g. s/w compatibility with operating tools for the n/w, web servers, messaging tools, etc.
Operating System: evaluation of the performance of the system/application in connection with the underlying operating system on which it will be used.
Database: database compatibility testing is used to evaluate an application/system's performance in connection with the database it will interact with.
MIGRATION TESTING:
As companies migrate from one platform or operating system to another, migration testing assures that the applications, infrastructure, and data will make the transition intact.
MIGRATION TESTING REASONS:
Reasons for migration could be all or some of the below:
Achieve cost savings
H/w or s/w end-of-life
Enhance performance and scalability
Minimize risk with existing product viability
Complete a consolidation and standardization initiative on the technology or standards front
Accommodate a change in business structure, e.g. due to a recently completed acquisition or merger
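Whatever the reason for the move, a core migration check is that the data arrives intact. The minimal sketch below compares per-table row counts between a source and a target database, using Python's sqlite3 module purely as a stand-in (a real Sybase-to-Oracle migration would use the corresponding database drivers); the table names and file paths are illustrative assumptions.

import sqlite3

TABLES = ["customers", "orders", "invoices"]   # hypothetical list of migrated tables

def row_counts(db_path):
    """Return a table -> row count mapping for one database."""
    conn = sqlite3.connect(db_path)
    counts = {}
    for table in TABLES:
        counts[table] = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    conn.close()
    return counts

source = row_counts("legacy.db")      # placeholder path for the source system
target = row_counts("migrated.db")    # placeholder path for the target system

for table in TABLES:
    assert source[table] == target[table], f"row count mismatch in {table}"
print("row counts match for all migrated tables")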

MIGRATION TESTING TYPES:
The various types of migration testing are used to ensure performance, reliability, scalability and availability, cost savings, risk mitigation, and consolidation. Types used:
Database Migration Testing, e.g. Sybase to Oracle
Technology Migration Testing, e.g. legacy to web, EDI to Web Services, Java to .NET, etc.
Standards Migration Testing, e.g. ICD9 to ICD10 in healthcare, proprietary to FIX protocol, legacy files to XML, etc.
OTHER TESTING TYPES:
Other additional testing types covered here are: Comparison Testing, Exploratory Testing, Ad-Hoc Testing, Walkthrough Testing, Alpha Testing, Beta Testing.
COMPARISON TESTING:
Comparison of the product's strengths and weaknesses with previous versions or other similar products. Basically compares the performance parameters of the s/w.
COMPARISON TESTING ATTRIBUTES:
Some of the key attributes used for comparing applications are: features, ease of use, performance, price, usability, reliability.
EXPLORATORY TESTING:
Often taken to mean a creative, informal s/w test that is not based on formal test plans or test cases; testers may be learning the s/w as they test it. Described as simultaneous learning, test design and test execution. The quality of the testing depends on the tester's skill at inventing test cases and finding defects.
EXPLORATORY TESTING TYPES:
Basic types of exploratory testing:
Pair Testing: two persons create test cases together; one performs them and the other documents.
Session-Based Testing: a method specifically designed to make exploratory testing auditable and measurable on a wider scale.

EXPLORATORY TESTING BASICS:
Basic fundamentals of exploratory testing: test design, careful observation, critical thinking, diverse ideas, rich resources.
EXPLORATORY TESTING ADVANTAGES & DISADVANTAGES:
Advantages:
Minimal preparation, accelerated bug detection.
At execution time, the approach tends to be more intellectually stimulating than the execution of scripted tests.
Testers can use deductive reasoning based on previous results to guide their future testing.
Programs that pass certain tests tend to continue to pass the same tests and are more likely to fail other tests or scenarios that are yet to be explored.
Disadvantages:
Tests invented and performed on the fly can't be reviewed in advance.
It is difficult to show exactly which tests have been run.
Exploratory test ideas cannot easily be revisited to repeat specific details of the earlier tests.
AD-HOC TESTING:
S/w testing performed without planning and documentation. Similar to exploratory testing, but often taken to mean that the testers already have a significant understanding of the s/w before testing it. The least formal test method.
AD-HOC TESTING IN STLC:
Ad-hoc testing can be performed throughout the test cycle. In the beginning, it helps testers understand the program and discover bugs early. In the middle of the project, the data is used to set priorities and timetables. Towards closure, it can be used to examine defect fixes more rigorously.
AD-HOC TESTING STRENGTHS:
Discovers bugs early.
Finds weaknesses in the test strategy and explains relationships b/w sub-systems.
A tool for verifying the completeness of the testing.
Not restricted by a plan or methodology of testing.
Useful when an interim release of the s/w has to be handed over for demos or other situations where the s/w is not required to be of perfect quality, but embarrassing defects should not be present in the application.
WALK THROUGH TESTING:
A method of conducting an informal group/individual review is called a walkthrough. A designer or programmer leads members of the development team and other interested parties through a s/w product, and the participants ask questions and make comments about possible errors, violations of development standards and other problems, or may suggest improvements to the article.

A walkthrough can be pre-planned or conducted on an as-needed basis; generally, the people working on the work product are involved in the walkthrough process.

WALK THROUGH TESTING PURPOSE:
The purpose of walkthrough testing is to:
Find problems
Discuss alternative solutions
Focus on demonstrating how the work product meets all requirements
WALK THROUGH TESTING ROLES:
Three specialist roles in a walkthrough:
Leader: conducts the walkthrough, handles administrative tasks, and ensures orderly conduct (and is often the author).
Recorder: notes all anomalies (potential defects), decisions, and action items identified during the walkthrough meeting; generates the minutes of the meeting at the end of the walkthrough session.
Author: presents the s/w product in a step-by-step manner at the walkthrough meeting; responsible for completing most action items.
WALK THROUGH TESTING PROCESS:
The author describes the artifact to be reviewed to the reviewers during the meeting.
Reviewers present comments, possible defects, and improvement suggestions to the author.
The recorder records all defects and suggestions during the walkthrough meeting.
Based on reviewer comments, the author performs any necessary rework of the work product.
The recorder prepares the minutes of the meeting and sends them to the relevant stakeholders.
The leader monitors the overall walkthrough meeting activities as per the defined company process, conducts the reviews, and generally performs monitoring activities such as tracking commitments against action items.
ALPHA TESTING:
An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development, and minor design changes may still result from such testing. Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. It is often employed for off-the-shelf software as a form of internal acceptance testing, before the s/w goes to beta testing. In alpha testing, the developer looks over the user's shoulder and records errors and usage problems. Tests are conducted in a controlled environment.
BETA TESTING:
A type of user acceptance testing performed by existing users/customers at an external site not otherwise involved with the developers. It determines whether a component/system satisfies the user/customer needs and fits with the business processes.

Beta testing is conducted in a live application environment not controlled by the developer. The customer records all problems encountered and reports them to the developer at regular intervals. It is the final testing before releasing the application for commercial purposes.

DAY 15: TEST AUTOMATION
INTRODUCTION:
Automation testing refers to automating the manual testing process. It is the process of writing a set of instructions that are designed, scripted, tested, and verified by a person, then executed by a machine, to produce results that can be analyzed. The automation testing process kick-starts with the automation feasibility analysis phase. Automation is NOT the preferred approach for testing an application for the first time, or only a single time, or for checking usability, or for ad-hoc testing.
Automation is mainly about:
Creation of test cases in terms of test scripts
Executing test scripts using tools with minimum manual intervention
Usage of tools to control execution, compare results, establish traceability to requirements and perform different kinds of reporting
Automation is best suited for tests:
That need to be run for every build of the application
That use multiple data values for the same actions (see the data-driven sketch after this topic)
That are complex and time consuming
That require a great deal of precision
That are needed on multiple OS/browser combinations
TEST AUTOMATION NEED:
Repetitive manual execution of test scripts becomes monotonous and error-prone due to various reasons, such as:
S/w release cycles becoming shorter while the list of tests in the regression suite becomes larger
Difficulty in doing complete manual regression testing with every release
Incomplete and inefficient testing causing new bugs (defects introduced while fixing a bug) to go undetected before the release
Test automation addresses these pain points by providing consistency, reliability and repeatability of tests:
The same tests can be repeated in exactly the same manner every time the automated scripts are executed
Test quality remains consistent across all test cycles
Eliminates manual testing errors
Shorter test cycles
Detects bugs that can be uncovered only with long runs
Better usage of resources
TEST AUTOMATION MISSION:
Possible missions for test automation:
Find important bugs fast
Measure and document product quality
Verify key features

Keep up with development
Assess s/w stability, concurrency, scalability, etc.
Provide service
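As noted in the introduction, automation pays off most for tests that apply the same actions to many data values. The minimal data-driven sketch below uses pytest; login() is a hypothetical stand-in for the application call, not a real API.

import pytest

def login(username, password):
    # Placeholder for the real application call (e.g. via HTTP or a UI driver).
    return username == "admin" and password == "s3cret"

# The same action is exercised with many data values; adding a new case
# means adding a row of data, not writing a new script.
@pytest.mark.parametrize("username,password,expected", [
    ("admin", "s3cret", True),     # valid credentials
    ("admin", "wrong", False),     # wrong password
    ("", "s3cret", False),         # missing username
    ("admin", "", False),          # missing password
])
def test_login(username, password, expected):
    assert login(username, password) is expected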

TEST AUTOMATION KEY DRIVERS:
Cost & time:
Shorten test execution cycles
Decrease manual effort
More product releases per year
Tests can be scheduled to run out of hours at no extra cost (see the sketch at the end of this section)
Effectiveness:
Reduce human error
Higher defect detection rates
More test coverage
Automated test execution reports
Repeated test execution with multiple sets of test data
TEST AUTOMATION MINIMAL SETUP:
The minimal setup for automated testing includes:
Detailed test cases
Predictable expected results
Dedicated test environment
Dedicated and skilled resources
AUTOMATION FEASIBILITY ANALYSIS INTRODUCTION:
Automation feasibility analysis is the process of determining:
How and what is to be automated
The effort needed for automation
The technical feasibility study involves automation of a sample test case.
AUTOMATION FEASIBILITY ANALYSIS CRITERIA:
Entry criteria: clearly defined requirements; sound understanding of client expectations.
Exit criteria: approved feasibility analysis report.
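For the "out of hours" driver above, automated suites are typically started by a scheduler rather than a person. The sketch below is a thin Python wrapper that such a scheduler (for example a cron entry like "0 2 * * * python run_nightly.py") could invoke to run a pytest regression suite and archive a timestamped report; the tests/regression directory and the reports folder are assumptions for illustration.

import subprocess
from datetime import datetime
from pathlib import Path

REPORTS = Path("reports")          # assumed location for archived run results
REPORTS.mkdir(exist_ok=True)

def run_nightly_suite():
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    report = REPORTS / f"regression_{stamp}.xml"
    # pytest's --junitxml option writes a machine-readable result file that
    # can be archived or fed into a reporting dashboard.
    result = subprocess.run(
        ["pytest", "tests/regression", f"--junitxml={report}"],
        capture_output=True, text=True,
    )
    # Keep the console output alongside the XML report for later inspection.
    (REPORTS / f"regression_{stamp}.log").write_text(result.stdout + result.stderr)
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(run_nightly_suite())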
