SOFTWARE TESTING

Unit 1 INTRODUCTION
1.1 INTRODUCTION TO SOFTWARE TESTING

Testing is an essential activity in the software life cycle, carried out to improve the quality of the software.

Definition: Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and coding.

According to Glen Myers, the testing objectives are:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an undiscovered error.
3. A successful test is one that uncovers an as-yet undiscovered error.

Testing Principles
Every software engineer must apply the following testing principles while performing software testing:
1. Tests should be planned long before testing begins.
2. Testing should begin in the small and progress toward testing in the large.
3. Exhaustive testing is not possible.
4. To be most effective, testing should be conducted by an independent third party.
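To make the idea of a good test case concrete, here is a minimal, hypothetical sketch in Python (the leap-year function and the chosen boundary values are assumptions for illustration, not part of the text): a single test aimed at the boundary where errors are most likely has a high probability of uncovering a whole class of errors.

# Hypothetical example of a "good" test case: it targets the century-year
# boundary, where a naive leap-year implementation is most likely to be wrong.

def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_century_boundaries():
    assert is_leap_year(2000) is True    # divisible by 400 -> leap year
    assert is_leap_year(1900) is False   # divisible by 100 only -> not a leap year
    assert is_leap_year(2024) is True    # ordinary leap year
    assert is_leap_year(2023) is False   # ordinary non-leap year

test_century_boundaries()
print("boundary test passed")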
Attributes of a Good Test
1. A good test has a high probability of finding an error.
2. A good test is not redundant. In a group of tests that have a similar intent and comparable time and resource requirements, the test with the highest likelihood of uncovering a whole class of errors should be used.
3. A good test should be neither too simple nor too complex.
4. Each test should be executed separately.

People Associated with Testing
Software customer: The party or department that contracts for the software to be developed.
Software user: The individual or group that will use the software once it is placed into production.
Software developer: The individual or group that receives the requirements from the software user or assists in writing them, and designs, builds, and maintains the software as needed.
Development tester: The individual or group that performs the test functions within the software development group.
IT management: The individual or group with responsibility for fulfilling the information technology mission. Testing supports fulfilling that mission.
Senior management: The CEO of the organization and other senior executives who are responsible for fulfilling the organization's mission. Information technology is an activity that supports fulfilling that mission.
Auditor: The individual or group responsible for evaluating the effectiveness, efficiency, and adequacy of controls in the information technology area. Testing is considered a control by the audit function.
Project manager: The individual responsible for managing, building, maintaining, and/or implementing the software.

Defects versus Failures
A defect found in the system being tested can be classified as wrong, missing, or extra. The defect may be within the software or in the supporting documentation. While a defect is a flaw in the system, it has no negative impact until it affects the operational system. A defect that causes an error in operation or negatively impacts a user or customer is called a failure. The main concern with defects is that they will turn into failures; it is the failure that damages the organization. Some defects never turn into failures, while a single defect can cause millions of failures.

Why Defects Are Hard to Find
There are at least two reasons defects go undetected:
Not looking: Tests often are not performed because a particular test condition is unknown. Also, some parts of a system go untested because developers assume software changes don't affect them.
Looking but not seeing:

This is like losing your car keys only to discover they were in plain sight the entire time. Sometimes developers become so familiar with their system that they overlook details, which is why independent verification and validation should be used to provide a fresh viewpoint.

Defects found in software systems are the result of the following circumstances:
IT improperly interprets requirements: Information technology (IT) staff misinterpret what the user wants, but correctly implement what they believe is wanted.
Users specify the wrong requirements: The specifications given to IT staff are erroneous.
Requirements are incorrectly recorded: IT staff fail to record the specifications properly.
Design specifications are incorrect: The application system design does not achieve the system requirements, but the design as specified is implemented correctly.
Program specifications are incorrect: The design specifications are incorrectly interpreted, making the program specifications inaccurate; however, the program can be properly coded to achieve those specifications.
Errors in program coding: The program is not coded according to the program specifications.
Data entry errors: Data entry staff incorrectly enter information into the computers.
Testing errors: Tests either falsely detect an error or fail to detect one.
Mistakes in error correction: The implementation team makes errors in implementing solutions.
Corrected condition causes another defect: In the process of correcting a defect, the correction itself introduces additional defects into the application system.

Workbench Concept
In IT organizations, workbenches are more frequently referred to as phases, steps, or tasks. The workbench is a way of illustrating and documenting how a specific activity is to be performed. There are four components in each workbench:
1. Input. The entrance criteria or deliverables needed to complete a task.
2. Procedures to do. The work tasks or processes that will transform the input into the output.
3. Procedures to check. The processes that determine whether the output meets the standards.
4. Output. The exit criteria or deliverables produced from the workbench.
The workbench, and the software development life cycle that is composed of many workbenches, are illustrated in the following figures.

Fig: 1.1 The workbench for testing software

The test process contains multiple workbenches.

Fig: 1.2 The test process contains multiple workbenches

The workbench concept can be used to illustrate one of the steps involved in building systems. The programmer's workbench consists of the following steps:
1. Input products (program specifications) are given to the producer (programmer).
2. Work is performed (e.g., coding and debugging); a procedure is followed; a product or interim deliverable (e.g., a program, module, or unit) is produced.
3. The work is checked to ensure the product meets specifications and standards, and that the do procedure was performed correctly.
4. If the check process finds problems, the product is sent back for rework.
5. If the check process finds no problems, the product is released to the next workbench.
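As an informal sketch of this flow (the placeholder procedures below are assumptions for illustration, not prescribed by the text), the do/check/rework cycle of a workbench might be expressed as:

# Sketch of the workbench cycle: do the work, check it, rework on failure,
# release on success. The placeholder procedures are assumptions.
from typing import Callable, Tuple

def run_workbench(work_input: str,
                  do_procedure: Callable[[str], str],
                  check_procedure: Callable[[str], bool],
                  max_rework: int = 3) -> Tuple[str, bool]:
    product = do_procedure(work_input)            # procedures to "do"
    attempts = 0
    while not check_procedure(product):           # procedures to "check"
        if attempts == max_rework:
            return product, False                 # rework budget exhausted
        product = do_procedure(work_input)        # send the product back for rework
        attempts += 1
    return product, True                          # release to the next workbench

# Example: a toy "programmer's workbench" with trivial placeholder procedures.
code, released = run_workbench(
    "program specification",
    do_procedure=lambda spec: f"code implementing: {spec}",
    check_procedure=lambda product: "code" in product)
print("released to next workbench:", released)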
1.2 MULTIPLE ROLES OF TESTING

The role of testing changes as the type of process used to build the product changes. Build processes can be divided into three categories.

Manufacturing
Manufacturing is a process that produces many similar products. In information technology, this is most closely associated with data center operations. Testing is a binary activity that validates the presence or absence of product attributes. For example, in computer operations, testing would validate that the right data files have been mounted.

Job Shop
This building process is most commonly associated with creating software. Testing of job shop products normally involves verifying that the requirements are correct and then validating that the end product meets the true needs of the customer/user.
* Testing in a job shop environment is a value-added activity. The check activity is used in conjunction with the do activity to assure that a high-quality product is produced.
* The role of the user also changes in a job shop environment. The skill sets of the user and the instructions provided by the user impact the effectiveness and efficiency of the software system. For example, the system depends on the user entering correct data and properly interpreting the output for decision making. Thus, testing in a job shop environment also validates that the user can, in fact, use the system properly.

Professional Process
In this process, the products created are unique and may not resemble any other product. An example of a professional product would be working with customers to demonstrate how computer technology can assist them in solving their business problems. With a professional product, the customer validates whether the product is satisfactory. A group of senior analysts may evaluate the recommended solution, or independent consultants can be brought in to perform the evaluation. In any of these processes, any variation noted by the tester is a defect.
1.3 STRUCTURAL APPROACH TO SOFTWARE TESTING

Traditionally, the SDLC places testing immediately prior to installation and maintenance, and testing after coding is the only verification technique used to determine the adequacy of the system. An error discovered in the later parts of the SDLC must be paid for four different times:
1. The first cost is developing the program erroneously, which may include writing the wrong specifications, coding the system wrong, and documenting the system improperly.
2. Second, the system must be tested to detect the error.
3. Third, the wrong specifications and coding must be removed and the proper specifications, coding, and documentation added.
4. Fourth, the system must be retested to determine whether the problem(s) have been corrected.
The following activities should be performed at each phase:
Analyze the software documentation for internal testability and adequacy.
Generate test sets based on the software documentation at this phase.
The traditional software development life cycle is illustrated below.

Fig: 1.3 Traditional software development life cycle

Studies have shown that the majority of system errors occur in the design phase; as the following figure shows, analysis and design errors are the most numerous.

Fig: 1.4 Analysis and design errors are the most numerous

In addition, the following should be performed during the design and program phases:
Determine that the software documentation is consistent with the software documentation produced during previous phases.
Refine or redefine test sets generated earlier.
Table: 1.1 Life Cycle Verification Activities

The recommended test process involves testing in every phase of the life cycle:
During the requirements phase, the emphasis is on validation, to determine that the defined requirements meet the needs of the organization.
During the design and program phases, the emphasis is on verification, to ensure that the design and programs accomplish the defined requirements.
During the test and installation phases, the emphasis is on inspection, to determine that the implemented system meets the system specification.
During the maintenance phase, the system should be retested to determine whether the changes work as planned and to ensure that the unchanged portion continues to work correctly.

Requirements
The verification activities performed during the requirements phase of software development are extremely significant. The adequacy of the requirements must be thoroughly analyzed. Developing scenarios of expected system use may help to determine the test data and anticipated results; these tests will form the core of the final test set. Vague or untestable requirements will leave the validity of the delivered product in doubt, and requirements deferred to later phases of development can be very costly.
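For illustration only, here is how a requirements-phase scenario might be turned into executable test conditions (the discount requirement, the order_total function, and the use of the pytest framework are assumptions, not drawn from the text):

# Hypothetical requirement: orders of more than 100 units receive a 10% discount.
# Scenarios written during the requirements phase become the core of the test set.
import pytest

def order_total(quantity: int, unit_price: float) -> float:
    total = quantity * unit_price
    if quantity > 100:          # the assumed bulk-discount requirement
        total *= 0.9
    return round(total, 2)

@pytest.mark.parametrize("quantity, unit_price, expected", [
    (100, 2.00, 200.00),   # boundary: exactly 100 units, no discount
    (101, 2.00, 181.80),   # boundary: just over 100 units, discount applies
    (1,   2.00, 2.00),     # smallest valid order
])
def test_discount_requirement(quantity, unit_price, expected):
    assert order_total(quantity, unit_price) == expected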
Design
During the design phase, the general testing strategy is formulated and a test plan is produced. If needed, an independent test team is organized. A test schedule with observable milestones should be determined, and the framework for test documentation should be established at the same time. During the design phase, validation support tools should be acquired or developed, and the test procedures themselves should be produced. Test data to exercise the functions introduced during the design process, as well as test cases based upon the structure of the system, should be generated. Simulation can be used to verify properties of the system structures and subsystem interaction; design walkthroughs should be used by the developers to verify the flow and logical structure of the system, while design inspections should be performed by the test team.

Areas of concern include missing cases, faulty logic, module interface mismatches, data structure inconsistencies, erroneous I/O assumptions, and user interface inadequacies.

Program
Actual testing occurs during the program stage of development, and many testing tools and techniques exist for this stage. Code walkthroughs and code inspections are effective manual techniques. Static analysis techniques detect errors by analyzing program characteristics such as data flow and language construct usage; for programs of significant size, automated tools are required to perform this analysis. Dynamic analysis, performed as the code actually executes, is used to determine test coverage through various instrumentation techniques. Formal verification or proof techniques are used to provide further quality assurance.
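A highly simplified sketch of what dynamic analysis does (using Python's standard sys.settrace hook; the classify function is a made-up unit): the code is instrumented, the tests are run, and the recorded lines reveal which parts of the unit the tests never exercised.

# Toy dynamic analysis: record which lines of a unit execute while tests run.
# Real coverage tools (for example, coverage.py) do this far more robustly.
import sys

executed_lines = set()

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "classify":
        executed_lines.add(frame.f_lineno)
    return tracer

def classify(x: int) -> str:
    if x < 0:
        return "negative"
    return "non-negative"

sys.settrace(tracer)
classify(5)                      # this "test" exercises only one branch
sys.settrace(None)

print("lines executed in classify():", sorted(executed_lines))
# The line returning "negative" never appears, exposing the untested branch.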

Test
During the test process, careful control and management of test information is critical. Test sets, test results, and test reports should be catalogued and stored in a database. For all but very small systems, automated tools are required to do an adequate job; the bookkeeping chores alone become too large to be handled manually. A test driver, test data generation aids, test coverage tools, test results management aids, and report generators are usually required.

Installation
The process of placing tested programs into production is an important phase normally executed within a narrow time span. Testing during this phase must ensure that the correct versions of the programs are placed into production, that data, if changed or added, is correct, and that all involved parties know their new duties and can perform them correctly.

Maintenance
More than 50 percent of a software system's life cycle costs are spent on maintenance. As the system is used, it is modified either to correct errors or to augment the original system. After each modification, the system must be retested; such retesting activity is termed regression testing. The goal of regression testing is to minimize the cost of system revalidation. Usually only those portions of the system impacted by the modifications are retested. However, changes at any level may necessitate retesting, re-verifying, and updating documentation at all levels below it. For example, a design change requires design re-verification, unit retesting, and subsystem retesting.
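A small sketch of the regression-testing idea (the test-to-module coverage map and all names are assumptions for illustration): when a module changes, only the tests whose recorded coverage intersects the change are rerun.

# Hypothetical regression test selection: rerun only the tests covering the
# modules touched by a modification.
TEST_COVERAGE = {
    "test_payroll_tax":   {"payroll", "tax_tables"},
    "test_payroll_print": {"payroll", "reports"},
    "test_hr_lookup":     {"hr_directory"},
}

def select_regression_tests(changed_modules):
    return sorted(test for test, modules in TEST_COVERAGE.items()
                  if modules & changed_modules)

print(select_regression_tests({"tax_tables"}))
# ['test_payroll_tax'] -- only the impacted portion of the system is retested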
1.4 TEST STRATEGY

If IT management selects a structured approach to testing software, they need a strategy to implement it. This strategy explains what to do. Testing tactics explain how to implement the strategy.

The objective of testing is to reduce the risks inherent in computer systems. The strategy must address those risks and present a process that can reduce them. The testing strategy has two components:
Test factors. The risks or issues that need to be addressed as part of the test strategy. The strategy will select those factors that need to be addressed in the testing of a specific application system. The development team will need to select and rank the test factors for the specific software system being developed; once the factors are selected and ranked, the strategy for testing is partially defined.
Test phase. The phase of the development life cycle in which testing will occur. The test phases will vary based on the testing methodology used; for example, the test phases in a traditional SDLC methodology will be much different from the phases in a rapid application development methodology.
1.5 METHODS FOR DEVELOPING TEST STRATEGY

The following figure illustrates a generic strategy; it should be customized for any specific software system. The following four steps are performed to develop a customized test strategy.
1. Select and rank test factors. The customers/key users of the system, in conjunction with the test team, should select and rank the test factors. In most instances, only three to seven factors will be needed. Statistically, if the key factors are selected and ranked, the other factors will normally be addressed in a manner consistent with supporting the key factors. The selected factors should be listed in the matrix in sequence from the most significant to the least significant.
2. Identify the system development phases. The project development team should identify the phases of their development process. This is normally obtained from the system development methodology. These phases should be recorded in the test phase component of the matrix.
Fig: 1.5 Test factor/test phase matrix

3. Identify the business risks associated with the system under development. The developers, key users, customers, and test personnel should brainstorm the risks associated with the software system. Most organizations have a brainstorming technique, and it is appropriate for individuals to use the technique in which they have had training and prior experience. The risks should then be ranked as high, medium, or low.
4. Place risks in the matrix. The risk team should determine the test phase in which each risk needs to be addressed by the test team, and the test factor to which the risk is associated. Take the example of a payroll system: if there were a concern about compliance with federal and state payroll laws, the risk would be the penalties associated with noncompliance. Assuming compliance was picked as one of the significant test factors, the risk would be most prevalent during the requirements phase. Thus, in the matrix, at the intersection between the compliance test factor and the requirements phase, the risk of penalties associated with noncompliance with federal and state payroll laws should be inserted.
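Step 4 can be pictured as filling in a two-dimensional table. The following sketch is only illustrative (the dictionary layout is an assumption); it places the payroll compliance risk from the example above at the intersection of the compliance factor and the requirements phase.

# Sketch of a test factor / test phase matrix, filled in per the payroll example.
ranked_factors = ["compliance", "correctness", "continuity of processing"]     # step 1
phases = ["requirements", "design", "program", "test", "install", "maintain"]  # step 2

matrix = {(factor, phase): [] for factor in ranked_factors for phase in phases}

# steps 3 and 4: record the identified business risk in the appropriate cell
matrix[("compliance", "requirements")].append(
    "Penalties for noncompliance with federal and state payroll laws")

for (factor, phase), risks in matrix.items():
    if risks:
        print(f"{factor} / {phase}: {risks[0]}")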
The following list briefly describes the test factors:
Correctness. Assurance that the data entered, processed, and output by the application system is accurate and complete. Accuracy and completeness are achieved through controls over transactions and data elements, which should commence when a transaction is originated and conclude when the transaction data has been used for its intended purpose.
File integrity. Assurance that the data entered into the application system will be returned unaltered. The file integrity procedures ensure that the right file is used and that the data on the file, and the sequence in which the data is stored and retrieved, are correct.
Authorization. Assurance that data is processed in accordance with the intents of management. In an application system, there is both general and specific authorization for the processing of transactions. General authorization governs the authority to conduct different types of business, whereas specific authorization provides the authority to perform a specific act.
Audit trail. The capability to substantiate the processing that has occurred. The processing of data can be supported through the retention of sufficient evidential matter to substantiate the accuracy, completeness, timeliness, and authorization of data. The process of saving this supporting evidential matter is frequently called an audit trail.
Continuity of processing. The ability to sustain processing in the event problems occur. Continuity of processing ensures that the necessary procedures and backup information are available to recover operations should integrity be lost. It includes the timeliness of recovery operations and the ability to maintain processing periods when the computer is inoperable.
Service levels. Assurance that the desired results will be available within a time frame acceptable to the user. To achieve the desired service level, it is necessary to match user requirements with available resources. Resources include input/output capabilities, communication facilities, processing, and systems software capabilities.
Access control. Assurance that the application system resources will be protected against accidental and intentional modification, destruction, misuse, and disclosure.
The security procedures are the totality of the steps taken to ensure the integrity of application data and programs against unintentional and unauthorized acts.
Compliance. Assurance that the system is designed in accordance with organizational strategy, policies, procedures, and standards. These requirements need to be identified, implemented, and maintained in conjunction with other application requirements.
Reliability. Assurance that the application will perform its intended function with the required precision over an extended period of time. Correctness of processing deals with the ability of the system to process valid transactions correctly, while reliability relates to the system being able to perform correctly over an extended period of time when placed into production.
Ease of use. The extent of effort required to learn, operate, prepare input for, and interpret output from the system. This test factor deals with the usability of the system for the people interfacing with the application system.
Maintainability. The effort required to locate and fix an error in an operational system. Error is used in the broad context to mean both a defect in the system and a misinterpretation of user requirements.
Portability. The effort required to transfer a program from one hardware configuration and/or software system environment to another. The effort includes data conversion, program changes, operating system changes, and documentation changes.
Coupling. The effort required to interconnect components within an application system and with all other application systems in their processing environment.
Performance. The amount of computing resources and code a system requires to perform its stated functions. Performance includes both the manual and automated segments involved in fulfilling system functions.
Ease of operation. The amount of effort required to integrate the system into the operating environment and then to operate the application system. The procedures can be both manual and automated.
Table: 1.2 Test factors
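To show how two of these test factors might translate into concrete checks, here is a hypothetical sketch (the post_transaction function, its rules, and the use of pytest are assumptions for illustration, not part of the standard):

# Hypothetical checks for the correctness and authorization test factors.
import pytest

AUTHORIZED_ROLES = {"clerk", "supervisor"}

def post_transaction(amount: float, role: str) -> dict:
    if role not in AUTHORIZED_ROLES:      # authorization: intent of management
        raise PermissionError("role is not authorized to post transactions")
    if amount <= 0:                       # correctness: accurate, complete data
        raise ValueError("amount must be positive")
    return {"amount": round(amount, 2), "posted_by": role}

def test_correctness_rejects_invalid_amount():
    with pytest.raises(ValueError):
        post_transaction(-5.00, "clerk")

def test_authorization_rejects_unknown_role():
    with pytest.raises(PermissionError):
        post_transaction(10.00, "visitor")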


1.6 FUNCTIONAL AND STRUCTURAL TESTING

Functional testing is sometimes called black box testing because no knowledge of the system's internal logic is used to develop test cases. For example, if a certain function key should produce a specific result when pressed, a functional test validates this expectation by pressing the function key and observing the result. When conducting functional tests, you'll use validation techniques almost exclusively.
Structural testing is sometimes called white box testing because knowledge of the system's internal logic is used to develop hypothetical test cases. Structural tests predominantly use verification techniques. If a software development team creates a block of code that will allow a system to process information in a certain way, a test team would verify this structurally by reading the code and, given the system's structure, seeing if the code could work reasonably. If they felt it could, they would plug the code into the system and run an application to structurally validate the code.
Each method has its pros and cons, as follows (see the sketch after this list):

Functional testing
Advantages:
1. Simulates actual system usage.
2. Makes no assumptions about system structure.
Disadvantages:
1. Has the potential to miss logical errors in the software.
2. Offers the possibility of redundant testing.

Structural testing
Advantages:
1. Enables you to test the software's logic.
2. Enables you to test structural attributes, such as the efficiency of the code.
Disadvantages:
1. Does not ensure that user requirements have been met.
2. May not mimic real-world situations.
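The sketch below contrasts the two approaches on a single, made-up shipping-fee function (the function, its fee rules, and the use of pytest are assumptions): the black box test is written from the stated requirement alone, while the white box test is written after reading the code, so that every branch, including the error path, is exercised.

# Hypothetical unit used to contrast black box and white box test cases.
import pytest

def shipping_fee(weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1.0:
        return 5.0                              # flat rate up to 1 kg
    return 5.0 + (weight_kg - 1.0) * 2.0        # 2.00 per additional kg

# Functional (black box): derived only from the stated requirement.
def test_black_box_flat_rate():
    assert shipping_fee(0.5) == 5.0

# Structural (white box): written after reading the code, to reach the
# per-kilogram branch and the error path that the functional test misses.
def test_white_box_covers_remaining_branches():
    assert shipping_fee(3.0) == 9.0
    with pytest.raises(ValueError):
        shipping_fee(0.0)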

Structural and Functional Tests Using Verification and Validation Techniques
Testers use verification techniques to confirm the reasonableness of a system by reviewing its structure and logic. Validation techniques, in contrast, apply to physical testing, to determine whether expected results occur.
Using verification to conduct structural tests would include:
Feasibility reviews. Tests for this structural element verify the logic flow of a unit of software.
Requirements reviews. These reviews verify software attributes; for example, in any particular system, the structural limits of how much load (transactions or number of concurrent users) the system can handle.
Functional tests are virtually all validation tests, and inspect how the system performs. Examples include:
Unit testing. These tests verify that the system functions properly; for example, pressing a function key to complete an action.
Integrated testing. The system runs tasks that involve more than one application or database, to verify that it performed the tasks accurately.
System testing. These tests simulate operation of the entire system and verify that it runs correctly.
User acceptance. Once the organization's staff, customers, or vendors begin to interact with the system, they verify that it functions properly.
The following table shows these relationships, listing each of the six test activities, who performs them, and whether the activity is an example of verification or validation. A code sketch of the unit and integrated test levels follows the table.

Table: 1.3 Functional testing
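As referenced above, here is a brief sketch of the unit and integrated test levels (the net_pay and store_payslip functions and the in-memory SQLite database are assumptions for illustration):

# Hypothetical unit test versus integrated test.
import sqlite3

def net_pay(gross: float, tax_rate: float) -> float:
    return round(gross * (1 - tax_rate), 2)

def store_payslip(conn: sqlite3.Connection, employee: str, amount: float) -> None:
    conn.execute("INSERT INTO payslips(employee, amount) VALUES (?, ?)",
                 (employee, amount))

# Unit test: a single function, exercised in isolation.
def test_unit_net_pay():
    assert net_pay(1000.0, 0.2) == 800.0

# Integrated test: the calculation and the database layer working together.
def test_integrated_payslip_stored():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE payslips(employee TEXT, amount REAL)")
    store_payslip(conn, "alice", net_pay(1000.0, 0.2))
    (amount,) = conn.execute(
        "SELECT amount FROM payslips WHERE employee = 'alice'").fetchone()
    assert amount == 800.0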


1.7 TESTING METHODOLOGIES

The following are eight considerations in developing a testing methodology:
1. Determine the test strategy objectives.
2. Determine the type of development project.
3. Determine the type of software system.
4. Determine the project scope.
5. Identify the software risks.
6. Determine when testing should occur.
7. Define the system test plan standard.
8. Define the unit test plan standard.

1. Determining the Test Strategy Objectives
The test strategy is normally developed by a team very familiar with the business risks associated with the software; tactics are developed by the test team. In this step, the test team should ask the following questions:
What is the ranking of the test factors?
Which of the high-level risks are the most significant?
What damage can be done to the business if the software fails to perform correctly?
What damage can be done to the business if the software is not completed on time?
Which individuals are most capable of understanding the impact of the identified business risks?
2. Determining the Type of Development Project
The type of development project refers to the environment/methodology in which the software will be developed. As the environment changes, so does the testing risk. For example, the risks associated with a traditional development effort differ from the risks associated with off-the-shelf purchased software. Different testing approaches must be used for different types of projects, just as different development approaches are used.

Table: 1.4 Test tactics for different project types

3. Determining the Type of Software System
The type of software system refers to the processing that will be performed by that system. This step identifies 16 different software system types:
Batch (general): Can be run as a normal batch job and requires no unusual hardware or input/output actions (for example, a payroll program or a wind tunnel data analysis program).
Event control: Performs real-time data processing as a result of external events (for example, a program that processes telemetry data).
Process control: Receives data from an external source and issues commands to that source to control its actions based on the received data.
Procedure control: Controls other software (for example, an operating system that controls the execution of time-shared and batch computer programs).
Advanced mathematical models: Resembles simulation and business strategy software, but has the additional complexity of heavy use of mathematics.
Message processing: Handles input and output messages, processing the text or information contained therein.
Diagnostic software: Detects and isolates hardware errors in the computer where it resides or in other hardware that can communicate with that computer.
Sensor and signal processing: Similar to message processing, but requires greater processing to analyze and transform the input into a usable data processing format.
Simulation: Simulates an environment, mission situation, or other hardware, and provides inputs from these to enable a more realistic evaluation of a computer program or hardware component.
Database management: Manages the storage and access of (typically large) groups of data. Such software can also prepare reports in user-defined formats based on the contents of the database.
Data acquisition: Receives information in real time and stores it in some form suitable for later processing (for example, software that receives data from a space probe and files it for later analysis).
Data presentation: Formats and transforms data, as necessary, for convenient and understandable display to humans.
Decision and planning aids: Uses artificial intelligence techniques to provide an expert system to evaluate data and provide additional information and considerations for decision and policy makers.
Pattern and image processing: Generates and processes computer images. Such software may analyze terrain data and generate images based on stored data.
Computer system software: Provides services to operational computer programs.
Software development tools: Provides services to aid in the development of software (for example, compilers, assemblers, and static and dynamic analyzers).

4. Determining the Project Scope
The project scope refers to the totality of activities to be incorporated into the software system being tested, that is, the range of system requirements/specifications to be understood. The scope of new system development is different from the scope of changes to an existing system. Consider the following issues:
New systems development:
What business processes are included in the software?
Which business processes will be affected?
Which business areas will be affected?
What existing systems will interface with this system?
Which existing systems will be affected?
Changes to existing systems:
Are the changes corrective, or is new functionality being added?
Is the change caused by new standards?
What other systems are affected?
Is regression testing needed?

5. Identifying the Software Risks
Strategic risks are the high-level business risks faced by the software system; software system risks are subsets. The purpose of decomposing the strategic risks into tactical risks is to assist in creating the test scenarios that will address those risks. Tactical risks can be categorized as follows:
Structural risks
Technical risks
Size risks

6. Determining When Testing Should Occur
Testing can and should occur throughout the phases of a project. Examples of test activities to be performed during these phases are:
A. Requirements phase activities
Determine test strategy
Determine adequacy of requirements
Generate functional test conditions
B. Design phase activities
Determine consistency of design with requirements
Determine adequacy of design
Generate structural and functional test conditions
C. Program phase activities
Determine consistency with design
Determine adequacy of implementation
Generate structural and functional test conditions for programs/units
D. Test phase activities
Determine adequacy of the test plan
Test application system
E. Operations phase activities
Place tested system into production
F. Maintenance phase activities
Modify and retest

7. Defining the System Test Plan Standard
The system test plan provides background information on the software being tested, on the test objectives and risks, and on the business functions to be tested and the specific tests to be performed. The elements of the standard are shown in the following table; a brief sketch of how such a plan might be recorded follows it.

Table: 1.5 System Test Plan Standard
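Purely as an illustration of the elements named in the standard above, a system test plan could be captured in a simple record like the following (the field names and example values are assumptions, not a prescribed format):

# Hypothetical representation of a system test plan's main elements.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SystemTestPlan:
    software_under_test: str
    background: str
    test_objectives: List[str]
    risks: List[str]
    business_functions: List[str]
    specific_tests: List[str] = field(default_factory=list)

plan = SystemTestPlan(
    software_under_test="Payroll system",
    background="Replaces the legacy payroll application",
    test_objectives=["Verify compliance with federal and state payroll laws"],
    risks=["Penalties associated with noncompliance"],
    business_functions=["Calculate net pay", "Produce payslips"],
    specific_tests=["test_discount_requirement", "test_integrated_payslip_stored"],
)
print(plan.software_under_test, "-", len(plan.specific_tests), "tests planned")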

8. Defining the Unit Test Plan Standard


During internal design, the system is divided into the components or units that perform the detailed processing. Each of these units should have its own test plan. The plans can be as simple or as complex as the organization requires based on its quality expectations.
Table: 1.6 Unit Test Plan Standard

The importance of a unit test plan is that it determines when unit testing is complete. Economically, it is a bad idea to submit units that contain defects to higher levels of testing. Thus, extra effort spent in developing unit test plans, testing units, and ensuring that units are defect-free prior to integration testing is well spent.
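A minimal sketch of using a unit test plan as a completion gate (the plan layout and case names are assumptions): unit testing is considered complete, and the unit ready for integration testing, only when every planned test case has been executed and passed.

# Hypothetical unit test plan used to decide when unit testing is complete.
unit_test_plan = {
    "unit": "net_pay",
    "planned_cases": {"typical_pay", "zero_tax_rate", "boundary_tax_rate"},
}

executed_results = {
    "typical_pay": "pass",
    "zero_tax_rate": "pass",
    "boundary_tax_rate": "pass",
}

def unit_testing_complete(plan, results):
    return all(results.get(case) == "pass" for case in plan["planned_cases"])

print("release unit to integration testing:",
      unit_testing_complete(unit_test_plan, executed_results))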
