
Master Test Plan

for Software Testing Course 8102020, Spring 2005
Tampere University of Technology
Version 0.7

Antti Kervinen (ask@cs.tut.fi)
11th February 2005

Contents
References
Glossary

1 Introduction
  1.1 Overview
  1.2 Tasks and deadlines
  1.3 Structure of this document

I Test plan for system testing

2 Test items
3 Features to be tested
4 Features not to be tested
5 Approach
6 Item pass/fail criteria
7 Test deliverables
  7.1 Test design and test case specifications (the first task)
8 Testing tasks
  8.1 First task
9 Environmental needs

II Test plan for unit testing

10 Test items
11 Features to be tested
12 Features not to be tested
13 Approach
  13.1 Class hierarchy
  13.2 Stubs
  13.3 Driver
14 Test deliverables
  14.1 Test design (the second task)
15 Testing tasks
  15.1 Unit test design (the second task)
  15.2 Unit testing (the third task)
  15.3 Setting up the unit testing environment
  15.4 Setting up CTC++
  15.5 Using cppunit and CTC++
16 Environmental needs

References
[CPPUNIT] Feathers, M., Lepilleur, B.: CppUnit Cookbook. http://cppunit.sourceforge.net/doc/lastest/cppunit_cookbook.html

[HTML] Korpela, J.: Documents about WWW written or recommended by Jukka Korpela. http://www.cs.tut.fi/jkorpela/www.html

[IEEE829] IEEE Standard for Software Test Documentation. IEEE Std 829-1998. September 1998. Available from TUT addresses at http://www.ieeexplore.ieee.org/xpl/tocresult.jsp?isNumber=16010

[Mozilla] Mozilla web pages. http://www.mozilla.org

[Mye04] Myers, G. J., Sandler, C., Badgett, T., Thomas, T. M.: The Art of Software Testing, 2nd edition. John Wiley & Sons, 2004.

[RFC2396] Uniform Resource Identifiers (URI): Generic Syntax. http://www.ietf.org/rfc/rfc2396.txt

[RFC2732] Format for Literal IPv6 Addresses in URLs. http://www.ietf.org/rfc/rfc2732.txt

Glossary
Mozilla   Web browser developed from the source code released by Netscape
RFC       Request for Comments, a series of Internet informational documents and standards
URL       Uniform Resource Locator, a string that identifies a resource by its location

1 Introduction
This document describes the course project for the software testing course at Tampere University of Technology, Spring 2005.

1.1 Overview
In this project the URL parser of the Mozilla [Mozilla] web browser will be tested. The tests are designed and executed according to the following V model, in which tests are designed top-down and executed bottom-up:

System test design               System test execution
      \                               /
   Unit test design ---- Unit test execution

In the system tests the parser will be tested by running the real Mozilla, giving it URLs to parse and examining the results of the parser. In the unit testing a standard URL parser class is tested separately by calling its methods from test drivers written by the students. The project must be done in pairs.

1.2 Tasks and deadlines


First you should find a partner and register your team in the Kuha system. You can find a link to Kuha on the course project page: http://www.cs.tut.fi/testaus/harjoitustyo/

The project is divided into four tasks:

1. System test design report, due 9.2.2005 12:00.
2. Unit test design report, due 9.3.2005 12:00.
3. Unit test report, due 6.4.2005 12:00.
4. System test report, due 27.4.2005 12:00.

In Kuha you can book times for preliminary reviews of your reports. The reviews are voluntary. The final version of a preliminarily reviewed and accepted report will give you 1-4 points. If not reviewed, you can get 0-4 points. Only the first two reports can be preliminarily reviewed.

1.3 Structure of this document


This document contains two parts: a test plan for system testing and a test plan for unit testing. Both parts follow the test plan template in the IEEE standard for software test documentation [IEEE829]. The first part includes the requirements for tasks 1 and 4, the second part for tasks 2 and 3.

Part I

Test plan for system testing


2 Test items
The URL parser of Mozilla 1.7.2 will be tested. It is the very same parser that is used in Mozilla 1.7.3 (the latest stable Mozilla release at the moment) and in Firefox 1.0. The syntax of URLs is specified in [RFC2396]. The syntax is extended in [RFC2732] to contain IPv6 addresses. Both documents should be used as the specification in test case design.

3 Features to be tested
Test how Mozilla parses valid URLs and what it does with invalid ones. The objective is to find correct URLs that are parsed incorrectly and incorrect URLs that are considered valid. Finding a URL that causes a crash or some other unexpected behaviour is, of course, even more desirable, and you may even get rewarded for that (see http://www.mozilla.org/security/bug-bounty.html). We limit the testing to URLs in the http scheme, that is:

- absolute URLs that begin with http:, for example http://www.cs.tut.fi/testaus/
- relative URLs in the cases where the base URL is in the http scheme, for example ../../images/foobar.jpg

4 Features not to be tested


URLs in schemes other than http (such as ftp://ftp.funet.fi/README, mailto:testaus@cs.tut.fi, etc.) will not be tested at this point. Functionality other than the URL parser will not be tested.

5 Approach
The source code of Mozilla (in /share/tmp/testaus) has been altered to support system testing in this project. Every time a URL beginning with http is parsed, it is printed to standard output. Also, the result of the parsing (valid URL or invalid URL) is printed, as well as some details if the URL was considered valid. In the details the given URL is separated into its components. It should be verified that the separation is correct.

In the system testing phase you should run Mozilla so that it parses the URLs in your test cases. All results of the parsing should be recorded. You may decide how you give the URLs in your test cases to Mozilla. There are several ways to do this. One possibility is to write an HTML [HTML] file containing all the URLs to be tested. Here is one example of the file contents. (Warning: this example contains far too few test cases to pass the course.)
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<base href="http://u:p@foo.foobar.fi/dir1/dir2/file1.ext">
<title> </title>
</head>
<body>
<p>Valid relative paths<br>
<a href="/foo.txt">starts with /</a><br>
<a href="//servername/foo.txt">starts with //</a><br>
<a href="*foo.txt">starts with *</a><br>
</p>
<p>Invalid relative paths<br>
</p>
<p>Valid absolute paths<br>
</p>
<p>Invalid absolute paths<br>
</p>
</body>
</html>

Given that you have the file that contains your test cases, you can then run (in bash):

dist/bin/mozilla file:path/to/file 2>/dev/null | tee test.log

The tee program reads standard input and copies it to the given files and to standard output. 2>/dev/null throws away the stuff Mozilla writes to standard error. Whenever your mouse cursor is over a link, the corresponding URL is parsed. This is how you can check the parsing results URL by URL.

This is just one possibility. You are allowed to split your test cases into many HTML files or feed the URLs one by one into the location text box. If it makes your life easier, you may write a program that, for example, reads a list of URLs, generates an HTML file that contains the URLs, runs Mozilla and checks that the results of the parsing are what you expected. After all, what you have to return with your report is the output of Mozilla when it parsed your URLs, no matter how you get there. (The other contents of the report are listed in Section 7.)
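For instance, a tiny helper program could generate the HTML file from a URL list. The sketch below is only an illustration under assumed conventions (URLs one per line on standard input, the same base URL as in the example file above); it is not part of the course material.

// genhtml.cpp - illustrative sketch only: reads URLs one per line from
// standard input and writes an HTML page with a link for each URL.
#include <iostream>
#include <string>

int main()
{
    std::cout << "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2//EN\">\n"
                 "<html><head>\n"
                 "<base href=\"http://u:p@foo.foobar.fi/dir1/dir2/file1.ext\">\n"
                 "<title>Generated test cases</title></head><body>\n";

    std::string url;
    while (std::getline(std::cin, url)) {
        if (url.empty())
            continue;                // skip blank lines
        // Note: URLs containing characters such as " or & would need
        // HTML escaping, which this sketch omits.
        std::cout << "<a href=\"" << url << "\">" << url << "</a><br>\n";
    }
    std::cout << "</body></html>\n";
}

It could be compiled and used, for example, as g++ -o genhtml genhtml.cpp and ./genhtml < urls.txt > testcases.html; checking the parse results from test.log would still remain a manual or scripted step.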

6 Item pass/fail criteria


The system test fails if and only if any (valid or invalid) URL causes Mozilla to crash or stop responding, or if a valid URL is incorrectly parsed.

7 Test deliverables
7.1 Test design and test case specifications (the first task)

You should return a test design document by 9.2.2005 12:00. The document should contain the following information. An example of the wanted information is given under each item.

1. Features to be tested

Identify the test items and describe the features and combinations of features that are going to be tested. For example:

- numerical IPv4 addresses
- user info (names and passwords)
- ...

2. Approach refinement

Tell how you are going to run the tests and what you will need to do that. For example, list the files you create or modify for test runs and the files that will be created during the test runs. Tell what the files are for and what they contain. List the commands that will be executed; what they do and why they need to be executed. What should the tester do during test runs? How is the data going to be gathered and analysed? Summarise common attributes of all test cases.

3. Test identification

Give an identifier and a brief description of each test case: what are the features that are tested by this test case. Note that the same test case can be used to test many features. For example, you can present the identified features and test cases in a table like this:

            Username  Password  Hostname
Empty       TC1       TC1
Very long   TC2       TC2       TC2
Illegal

In the table it is easy to see that an empty username and an empty password are tested in test case TC1. It also shows that there is no test case that would test a URL with a somehow illegal hostname. (Your table should have more rows and columns.) However, do not forget features like the handling of too many ../../ sequences in relative URLs, which might not fit in the table.

4. Feature pass/fail criteria

Specify the criteria to be used to determine whether a feature has passed or failed.

5. Prioritisation

Imagine that, when a new version of the browser is finally given to the testers, there will be no time to execute all the test cases you have created. To prepare for that situation, divide your test cases into three classes based on their importance:

- Critical. At least these test cases should be executed and passed before the new browser can be released.
- Important. These test cases should be run next, if there is still time left.
- Low.

Explain why you divided the test cases as you did. (A small-scale risk analysis is wanted.)

6. Test cases

At least the following information on every test case should be given:

- Test case identifier
- Priority
- Input specifications
- Output specifications

In the following example the first test case, ok-allfields, includes all the fields that the modified Mozilla parser outputs when it is given a valid URL. In the output specifications of your test cases you need to specify only the values of the fields that should be checked in the test case. The output specification of test case err-IPv4 shows what the parser outputs in case of an invalid URL.

Test case id:   ok-allfields
Priority:       Critical
Input:          http://usr:pwd@tut.fi/d1/f.txt;p1?q#r
Output:         result     valid URL
                port       -1
                scheme     http
                authority  usr:pwd@tut.fi
                username   usr
                password   pwd
                hostname   tut.fi
                path       /d1/f.txt;p1?q#r
                filepath   /d1/f.txt
                directory  /d1/
                basename   f
                extension  txt
                param      p1
                query      q
                ref        r

Test case id:   err-IPv4
Priority:       Low
Input:          http://127.0.0../999/foo.txt
Output:         result invalid URL
Special notes:  Two dots in a row are not allowed in an IP address or in a host name.

8 Testing tasks
8.1 First task
1. Familiarise yourself with the URL specifications [RFC2396] and [RFC2732].
2. Based on the RFC documents, identify the features to be tested (empty username, extremely long password, invalid IPv6 address, ...).
3. Design test cases that test the features you identified. Specify the inputs (URLs) and the expected outputs (the results of the parsing).
4. Decide how you will run the test cases. (Approach.)
5. Prioritise the test cases. (Risk analysis.)
6. Write the system test design document, outlined in Section 7.1. Finnish students should use the given template for test plans. Foreign students should use the cover page of the template and the outline in Section 7.1. All students are also advised to check Sections 5 and 6 in [IEEE829].
7. Return the document by 9.2.2005 12:00.

9 Environmental needs
The source code of Mozilla 1.7.2 or 1.7.3 is needed. (Their URL parsers are exactly the same.) A commercial tool called CTC++ is used to measure the code coverage. The tool is installed in Lintula, and it currently works on Solaris only.


Part II

Test plan for unit testing


10 Test items
The class hierarchy that implements the URL parser of Mozilla 1.7.2 will be tested.

11 Features to be tested
The methods of the utility class nsStdURLParser should be tested. Most of the methods are implemented in its base classes. The methods are:

- ParseURL
- ParseAfterScheme
- ParseAuthority
- ParseUserInfo
- ParseServerInfo
- ParsePath
- ParseFilePath
- ParseFileName

The code segments that are compiled only for the Windows and OS/2 operating systems (that is, when XP_WIN or XP_OS2 is defined) should also be tested. The goal is to achieve as high a multicondition coverage (moniehtokattavuus in Finnish) of the code as possible [Mye04].

12 Features not to be tested

The methods of nsStdURLParser's base classes that are not inherited by nsStdURLParser are not tested. For example, ParseAuthority of the nsBaseURLParser class is not tested, but ParseAuthority of nsAuthURLParser is, because the former is not inherited by nsStdURLParser while the latter is. Non-functional features of the methods are not tested.

13 Approach
13.1 Class hierarchy

The class hierarchy to be tested is located in the nsURLParsers module. The code inside nsURLParsers has not been touched, but the module has been extracted from the Mozilla source tree and relocated to

/share/tmp/testaus/unittest-READONLY

The directory includes all the necessary header files (most of them are empty) needed for compiling the module. To run the tests, a driver and some stubs have to be implemented.

13.2 Stubs

The nsIURLParser module includes the skeletons of two functions (net_isValidScheme, IsAsciiAlpha), one method (ToInteger) and one class constructor (nsCAutoString). They must be implemented in order to test the nsStdURLParser class. The stubs should pass the cppunit tests that are defined in the testCAutoString and testUtilityFunctions modules. You can run the tests by setting up the unit test environment, as explained in Section 15.3, and running there

% make run_stubtest

You are allowed to extract the code for the stubs from the Mozilla source tree, or you can write your own implementations. The latter may be easier because the needed functionality of the stubs is very limited.
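To give an idea of the scale, a hand-written stub can be only a few lines. The sketch below is an illustration under an assumed signature; it must be adapted to whatever the skeleton in nsIURLParser.cpp actually declares.

// Illustrative stub sketch (assumed signature; adapt to the skeleton in
// nsIURLParser.cpp). The parser only needs a plain ASCII letter test,
// so there is no need to extract the full Mozilla string machinery.
static bool IsAsciiAlpha(char c)
{
    return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z');
}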

13.3 Driver

cppunit will be used to run the tests, so you should write the test cases as cppunit classes. To find out how to do that, see the modules testCAutoString and testUtilityFunctions, and refer to the cppunit documentation [CPPUNIT]. Note that the same test cases should be immediately runnable also with new versions of the unit under test. Therefore, do not touch the code of the unit under test.
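For orientation, the overall shape of such a test class is sketched below. The class and method names are illustrative assumptions only; the real pattern to imitate is in the testCAutoString and testUtilityFunctions modules, and the macro details are in the cppunit documentation [CPPUNIT].

// Minimal sketch of a cppunit test class (all names are illustrative).
#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>

class TestParsePath : public CppUnit::TestFixture
{
    CPPUNIT_TEST_SUITE(TestParsePath);   // declare the suite...
    CPPUNIT_TEST(testEmptyPath);         // ...and its test cases
    CPPUNIT_TEST_SUITE_END();

public:
    void testEmptyPath()
    {
        // Call a method of the unit under test and check the results
        // with cppunit assertion macros here (see Section 15.5).
    }
};

// Registration makes the runner in testRunner.cpp execute the suite when
// the object file is linked in; the unit under test itself stays untouched.
CPPUNIT_TEST_SUITE_REGISTRATION(TestParsePath);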


14 Test deliverables
14.1 Test design (the second task)

The outline of the unit test design document is very similar to that of the system test design (Section 7.1). The only difference is that prioritisation is not required here.

1. Features to be tested

In this phase the goal is high multicondition coverage of the code. Identify the conditions in the source code of the unit under test. For example, in the ParseURL method there is the row

for (p = spec; len && *p && !colon && !slash; ++p, --len) {

which has four conditions:

  1. len != 0
  2. *p != 0
  3. colon == 0
  4. slash == 0

These conditions form 16 combinations of truth values (false, false, false, false; false, false, false, true; ...; true, true, true, true), but not all of them may be possible. In the features to be tested, try to describe, separately for each method to be tested, what kinds of inputs are needed to cover all the possible truth value combinations. For example: the length of spec is 0, there is neither a colon nor a slash before the terminating 0, a slash comes before a colon and the terminating 0, etc.

2. Approach refinement

Tell how you are going to run the tests and what you will need to do that. For example, list the files you create or modify for test runs and the files that will be created during the test runs. Tell what the files are for and what they contain. List the commands that will be executed; what they do and why they need to be executed. What should the tester do during test runs? How is the data going to be gathered and analysed? Summarise common attributes of all test cases.

3. Test identification

Give an identifier and a brief description of each test case. List the features that are tested by this test case.

4. Feature pass/fail criteria

Specify the criteria to be used to determine whether a feature has passed or failed.

5. Test cases

At least the following information on every test case should be given:

- Test case identifier
- Input specifications (which method is called and what are the arguments)
- Output specifications (what should be the return value, what should be returned in the arguments, what else has changed its value)
- Environmental needs (which stubs are required to execute this test case)

15 Testing tasks
What you should do in the second and third tasks is told in the first two subsections. The tools that should be used to do it are introduced briefly in the remaining subsections.

15.1 Unit test design (the second task)

1. Read the code of the methods that will be tested.
2. For each method, identify the features to be tested as described in Section 14.1, item 1.
3. For each method, design test cases so that every identified feature will be tested. Specify the parameters that are given to the method and the expected output (what is returned by the function and what is returned in the parameters).
4. Describe how the test cases will be run. Specify which test cases will be implemented in which cppunit test suite.
5. Write the unit test design document.
6. Return the document by 9.3.2005 12:00.

15.2 Unit testing (the third task)

1. Set up the unit testing environment.
2. Implement the stubs in the nsIURLParser.cpp file. The stubs should pass the tests in testCAutoString and testUtilityFunctions.
3. Implement the test cases you have specified as cppunit test suites.
4. Modify the Makefile so that the command gmake run_unittest runs the cppunit test suites and measures the multicondition coverage of the code in nsURLParsers (the MON.sym and MON.dat files are generated).
5. Write the test report (the outline will be given later).
6. Return the test report by 6.4.2005 12:00. Enclose the unittest.tar.gz package with the electronic version of the submission. The package should contain the modified Makefile and all the source code of the cppunit tests and stubs that you have written. When the course robot executes

mkdir cleandir; cd cleandir
. /share/testwell/bin/testwell_environment
echo yes | /share/tmp/testaus/scripts/setup.unittest.sh
cd unittest
gtar xzf /submitted/by/you/unittest.tar.gz
rm -f MON.sym MON.dat *.o
gmake run_unittest

all your test suites should be compiled and run. Also, the files MON.sym and MON.dat should be (re)created. Make sure this really happens before submitting anything!

15.3 Setting up the unit testing environment

Execute

/share/tmp/testaus/scripts/setup.unittest.sh

in a directory where you want the unit test environment to be set up. Less than 300 kB of space will be required. Therefore, it is recommended to use a location that is more reliable than /share/tmp/. Home directories are a good choice.

15.4 Setting up CTC++

CTC++ is used to instrument the code during compilation. When the instrumented code is run, the MON.dat and MON.sym files are created. They contain the coverage information of the instrumented code. You can examine the results by executing ctcpost MON.dat.

Whenever CTC++ tools are used or instrumented code is executed, you must have the CTC++ environment variables set. This can be done by running

. /share/testwell/bin/testwell_environment

(The dot and the space at the front of the line are important!) When the environment is set, you can see the manual page of CTC++ with the command man ctc. For the purposes of this project it should be enough to understand the ctc commands that are explained in Section 15.5.

15.5 Using cppunit and CTC++

Cppunit gives a means of dividing test cases into test suites. It provides macros for implementing test suites and stating assertions in the test cases. What you need is:

- the unit under test (object code is enough)
- the main program (object code is again enough)
- a class for each test suite (a test case in a test suite is a method in the corresponding class; the classes and the methods are what you need to implement)

Given that you have the unit test environment set up (see Section 15.3) and you also have the CTC++ environment variables set (see Section 15.4), run

gmake clean
gmake stubtest

The GNU Make output is explained here:

ctc -i m g++ -o stubs.o -c nsIURLParser.cpp

This compiles nsIURLParser.cpp into the stubs.o object file. nsIURLParser.cpp contains the stubs that you will implement in task three. The code is instrumented with CTC++, because in stubtest the stubs are the unit under test. (When you test nsURLParsers.cpp, it will be instrumented instead of the stubs.) The -i m switch tells CTC++ to instrument the code for measuring multicondition coverage.

g++ -I. -Wall -pedantic -o testRunner.o -c testRunner.cpp

This compiles the main program. The main program runs all the test cases that are written in cppunit classes in the other test*.o files. You do not need to edit testRunner.cpp in any task.

g++ -I. -Wall -pedantic -o testCAutoString.o -c testCAutoString.cpp

This compiles the cppunit test class that will test the implementation of the CAutoString stub class.

g++ -I. -Wall -pedantic -o testUtilityFunctions.o -c testUtilityFunctions.cpp

Similarly to the previous command, this compiles another class containing test cases. These test cases will test the stubs of the utility functions.

ctc -i m g++ -o stubtest stubs.o testRunner.o testCAutoString.o testUtilityFunctions.o -L. -lcppunit

This links the object files into an executable binary. Note that the unit under test (in this case stubs.o) and the main program (testRunner.o) do not need changes or recompilation when new test cases are created and added. Just relink the two object files with whatever test case object files you have, and you will get a new binary that executes all the linked test cases.

Test cases are located in cppunit test classes (test suites). Every test suite has setUp and tearDown methods that are called before and after each test case in the suite. There should also be one method for each test case in the suite. Test cases do the testing and then check the results with assertion macros, for example with the CPPUNIT_ASSERT_MESSAGE(message, assertion) macro. This macro checks the given assertion and, if it fails, causes the execution of the test case to return with the verdict fail and the given message to be printed. A test case passes if its execution reaches the end of the method.
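To make that structure concrete, here is a sketch of one test suite. The class name, the fixture member, and the tested condition are illustrative assumptions; only the cppunit macros and the setUp/tearDown hooks are the real API.

#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>

// Sketch of a test suite with setUp/tearDown and one test case.
// Everything except the cppunit API is an illustrative assumption.
class TestUserInfo : public CppUnit::TestFixture
{
    CPPUNIT_TEST_SUITE(TestUserInfo);
    CPPUNIT_TEST(testEmptyUsername);
    CPPUNIT_TEST_SUITE_END();

    int result;                        // example fixture state

public:
    void setUp()    { result = -1; }   // run before each test case
    void tearDown() { }                // run after each test case

    void testEmptyUsername()
    {
        // Here the test case would call the method under test and store
        // the outcome in 'result'; the value below is a placeholder.
        result = 0;
        CPPUNIT_ASSERT_MESSAGE("empty username was not handled correctly",
                               result == 0);
        // Reaching the end of the method means the verdict is pass.
    }
};

CPPUNIT_TEST_SUITE_REGISTRATION(TestUserInfo);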

16 Environmental needs
Unit tests have to be executed on the Lintula Sparc/Solaris servers or workstations. Only the Sparc/Solaris version of CTC++ is currently installed in Lintula, and only the Sparc/Solaris version of cppunit is ready for use in the unit test environment.

