

RUP: Implementing an Iterative Test Design Approach


Steve Boycan, Managing Director, SIAC
Yuri Chernak, Consultant, Valley Forge Consulting, Inc.

According to RUP, test design should iteratively evolve along with system requirements and design. Here's how it can be implemented by combining features of the requirements-based and exploratory testing schools.

When projects at SIAC (Securities Industry Automation Corporation) began implementing the Rational Unified Process (RUP), the main challenge for testers was to adapt the rigid requirements-based testing style to the iterative nature of the RUP framework. According to the RUP, software requirements and design evolve iteratively throughout the project lifecycle, and the test design is expected to evolve with them. Hence, testers need a test design approach that remains effective in the context of evolving software requirements. Although the RUP methodology is a comprehensive source of activities and guidelines that define the overall testing workflow, it still lacks some details about test design. In particular, we needed more details about a) how the test design steps and artifacts relate to other RUP disciplines, and b) how to implement test design activities based on evolving requirements. To fill this gap, we developed an approach we call Iterative Test Design (ITD). In this article, we describe how our approach combines features of the requirements-based and exploratory testing schools. The main benefit of ITD is that it provides a framework for test design that is effective in the context of incomplete or evolving requirements.

Two Schools of Testing


When we refer to requirements-based testing, we usually mean the traditional school of formal testing that has been practiced for the last 20-30 years. Sources that present this methodology include the books by G. Myers [1], B. Hetzel [2], and E. Kit [3], to name just a few. The main concept of this methodology is that test design and test cases are derived from external functional requirements, which are assumed to be complete and frozen before testers start using them for test design. The commonly cited issue with this approach is that requirements-based testing is only as good as the requirements, which are often incomplete, late, or not available at all. On projects that follow the Unified Process, testers also have an issue with this approach because requirements iteratively evolve and change during the course of a project.

As an alternative to requirements-based testing, the exploratory testing school emerged as a solution for projects that lack software requirements. Cem Kaner initially coined the term "exploratory testing" in his book [4] published over ten years ago. Another testing expert, James Bach, has been evolving and teaching exploratory testing as a systematic risk-based approach [5]. The main concept of this methodology is that test design and test execution happen at the same time, and testers develop test ideas driven by exploration of the product's quality risks. As we will discuss in this article, the features of both testing schools can be combined to help us effectively deal with incomplete or evolving requirements.

Iterative Test Design Approach


Our approach to test design has the following main features:

• Iterative Test Design – test design iteratively evolves in a top-down fashion throughout the Plan-Design-Execute phases of the testing lifecycle.
• Use-Case-Driven Test Design – use cases, defined for the system, are the primary source used for the high-level test design.
• Quality-Risk-Based Test Design – the approach is based on exploration of product quality risks existing in the context of use cases. The identified quality risks become testing objectives that guide testers in designing, prioritizing, and executing tests.
• Project-Context-Driven Test Design – the amount and detail of test documentation depend on a project's context and management objectives.

The ITD workflow is defined as the following four steps:

• Step 1 - Identify Generic Product Quality Risks (and develop a high-level approach to application testing)
• Step 2 - Identify Specific Quality Risks Existing in a Use Case Context (and develop ideas about what to test)
• Step 3 - Refine a Use Case Test Approach (and develop ideas about what and how to test)
• Step 4 - Manually Execute Tests to Explore the Product (and search for new test ideas)

There is an important difference between the last step and the first three steps. In particular, while performing our test design activities during the first three steps, we explore product requirements; whereas at Step 4, we manually execute tests to explore the product itself. Test execution is sometimes viewed as a simple procedure that can be completely automated. Therefore, someone may ask, "Why is test execution a part of test design?" According to the exploratory testing concept [6], software testing is a challenging intellectual process that focuses on software product exploration. That, in turn, requires human observation and analysis that can never be replaced by automated scripts. As a result, in the course of manual test execution, testers can develop new test ideas and enhance their existing test design.

STEP 1 – Identify Generic Product Quality Risks


ITD begins early in the process with defining an approach to application functional testing. The word approach implies that at this point we capture just high-level ideas about what needs to be tested and define preliminary steps for test execution. At Step 1 of our workflow, we do not know many details about the system implementation. However, based on our prior experience and on sources such as user requirements and business use cases, which are usually available early in the process, we can start developing ideas about what in general can go wrong with the system and what kind of failures it can experience in production. That is why we entitled this step Identify Generic Product Quality Risks. By our definition, generic quality risks are particular product quality concerns that have a recurring context across the functional areas of a system under test. Some examples are GUI features, data-entry field validation, record deletion rules, and concurrency control.

Once we have identified generic quality risks, we then develop ideas about how to execute tests for the identified risks. The form used to capture the test logic could be as simple as a checklist of test conditions, or it could be much more elaborate, such as a test design pattern. In general, patterns in software development are used to capture a solution to a problem that has a recurring context [7]. Hence, the test logic for generic quality risks can be presented as test design patterns. The deliverable of this step is a defined approach to application functional testing. The form used to present the test approach could be either a list of test ideas developed at this step, or a section in a test plan document, if such a document is a required deliverable that follows IEEE Std 829-1998, "IEEE Standard for Software Test Documentation".
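
To make this concrete, the sketch below shows one way a test design pattern for a generic quality risk could be captured in a lightweight, structured form. It is an illustrative sketch only; the pattern name, fields, and data-entry validation conditions are hypothetical examples, not artifacts from the SIAC projects.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestDesignPattern:
    """Captures the test logic for a generic quality risk that recurs across functional areas."""
    name: str                  # short name of the generic quality risk
    context: str               # where in the application the risk applies
    quality_risk: str          # what can go wrong
    test_conditions: List[str] = field(default_factory=list)  # checklist of conditions to exercise

# Hypothetical example: a pattern for the "data-entry field validation" generic risk.
field_validation_pattern = TestDesignPattern(
    name="Data-entry field validation",
    context="Any screen that accepts typed user input",
    quality_risk="Invalid input is accepted or valid input is rejected",
    test_conditions=[
        "Empty value in a mandatory field",
        "Value at, just below, and just above the allowed length limits",
        "Characters outside the allowed character set",
        "Leading/trailing whitespace handling",
        "Duplicate value where uniqueness is required",
    ],
)

if __name__ == "__main__":
    # Instantiating the pattern for a concrete field yields a ready-made checklist of test ideas.
    for condition in field_validation_pattern.test_conditions:
        print(f"[{field_validation_pattern.name}] {condition}")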

STEP 2 – Identify Specific Quality Risks Existing in a Use Case Context


ITD is a use-case-driven approach. We can proceed to the next step as soon as drafts of use cases become available, which is, again, early in the process. At this step, we explore software requirements to identify specific quality risks existing in the context of use cases. By using the word context we want to emphasize that testers, when developing test ideas, should look beyond what is written in the use-case description and analyze the whole situation presented in a given use case to understand all possible ways the system can fail.

We begin our activities at this step by analyzing the use case description and decomposing it into a few control flows. For this purpose, we can use a use-case activity diagram, if one is available. Then we analyze each control flow to identify specific quality risks and develop ideas about what to test, i.e., what can go wrong if the user (or actor) performs all possible valid and/or invalid actions. While exploring requirements described in a use case, we should use the list of generic quality risks (our deliverable from Step 1) as a reference source that can suggest test ideas that are not obvious from the use case description. In analyzing use cases, we may find that a particular kind of quality risk, not initially included in the generic list, appears across a number of use cases; hence, it too can be qualified as a generic risk. In this case we should go back to Step 1 and update the list of generic quality risks. Finally, we define test objectives for the use case based on the identified specific quality risks. An important deliverable of this step is a draft of high-level test design specifications that capture our test ideas and test objectives for the use cases.
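
As an illustration of what a draft high-level test design specification might record at this step, the sketch below organizes test ideas per control flow. The use case name, flows, risks, and objectives are hypothetical and invented for the example.

# Hypothetical draft test design specification for an invented "Place Order" use case.
place_order_spec = {
    "use_case": "Place Order",
    "control_flows": [
        {
            "name": "Basic flow: valid order submitted",
            "specific_risks": [
                "Order accepted with an invalid quantity",
                "No confirmation shown after submission",
            ],
            "test_objectives": [
                "Verify quantity validation rules",
                "Verify an order confirmation is displayed and the order is persisted",
            ],
        },
        {
            "name": "Alternate flow: user cancels before submitting",
            "specific_risks": ["Partially entered data is saved after cancellation"],
            "test_objectives": ["Verify that no order record is created on cancel"],
        },
    ],
}

# A quick summary of what to test, grouped by control flow.
for flow in place_order_spec["control_flows"]:
    print(flow["name"])
    for objective in flow["test_objectives"]:
        print("  -", objective)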

STEP 3 – Refine a Use Case Test Approach


As the project progresses, more sources that provide additional details about the system implementation may become available to testers. For example, on our projects at SIAC these could be various Rational Rose diagrams, a user interface specification, storyboards that illustrate use cases, a detailed Software Requirements Specification (SRS), and so on. Based on the additional specifications, we can refine the test approach for each use case and define a rationale for selecting test cases for the already identified test objectives. This is the point where conventional black-box test design techniques, e.g., decision tables, boundary-value analysis, equivalence partitioning, and so on, can be used to identify positive and negative test cases. Once we have identified test cases, we can then develop ideas about how to execute these tests.
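
As a brief illustration of how such a technique turns a test objective into concrete test cases, the sketch below applies boundary-value analysis to a hypothetical numeric field; the field and its valid range of 1 to 1000 are assumptions made for the example.

def boundary_value_cases(minimum: int, maximum: int) -> dict:
    """Derive boundary-value test inputs for a numeric field with an inclusive valid range."""
    return {
        "positive": [minimum, minimum + 1, maximum - 1, maximum],  # values that must be accepted
        "negative": [minimum - 1, maximum + 1],                    # values that must be rejected
    }

# Hypothetical order-quantity field with a valid range of 1..1000.
cases = boundary_value_cases(1, 1000)
print("Accept:", cases["positive"])   # [1, 2, 999, 1000]
print("Reject:", cases["negative"])   # [0, 1001]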

As we learn more details about the system implementation, our understanding of quality risks can further evolve, and we can identify additional test objectives for the use cases. In this case, we should go back to Step 2 and update the objectives in our test designs. When we feel our test designs are fairly complete, we should review them with business experts and, based on their feedback, revise the documents if necessary. Another good practice is to validate the test designs against a system prototype to make sure our documents capture correct test ideas. Deliverables of this step are high-level test design specifications (completed and reviewed at this point) and test case specifications, still in draft form.

According to our approach, the high-level test design specifications are testers' main deliverables. Even though this document type has been known for many years, for some reason testers do not use it as frequently as test case specifications. As we found in the context of complex projects with iterative development, this document type provides the following benefits. First, test design specifications are easy to review and verify because they present high-level ideas about "what" to test, as opposed to test case specifications, which are intended to capture detailed information about "how" to test. Second, being high-level documents, test design specifications are typically less affected by changing and evolving system requirements and design specifications than detailed test case specifications are. Hence, their maintenance cost can be significantly lower. Third, from test design specifications, management can see the number of test cases to be developed and executed. These data, being available early in the process, can help management better allocate resources and schedule testing activities in the project plan.

STEP 4 – Manually Execute Tests to Explore the Product


At some point in the project lifecycle, developers feel that the build is complete and stable enough to be given to testers to do their part. At Step 4, which is the last step of our test design workflow, testers manually execute tests to further explore the product's quality risks. When testers manually execute tests, their understanding of quality risks can significantly evolve. Hence, they can develop new test ideas and add test cases to the existing test designs. The number of additional or "exploratory" test cases will primarily depend on a) the incompleteness of the requirements used for the initial test design, and b) the testers' experience with exploratory testing. Deliverables of this step are high-level test design specifications, enhanced and validated, and detailed test case specifications, completed and verified. One more important activity of this step, the analysis of software defects reported during test execution, is discussed in detail next.

Defect Data Analysis During Test Execution


Before test execution, our test design was based primarily on assumptions about product quality risks. During the test execution step, however, additional data that reflect the areas of product instability become available to testers. Indeed, software defects are a clear manifestation of the product's implementation flaws. Therefore, by analyzing reported defects we can see how our initial assumptions about quality risks are actually realized in the course of test execution. By building a distribution of defects, for example by generic quality risks or by functional areas of the application under test, we can better see the areas of product instability and make a better decision about where the focus of test execution should be. When testers perform system testing as a team, another effective practice is to analyze each software defect reported by other testers. Seeing how other testers find defects and analyzing their test logic can help us develop additional test ideas and perform in-process improvement of our own test design.
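
Such a distribution is straightforward to build from exported defect records. The sketch below is a minimal illustration that assumes each defect report carries fields classifying it by generic quality risk and by functional area; the sample records and category names are invented for the example.

from collections import Counter

# Hypothetical defect records exported from a defect-tracking tool.
defects = [
    {"id": "D-101", "generic_risk": "Data-entry field validation", "area": "Order Entry"},
    {"id": "D-102", "generic_risk": "Concurrency control",         "area": "Order Entry"},
    {"id": "D-103", "generic_risk": "Data-entry field validation", "area": "Reporting"},
    {"id": "D-104", "generic_risk": "Record deletion rules",       "area": "Order Entry"},
]

# Distribution by generic quality risk and by functional area.
by_risk = Counter(d["generic_risk"] for d in defects)
by_area = Counter(d["area"] for d in defects)

print("Defects by generic quality risk:", dict(by_risk))
print("Defects by functional area:", dict(by_area))
# The categories with the highest counts suggest where test execution should focus next.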

Static View: RUP Workflows and ITD Steps


The RUP methodology is represented by a number of disciplines [8], where each discipline groups related process activities and presents them as a logical workflow. Figure 1 shows four of the RUP disciplines (Business Modeling, Requirements, Analysis and Design, and Implementation) whose artifacts are primary sources for our iterative test design approach. Understanding how the ITD steps relate to other RUP disciplines is important because it can help us better plan and schedule test design activities. Also, as the software process iteratively evolves, deliverables of the four disciplines may change and affect the test design deliverables. Hence, understanding how the test design steps depend on other RUP workflows can help us perform impact analysis and keep the evolving project artifacts consistent.

RUP Disciplines        ITD Steps   ITD Deliverables
Business Modeling      Step 1      Approach to testing
Requirements           Step 2      High-level Test Designs (draft)
Analysis and Design    Step 3      High-level Test Designs (completed / reviewed);
                                   Test Case specifications (draft)
Implementation         Step 4      High-level Test Designs (enhanced / validated);
                                   Test Case specifications (completed / verified)

Figure 1. RUP Workflows and ITD Steps


Dynamic View: Testing Lifecycle and ITD Activities

Figure 2 shows an activity diagram that presents, as a logical workflow, all the activities necessary to implement our iterative test design approach. In particular, in this diagram we can see that from each consecutive step we always come back and revise or enhance the test ideas that we developed at the previous step. Also, the diagram illustrates how the steps of the proposed approach correspond to the test process lifecycle defined by the RUP. As we can see, our activities are performed throughout the Plan-Design-Execute phases of the testing lifecycle, which illustrates the top-down nature of our approach.
According to the RUP, project team members have particular roles for performing the software process activities. The test process roles are defined as Test Manager, Test Analyst, Test Designer, and Tester [9]. Different roles do not necessarily mean different people; rather, they define various responsibilities that can be assigned either to the same person or to different team members. Figure 2 illustrates how these roles are involved in implementing our approach. By definition, the Test Analyst is primarily responsible for generating test ideas. As the diagram shows, this role is involved at all steps of the approach workflow. This means that the process of developing test ideas continues throughout the testing lifecycle. Another interesting point is that during test execution (Step 4 of the ITD workflow) all three roles are involved in the test design activities. As we discussed, the Test Analyst continues to explore the product's quality risks and analyzes the reported software defects in order to find new test ideas. The Test Designer, on the other hand, validates the test design specifications and updates them if necessary while executing tests based on the existing test design. Also, if the Test Analyst finds a new test idea, the Test Designer should enhance the existing test design based on it. Finally, the Tester completes and verifies detailed test case specifications while executing tests to make sure they correctly reflect the test execution details. This is especially important when these documents are used for test automation or considered a product that we deliver with the system.
[Figure 2. Testing Lifecycle and ITD Activities: activity diagram relating the phases of the testing lifecycle (Plan Test, Design Test, Execute Test) to the project team roles involved (Test Analyst, Test Designer, Tester).]

ITD is a Project-Context-Driven Approach


On critical projects, the ITD approach can be made compliant with IEEE Std 829 by delivering two types of test-design-related documents: test design specifications and test case specifications. High-level test design specifications are a tester's key deliverables that capture ideas about "what" to test, and detailed test case specifications capture the "how" details about test execution. If establishing traceability is one of the management objectives on a project, it can be implemented at different levels, for example, from a use case to a test design specification (a 1:1 relationship), or from a test design specification to the corresponding test case specifications (a 1:M relationship).
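
The two traceability levels can be represented very simply; the sketch below is an illustrative model with hypothetical identifiers, showing the 1:1 link from a use case to its test design specification and the 1:M link from that specification to its test case specifications.

# Hypothetical identifiers illustrating the two traceability levels.

# 1:1 - each use case maps to exactly one high-level test design specification.
use_case_to_design_spec = {
    "UC-01 Place Order": "TDS-01",
    "UC-02 Cancel Order": "TDS-02",
}

# 1:M - each test design specification maps to many detailed test case specifications.
design_spec_to_test_cases = {
    "TDS-01": ["TC-01.1", "TC-01.2", "TC-01.3"],
    "TDS-02": ["TC-02.1", "TC-02.2"],
}

def trace(use_case: str) -> list:
    """Return all test case specifications that trace back to a given use case."""
    design_spec = use_case_to_design_spec[use_case]
    return design_spec_to_test_cases[design_spec]

print(trace("UC-01 Place Order"))   # ['TC-01.1', 'TC-01.2', 'TC-01.3']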

Our test design approach is project-context-driven. This means that our decisions about whether we need detailed test case specifications, and what level of detail is appropriate for test design specifications, should depend on a project's context and management objectives. In general, before deciding how much to invest in designing and maintaining test documentation, we should clarify the requirements for the test documentation by answering questions such as "What is the purpose of the test documentation?" and "Who will be using the test documentation, and how?" The same type of test documentation can have a different level of detail for different purposes. For example, it could be a product to be delivered with a system that should comply with contractual requirements; or it could be a tool for testers that should help them execute testing in a structured way; or it could be a tool for management that should help them better plan and control the project. Therefore, like any other deliverable on a project, test documentation should have defined requirements and design guidelines tailored to a given project context. The book "Lessons Learned in Software Testing" [6] has a detailed discussion of this topic.

ITD Implementation Challenges


While implementing iterative test design on our projects, we faced a few challenges that may well be common to any project that follows this approach. The first issue concerns a paradigm shift. The transition to the exploratory way of testing could be challenging for testers who are mostly experienced with requirements-based testing. To resolve this issue, management should plan training for testers and test design reviews.

The second issue can best be described by the question "What to test next?" Under time pressure during test execution, if a tester finds a new test idea, it could be a challenge for the tester to decide what to test next: develop and execute new test cases, or keep going with the original plan. This issue relates to the next one, planning the test schedule. When planning the test schedule, a test manager should take into account the additional test cases that testers will identify during test execution. This can also resolve the previous issue and leave testers enough time to explore new test ideas found during test execution. However, we do not know in advance how many new test cases will be found. The test manager should make an assessment based on the given project context and the experience of previous projects.

The last issue we would like to address relates to test documentation management. Commercial test management tools may not directly support a two-level hierarchy of documents, i.e., high-level test designs and detailed test case specifications. This issue can be resolved in a number of ways. For example, we can use a commercial test management tool only as a repository of test case specifications, and the test design specifications can be created as external Word documents that are associated with the test cases in the repository. An alternative approach is to develop and use a customized test management tool that can fit particular project needs better than commercial tools can.

Iterative Test Design Benefits


Concluding this article, we would like to highlight the main benefits that the approach can provide for a project team:

• ITD provides testers with a framework for product exploration and effective test design when requirements are incomplete or evolving.
• ITD is defined as a top-down workflow that helps testers better cope with the functional complexity of a system under test.
• High-level test design specifications, the main deliverable of the ITD approach, are easy to review and maintain.
• By delivering test designs early in the process, the approach provides management with better visibility into the expected cost and effectiveness of functional testing.

Biography of Yuri Chernak


Yuri Chernak is the president and principal consultant of Valley Forge Consulting, Inc.
As a consultant, Yuri has worked for a number of major financial firms in New York
helping senior management improve their software test process. Yuri has a Ph.D. in
computer science and he is a member of the IEEE Computer Society. He has been a
speaker at several international conferences and has published papers on software testing
in the IEEE journals. Contact him by email: ychernak@yahoo.com.

Biography of Steve Boycan


Steve Boycan has been leading software engineering and
process improvement efforts in the military, telecommunications, and financial sectors
for 12 years. He currently manages several testing-related functions for critical NYSE
systems. He also manages groups that provide management support tools and automated
test infrastructure development. Process improvement and software project measurement
are key areas of interest.

Acknowledgement
This work was supported by the Application Scripting Group (ASG) at SIAC, and the
authors are grateful to the ASG testers for their feedback.
References

[1] G. Myers, "The Art of Software Testing", John Wiley & Sons, New York, 1979
[2] B. Hetzel, "The Complete Guide to Software Testing", John Wiley & Sons, 1988
[3] E. Kit, "Software Testing in the Real World", Addison-Wesley, 1995
[4] C. Kaner, "Testing Computer Software", International Thomson Computer Press, 1988
[5] J. Bach, "Risk-Based Testing", STQE Magazine, November 1999, pp. 22-29
[6] C. Kaner, J. Bach, B. Pettichord, "Lessons Learned in Software Testing", John Wiley & Sons, 2002
[7] E. Gamma et al., "Design Patterns: Elements of Reusable Object-Oriented Software", Addison-Wesley, 1995
[8] P. Kruchten, "The Rational Unified Process: An Introduction", Addison-Wesley, 2000
[9] P. Kroll, P. Kruchten, "The Rational Unified Process Made Easy: A Practitioner's Guide to the RUP", Addison-Wesley, 2003
