
SOFTWARE DEFECT ORIGINS AND REMOVAL METHODS

Capers Jones, Vice President and Chief Technology Officer


Namcook Analytics LLC www.Namcook.com
Draft; July 21, 2013

Abstract
The cost of finding and fixing bugs or defects is the largest single expense element in the history of software. Bug repairs start with requirements and continue through development. After release, bug repairs and related customer support costs continue until the last user signs off. Over a 25-year life expectancy of a large software system in the 10,000 function point size range, almost 50 cents out of every dollar will go to finding and fixing bugs.

Given the fact that bug repairs are the most expensive element in the history of software, it might be expected that these costs would be measured carefully and accurately. They are not. Most companies do not measure defect repair costs, and when they do they often use metrics that violate standard economic assumptions, such as "lines of code" and "cost per defect," neither of which measures the value of software quality. Both of these measures distort quality economics. Lines of code penalize high-level languages. Cost per defect penalizes quality. A newer metric, "technical debt," is a good metaphor but incomplete. Technical debt does not include projects with such bad quality that they are canceled and never delivered, so there is no downstream debt.

Poor measurement practices have led to the fact that a majority of companies do not know that achieving high levels of software quality will shorten schedules and lower costs at the same time. But testing alone is insufficient. A synergistic combination of defect prevention, pre-test defect removal, and formal testing using mathematical methods all need to be part of the quality technology stack.
Copyright 2012-2013 by Capers Jones. All rights reserved.
Introduction
The software industry spends about $0.50 out of every $1.00 expended for development and maintenance on finding and fixing bugs. Most forms of testing are below 35% in defect removal efficiency; that is, they remove only about one bug out of three. All tests together seldom top 85% in defect removal efficiency. About 7% of bug repairs include new bugs. About 6% of test cases have bugs of their own. These topics need to be measured, controlled, and improved. Security flaws are leading to major new costs for recovery after attacks. Better security is a major subset of software quality.

A synergistic combination of defect prevention, pre-test defect removal, and formal testing by certified personnel can top 99% in defect removal efficiency while simultaneously lowering costs and shortening schedules.

For companies that know how to achieve it, high quality software is faster and cheaper to build than low quality software. This article lists all of the major factors that influence software quality as of year-end 2013.

These quality-related topics can all be measured and predicted using the author's Software Risk Master (SRM) tool. Additional data is available from the author's books, The Economics of Software Quality (Addison Wesley, 2011) and Software Engineering Best Practices (McGraw Hill, 2010).
Software Defect Origins
Software defects originate in multiple sources. The approximate U.S. total for defects in requirements, design, code, documents, and bad fixes is 5.00 per function point. Best in class projects are below 2.00 per function point. Projects in litigation for poor quality can top 7.00 defects per function point.

Defect potentials circa 2013 for the United States average:

    Requirements    1.00 defects per function point
    Design          1.25 defects per function point
    Code            1.75 defects per function point
    Documents       0.60 defects per function point
    Bad fixes       0.40 defects per function point
    Total           5.00 defects per function point

Note that these values are not constants; they vary from below 2.00 per function point to above 7.00 per function point based on team experience, methodology, programming language or languages, certified reusable materials, and application size.

The best results would come from small projects below 500 function points in size, developed by expert teams using quality-strong methods and very high-level programming languages with substantial volumes of certified reusable materials.

The worst results would come from large systems above 50,000 function points in size, developed by novice teams using ineffective methodologies and low-level programming languages with little or no use of certified reusable materials.
The major defect origins include:

1. Functional requirements
2. Non-functional requirements
3. Architecture
4. Design
5. New source code
6. Uncertified reused code from external sources
7. Uncertified reused code from legacy applications
8. Uncertified reused designs, architecture, etc.
9. Uncertified reused test cases
10. Documents (user manuals, HELP text, etc.)
11. Bad fixes or secondary defects in defect repairs (7% is the U.S. average)
12. Defects due to creeping requirements that bypass full quality controls
13. Bad test cases with defects in them (6% is the U.S. average)
14. Data defects in databases and web sites
15. Security flaws that are invisible until exploited
Far too much of the software literature concentrates on code defects and ignores the more numerous defects found in requirements and design. It is also interesting that many of the companies selling quality tools, such as static analysis tools and test tools, focus only on code defects.

Unless requirement and design defects are prevented or removed before coding starts, they will eventually find their way into the code, where it may be difficult to remove them. It should not be forgotten that the famous "Y2K" problem ended up in code, but originated as a corporate requirement to save storage space.

Some of the more annoying Windows 8 problems, such as the hidden and arcane method needed to shut down Windows 8, did not originate in the code, but rather in questionable upstream requirements and design decisions.

Why the second most common operating system command is hidden from view and requires three mouse clicks to execute is a prime example of why applications need requirement inspections, design inspections, and usability studies as well as ordinary code testing.
Proven Methods for Preventing and Removing Software Defects

Defect Prevention
The set of defect prevention methods can lower defect potentials from the U.S. average of about 5.00 per function point to below 2.00 per function point. Certified reusable materials are the most effective known method of defect prevention. A number of Japanese quality methods are beginning to spread to other countries and are producing good results. Defect prevention methods include:

1. Joint Application Design (JAD)
2. Quality function deployment (QFD)
3. Certified reusable requirements, architecture, and design segments
4. Certified reusable code
5. Certified reusable test plans and test cases (regression tests)
6. Kanban for software (mainly in Japan)
7. Kaizen for software (mainly in Japan)
8. Poka-yoke for software (mainly in Japan)
9. Quality circles for software (mainly in Japan)
10. Six Sigma for software
11. Achieving CMMI level 3 or higher for critical projects
12. Using quality-strong methodologies such as RUP and TSP
13. Embedded users for small projects below 500 function points
14. Formal estimates of defect potentials and defect removal before starting projects
15. Formal estimates of cost of quality (COQ) and technical debt (TD) before starting
16. Quality targets such as > 97% defect removal efficiency (DRE) in all contracts
17. Function points for normalizing quality data
18. Analysis of user-group requests or customer suggestions for improvement
19. Formal inspections (leads to long-term defect reductions)
20. Pair programming (leads to long-term defect reductions)
Analysis of software defect prevention requires measurement of similar projects that do and do not use specific approaches such as JAD or QFD. Of necessity, studying defect prevention needs large numbers of projects and full measures of their methods and results.

Note that two common metrics for quality analysis, "lines of code" and "cost per defect," have serious flaws and violate standard economic assumptions. These two measures conceal, rather than reveal, the true economic value of high software quality.

Another newer measure is that of "technical debt," or the downstream costs of repairing defects that are in software released to customers. Although technical debt is an interesting metaphor, it is not complete and ignores several major quality costs.

The most serious omission from technical debt is projects whose quality is so bad that the application is canceled and never delivered at all. Canceled projects have a huge cost of quality but zero technical debt, since they never reach customers. Another omission from technical debt is the cost of financial awards by courts when vendors are sued and lose lawsuits for poor quality.

Function point metrics are the best choice for quality economic studies. The new SNAP non-functional size metric has recently been released, but little quality data is available because the metric is too new.

The metric concept of "technical debt" is discussed again later in this article, but needs expansion and standardization.
Pre-Test Defect Removal
What happens before testing is even more important than testing itself. The most effective known methods of eliminating defects circa 2013 include requirements models; automated proofs; formal inspections of requirements, design, and code; and static analysis of code and text.

(A popular method for Agile and extreme programming (XP) projects is that of "pair programming." This method is expensive and does not deliver better quality than individual programmers who use static analysis and inspections. Pair programming would not exist if companies had better measures for costs and quality. Unlike inspections, pair programming was not validated prior to introduction, and still has little available benchmark data, whereas inspections have thousands of measured projects.)
Formal inspections have been measured to top 85% in defect removal efficiency and have more than 40 years of empirical data from thousands of projects. Methods such as inspections also raise testing defect removal efficiency by more than 5% for each major test stage. The major forms of pre-test defect removal include:

1. Desk checking by developers
2. Debugging tools (automated)
3. Pair programming (with caution)
4. Quality Assurance (QA) reviews of major documents and plans
5. Formal inspections of requirements, design, code, UML, and other deliverables
6. Formal inspections of requirements changes
7. Informal peer reviews of requirements, design, and code
8. Editing and proofreading critical requirements and documents
9. Text static analysis of requirements and design
10. Code static analysis of new, reused, and repaired code
11. Running FOG and FLESCH readability tools on text documents
12. Requirements modeling (automated)
13. Automated correctness proofs
14. Refactoring
15. Independent verification and validation (IV&V)

Pre-test inspections have more than 40 years of empirical data available and rank as the top method of removing software defects, consistently topping 85% in defect removal efficiency (DRE).

Static analysis is a newer method that is also high in DRE, frequently topping 65%. Requirements modeling is another new and effective method that has proved itself on complex software such as that operating the Mars Rover. Requirements modeling and inspections can both top 85% in defect removal efficiency (DRE).
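Because removal stages act in series, each one screening the survivors of the stage before, the cumulative efficiency of several pre-test activities can be approximated as 1 minus the product of the individual escape rates. A minimal sketch, using the approximate figures quoted above:

    # Removal stages act in series, so cumulative DRE = 1 - product(1 - e).
    # The efficiencies are the approximate figures quoted above for
    # inspections (~85%) and static analysis (~65%).

    def cumulative_dre(efficiencies):
        escape = 1.0
        for e in efficiencies:
            escape *= (1.0 - e)  # fraction of defects that slip past this stage
        return 1.0 - escape

    print(f"{cumulative_dre([0.85, 0.65]):.1%}")  # 94.8% removed before testing starts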
As mentioned earlier, one of the more unusual offshoots of some of the Agile methods such as extreme programming (XP) is "pair programming." The pair programming approach is included in the set of pre-test defect removal activities.

With pair programming, two individuals share an office and workstation and take turns coding while the other observes.

This should have been an interesting experiment, but due to poor measurement practices it has passed into actual use, with expensive results. Individual programmers who use static analysis and inspections achieve better quality at about half the cost and 75% of the schedule of a pair.

If two top guns are paired, the results will be good, but the costs about 40% higher than either one working alone. Since there is a severe shortage of top-gun software engineers, it is not cost effective to have two of them working on the same project. It would be better for each of them to tackle a separate important project. Top guns comprise only about 5% of the overall software engineering population.

If a top gun is paired with an average programmer, the results will be better than the average team member might produce, but about 50% more expensive. The quality is no better than the more experienced pair member working alone. If pairs are considered a form of mentoring, there is some value for improving the performance of the weaker team member.

If two average programmers are paired, the results will still be average, and the costs will be about 80% higher than either one alone.

If a marginal or unqualified person is paired with anyone, the results will be suboptimal and the costs about 115% higher than the work of the better team member working alone. This is because the unqualified person is a drag on the performance of the pair.

Since there are not enough qualified top-gun programmers to handle all of the normal work in many companies, doubling them up adds costs but subtracts from the available work force.

There are also 116 occupation groups associated with software. If programmers are to be paired, why not pair architects, designers, testers, and project managers?

Military history is not software, but it does provide hundreds of examples that shared commands often lead to military disaster. The Battle of Cannae is one such example, since the Roman commanders alternated command days.

Some pairs of authors can write good books, such as Douglas Preston and Lincoln Child. But their paired books are no better than the books each author wrote individually.

Pairing should have been measured and studied prior to becoming an accepted methodology, but instead it was put into production with little or no empirical data. This phenomenon of rushing to use the latest fad without any proof that it works is far too common for software.

Most of the studies of pair programming do not include the use of inspections or static analysis. They merely take a pair of programmers and compare the results against one unaided programmer who does not use modern pre-test removal methods such as static analysis and peer reviews.

Two carpenters using hammers and hand saws can certainly build a shed faster than one carpenter using a hammer and a hand saw. But what about one carpenter using a nail gun and an electric circular saw? In this case the single carpenter might well win if the pair is only using hammers and hand saws.

By excluding other forms of pre-test defect removal such as inspections and static analysis, the studies of pair programming are biased and incomplete.
Test Defect Removal
Testing has been the primary software defect removal method for more than 50 years. Unfortunately, most forms of testing are only about 35% efficient; that is, they find only one bug out of three.

Defects in test cases themselves and duplicate test cases lower test defect removal efficiency. About 6% of test cases have bugs in the test cases themselves. In some large companies as many as 20% of regression test libraries are duplicates, which add to testing costs but not to testing rigor.

Due to low defect removal efficiency, at least eight forms of testing are needed to achieve reasonably efficient defect removal. Pre-test inspections and static analysis are synergistic with testing and raise testing efficiency.

Tests by certified test personnel, using test cases designed with formal mathematical methods, have the highest levels of test defect removal efficiency and can top 65%. The major forms of test-related factors for defect removal include:

1. Certified test personnel
2. Formal test plans published and reviewed prior to testing
3. Certified reusable test plans and test cases for regression testing
4. Mathematically based test case design, such as using design of experiments
5. Test coverage tools for requirements, code, data, etc.
6. Automated test tools
7. Cyclomatic complexity tools for all new and changed code segments
8. Test library control tools
9. Capture-recapture testing
10. Defect tracking and routing tools
11. Inspections of major code changes prior to testing
12. Inspections of test libraries to remove bad and duplicate test cases
13. Special tests for special defects: performance, security, etc.
14. A full suite of test stages including:
    a. Subroutine test
    b. Unit test
    c. New function test
    d. Regression test
    e. Component test
    f. Performance test
    g. Usability test
    h. Security test
    i. System test
    j. Supply-chain test
    k. Cloud test
    l. Data migration test
    m. ERP link test
    n. External beta test
    o. Customer acceptance test
    p. Independent test (primarily military projects)
Testing by itself, without any pre-test inspections or static analysis, is not sufficient to achieve high quality levels. The poor estimation and measurement practices of the software industry have long slowed progress on achieving high quality in a cost-effective fashion.

However, modern risk-based testing by certified test personnel with automated test tools, who also use mathematically derived test case designs plus tools for measuring test coverage and cyclomatic complexity, can do a very good job and top 65% in defect removal efficiency for the test stages of new function test, component test, and system test.

Untrained amateur personnel such as the developers themselves seldom top 35% for any form of testing. Also, the "bad fix injection" rate, or new bugs added while fixing older bugs, tops 7% for repairs by ordinary development personnel.

Bad-fix injection is directly proportional to cyclomatic complexity and inversely proportional to experience. Bad fixes by a top-gun software engineer working with software of low cyclomatic complexity can be only a fraction of 1%.

At the other end of the spectrum, bad fixes by a novice trying to fix a bug in an error-prone module with high cyclomatic complexity can top 25%.

In one lawsuit where the author was an expert witness, the vendor tried four times over nine months to fix a bug in a financial package. Each of the first four fixes failed, and each added new bugs. Finally, after nine months, the fifth bug repair fixed the original bug and did not cause regressions. By then the client had restated prior-year financials, and that was the reason for the lawsuit.

Another issue that is seldom discussed in the literature is that of bugs or errors in the test cases themselves. On average about 6% of test cases contain errors. Running defective test cases adds costs to testing but nothing to defect removal efficiency. In fact, defective test cases lower DRE.
National Defect Removal Efficiency and Software Quality
Some countries such as Japan, India, and South Korea place a strong emphasis on quality in all manufactured products, including software. Other countries, such as China and Russia, apparently have less interest in and less understanding of quality economics and seem to lag in quality estimation and measurement. Among the quality-strong countries, Japan, for example, had more than 93% DRE on the projects examined by the author.

From a comparatively small number of international studies, the approximate national rankings of the top 20 countries in terms of DRE are:
Quality Ranks of 20 Countries:
1. Japan
2. India
3. Denmark
4. South Korea
5. Switzerland
6. Israel
7. Canada
8. United Kingdom
9. Sweden
10. Norway
11. Netherlands
12. Hungary
13. Ireland
14. United States
15. Brazil
16. France
17. Australia
18. Austria
19. Belgium
20. Finland
All of the countries in the top 20 can produce excellent software, and often do. Countries with significant amounts of systems software, embedded software, and defense software are more likely to have good quality control than countries producing mainly information technology packages.

Almost all countries in 2013 produce software in significant volumes. More than 50 countries produce millions of function points per year. The preliminary ranks shown here indicate that more studies are needed on international software quality initiatives.

Countries that might be poised to join the top-quality set in the future include Malaysia, Mexico, the Philippines, Singapore, Taiwan, Thailand, and Viet Nam. Russia and China should be top-ranked but are not, other than Hong Kong.

Quality measures and predictive quality estimation are necessary precursors to achieving top quality status. Defect prevention and pre-test defect removal must be added to testing to achieve top-rank status.

One might think that the wealthier countries of the Middle East such as Dubai, Saudi Arabia, Kuwait, and Jordan would be in the top 20 due to regional wealth and the number of major software-producing companies located there. Although the Middle East ranks near the top in requesting benchmark data, little or no data has been published from the region about software quality.
Industry Defect Removal Efficiency and Software Quality
In general, the industries that produce complex physical devices such as airplanes, computers, medical devices, and telephone switching systems have the highest levels of defect removal efficiency, the best quality measures, and the best quality estimation capabilities.

This is a necessity, because these complex devices won't operate unless quality approaches zero defects. Also, the manufacturers of such devices have major liabilities in case of failures, including possible criminal charges. The top 20 industries in terms of defect removal efficiency are:
Quality Ranks of 20 U.S. Industries:
1. Government - intelligence agencies
2. Manufacturing - medical devices
3. Manufacturing - aircraft
4. Manufacturing - mainframe computers
5. Manufacturing - telecommunications switching systems
6. Telecommunications - operations
7. Manufacturing - defense weapons systems
8. Manufacturing - electronic devices and smart appliances
9. Government - military services
10. Entertainment - film and television production
11. Manufacturing - pharmaceuticals
12. Transportation - airlines
13. Manufacturing - tablets and personal computers
14. Software - commercial
15. Manufacturing - chemicals and process control
16. Banks - commercial
17. Banks - investment
18. Health care - medical records
19. Software - open source
20. Finance - credit unions
As of 2013 more than 500 industries produce software in significant volumes. Some of the lagging industries, near the bottom in terms of software defect removal efficiency levels, are state governments, municipal governments, wholesale chains, retail chains, public utilities, cable television billing and finance, and some but not all insurance companies in the areas of billing and accounting software. Even worse are some of the companies that do programmed stock market trading, which all by themselves have triggered major financial disruptions due to software bugs.

For example, in Rhode Island one of the author's insurance companies seems to need more than a week to post payments and often loses track of payments. Once the author's insurance company even sent back a payment check with a note that it had been "paid too early." Apparently the company was unable to add early payments to accounts. (The author was leaving on an international trip and wanted to pay bills prior to departure.)

Another more recent issue involved the insurance company unilaterally changing all account numbers. Unfortunately, they seemed not to have developed a good method for mapping clients' old account numbers to the new account numbers.

A payment made just after the cutover, but using the old account number, required 18 days from when the check reached the insurance company until it was credited to the right account. Before that, the company had sent out a cancellation notice for the policy. From discussions with other clients, apparently the company loses surprisingly many payments. This is a large company with a large IT staff and hundreds of clerical workers.

Companies that are considering outsourcing may be curious as to the placement of software outsource vendors. From the author's studies of various industries, outsource vendors rank as number 25 out of 75 industries for ordinary information technology outsourcing. For embedded and systems software outsourcing, the outsource vendors are approximately equal to industry averages for aircraft, medical device, and electronic software packages.

Another interesting question is how good the defect removal methods practiced by quality companies themselves are, such as static analysis companies, automated test tool companies, independent testing companies, and defect tracking tool companies.

Interestingly, these companies publish no data about their own results and seem to avoid having outside consulting studies done that would identify their own defect removal efficiency levels.

No doubt the static analysis companies use their own tools on their own software, but they do not publish accurate data on the measured effectiveness of these tools.

All of the test and static analysis companies should publish annual reports that show ranges of defect removal efficiency (DRE) results using their tools, but none are known to do this.
Software Development Methods
Some software development methods, such as IBM's Rational Unified Process (RUP) and Watts Humphrey's Team Software Process (TSP), can be termed "quality strong" because they lower defect potentials and elevate defect removal efficiency levels.

Other methods, such as Waterfall and Cowboy development, can be termed "quality weak" because they raise defect potentials and have low levels of defect removal efficiency. The 30 methods shown here are ranked in approximate order of quality strength. The list is not absolute, and some methods are better than others for specific sizes and types of projects. Development methods in rank order of defect prevention include:
1. Mashup (construction from certified reusable components)
2. Hybrid
3. IntegraNova
4. TSP/PSP
5. RUP
6. T-VEC
7. Extreme Programming (XP)
8. Agile/Scrum
9. Data state design (DSD)
10. Information Engineering (IE)
11. Object-Oriented (OO)
12. Rapid Application Development (RAD)
13. Evolutionary Development (EVO)
14. Jackson development
15. Structured Analysis and Design Technique (SADT)
16. Spiral development
17. Structured systems analysis and design method (SSADM)
18. Iterative development
19. Flow-based development
20. V-Model development
21. Prince2
22. Merise
23. Dynamic systems development method (DSDM)
24. Clean-room development
25. ISO/IEC
26. Waterfall
27. Pair programming
28. DoD 2167A
29. Proofs of correctness (manual)
30. Cowboy
Once again, this list is not absolute, and situations change. Since Agile development is so popular, it should be noted that Agile is fairly strong in quality but not the best. Agile projects frequently achieve DRE in the low 90% range, which is better than average but not top-ranked.

Agile lags many leading methods in having very poor quality measurement practices. The poor measurement practices associated with Agile for both quality and productivity will eventually lead CIOs, CTOs, CFOs, and CEOs to ask if actual Agile results are as good as claimed.

Until Agile projects publish productivity data using function point metrics, and quality data using function points and defect removal efficiency (DRE), the effectiveness of Agile remains ambiguous and uncertain.

Studies by the author found Agile to be superior in both quality and productivity to waterfall development, but not as good in quality as either RUP or TSP.

Also, a Google search using phrases such as "Agile failures" and "Agile successes" turns up about as many discussions of failure as of success. A new occupation of "Agile coach" has emerged to help reduce the instances of getting off track when implementing Agile.
Overall Quality Control
Successful quality control stems from a synergistic combination of defect prevention, pre-test defect removal, and test stages. The best projects in the industry circa 2013 combine defect potentials in the range of 2.0 defects per function point with cumulative defect removal efficiency levels that top 99%. The U.S. average circa 2013 is about 5.0 bugs per function point and only about 85% defect removal efficiency. The major forms of overall quality control include:

1. Formal software quality assurance (SQA) teams for critical projects
2. Measuring defect detection efficiency (DDE)
3. Measuring defect removal efficiency (DRE)
4. Targets for topping 97% in DRE for all projects
5. Targets for topping 99% in DRE for critical projects
6. Inclusion of DRE criteria in all outsource contracts (> 97% is suggested)
7. Formal measurement of cost of quality (COQ)
8. Measures of "technical debt," but augmented to fill major gaps
9. Measures of total cost of ownership (TCO) for critical projects
10. Monthly quality reports to executives for ongoing and released software
11. Production of an annual corporate software status and quality report
12. Achieving CMMI level 3 or higher

IBM started to measure defect origins, defect potentials, and defect removal efficiency (DRE) levels in the early 1970s. These measures were among the reasons for IBM's market success in both hardware and software. High quality products are usually cheaper to produce, much cheaper to maintain, and bring high levels of customer loyalty.

The original IBM DRE studies used six months after release for calculating DRE, but due to updates that occur before six months, that interval was difficult to use and control. The switch from six-month to 90-day DRE intervals occurred in 1984.

Defect removal efficiency is measured by accumulating data on all bugs found prior to release and also on bugs reported by clients in the first 90 days of use. If developers found 90 bugs and users reported 10 bugs, then DRE is clearly 90%.
The International Software Benchmarking Standards Group (ISBSG) uses only a 30-day interval after release for measuring DRE. The author measures both 30-day and 90-day intervals. Unfortunately, the 90-day defect counts average about four to five times larger than the 30-day defect counts, due to the installation and learning curves of software, which delay normal usage until late in the first month.

A typical 30-day ISBSG count of DRE might show 90 bugs found internally and 2 bugs found in 30 days, for a DRE of 97.82%.

A full 90-day count of DRE would still show 90 bugs found internally, but 10 bugs found in three months, for a lower DRE of only 90.00%.
Although a fixed time interval is needed to calculate DRE, that does not mean that all bugs are found in only 90 days. In fact, the 90-day DRE window usually finds less than 50% of the bugs reported by clients in one calendar year.

Bug reports correlate strongly with the numbers of production users of software applications. Unless a software package is something like Windows 8, with more than 1,000,000 users on the first day, it usually takes at least a month to install complex applications, train users, and get them started on production use.

If there are fewer than 10 users in the first month, there will be very few bug reports. Therefore, in addition to measuring DRE, it is also significant to record the numbers of users for the first three months of the application's production runs.

If we assume an ordinary information technology application, the following table shows the probable numbers of reported bugs after one, two, and three months for 10, 100, and 1,000 users:
Defects by Users for Three Months

    Month    10 Users    100 Users    1,000 Users
      1          1           3              6
      2          3           9             18
      3          6          12             24
As it happens, the central column of 100 users for three months is a relatively common pattern. Note that for purposes of measuring defect removal efficiency, a single month of usage tends to yield artificially high levels of DRE due to a normal lack of early users.

Companies such as IBM with continuous quality data are able to find out many interesting and useful facts about defects that escape and are delivered to clients. For example, for financial software there will be extra bug reports at the end of standard fiscal years, due to exercising annual routines.

Also of interest is the fact that about 15% of bug reports are "invalid" and not true bugs at all. Some are user errors, some are hardware errors, and some are bugs against other software packages that were mistakenly reported to the wrong place. It is very common to confuse bugs in operating systems with bugs in applications.

As an example of an invalid defect report, the author's company once received a bug report against a competitive product, sent to us by mistake. Even though this was not a bug against our software, we routed it to the correct company and sent a note back to the originator as a courtesy. It took about an hour to handle a bug against a competitive software package. Needless to say, invalid defects such as this do not count as technical debt or cost of quality (COQ). However, they do count as overhead costs.

An interesting metaphor called "technical debt" was created by Ward Cunningham and is now widely deployed, although it is not deployed the same way by most companies. Several software quality companies, such as OptiMyth in Spain, CAST Software, and SmartBear, feature technical debt discussions on their web sites.

The concept of technical debt is intuitively appealing. Shortcuts made during development that lead to complex code structures or to delivered defects will have to be fixed at some point in the future. When the time comes to fix these problems downstream, the costs will be higher and the schedules longer than if they had been avoided in the first place.
The essential concept of technical debt is that questionable design and code decisions have increasing repair costs over time. As a metaphor or interesting concept, technical debt has much to recommend it.

But the software industry is far from sophisticated in understanding finance and economic topics. In fact, for more than 50 years the software industry has tried to measure quality costs with "lines of code" and "cost per defect," which are so inaccurate as to be viewed as professional malpractice for quality economics.

Also, many companies only measure about 37% of software project effort and 38% of software defects. Unpaid overtime, managers, and specialists are commonly omitted from effort data; bugs found in requirements, design, and unit testing are commonly omitted from quality data.

Until the software industry adopts standard charts of accounts and begins to use generally accepted accounting principles (GAAP), measures of technical debt will vary widely from company to company and will not be comparable.

Technical debt runs head on into the general ineptness of the software world in understanding and measuring the older cost of quality (COQ) in a fashion that matches standard economic assumptions. Cost per defect penalizes quality. Lines of code penalize modern high-level languages and, of course, make requirements and design defects invisible. Defect repair costs per function point provide the best economic indicator. However, the new SNAP metric for non-functional requirements needs to be incorporated.

The main issue with technical debt as widely deployed by the author's clients is that it does not include or measure some of the largest quality costs in all of software history.

About 35% of large software systems are cancelled and never delivered at all. The most common reason for cancellation is poor quality. But since the cancelled projects don't get delivered, there are no downstream costs and hence no technical debt either. The costs of cancelled projects are much too large to ignore and just leave out of technical debt.

The second issue involves software that does get delivered and indeed accumulates technical debt in the form of changes that need to be repaired. But some software applications have such bad quality that clients sue the developers for damages. The costs of litigation and the costs of any damages that the court orders software vendors to pay should be part of technical debt.

What about the consequential damages that poor software quality brings to the clients who have been harmed by the upstream errors and omissions? Currently, technical debt as used by most companies is limited to internal costs borne by the development organization.

For example, suppose a bug in a financial application, caused by rushing through development, costs the software vendor $100,000 to fix a year after release, when it could have been avoided for only $10,000. The expensive repair is certainly technical debt that might have been avoided.

Now suppose this same bug damaged 10 companies and caused each of them to lose $3,000,000 due to having to restate prior-year financial statements. What about the $30,000,000 in consequential damages to users of the software? These damages are currently not considered to be part of technical debt.

If the court orders the vendor to pay for the damages and the vendor is charged $30,000,000, that probably should be part of technical debt. Litigation costs and damages are not currently included in the calculations most companies use for technical debt.
For financial debt there is a standard set of principles and practices called the "Generally Accepted Accounting Principles," or GAAP. The software industry in general, and technical debt in particular, need a similar set of "Software Generally Accepted Accounting Principles," or SGAAP, that would allow software projects and software costs to be compared in a uniform fashion.

As this article is being written, the United States GAAP rules are being phased out in favor of a newer set of international financial rules called the International Financial Reporting Standards (IFRS). Here too software needs a set of "Software International Financial Reporting Standards," or SIFRS, to ensure accurate software accounting across all countries.

Software engineers interested in technical debt are urged to read the GAAP and IFRS accounting standards and familiarize themselves with normal cost accounting as a precursor to applying technical debt.

The major GAAP principles are relevant to software measures and also to technical debt:

1. Principle of regularity
2. Principle of consistency
3. Principle of sincerity
4. Principle of permanence of methods
5. Principle of non-compensation, or not replacing a debt with an asset
6. Principle of prudence
7. Principle of continuity
8. Principle of periodicity
9. Principle of full disclosure
10. Principle of utmost good faith

The major software metric associations, such as the International Function Point Users Group (IFPUG) and the Common Software Measurement International Consortium (COSMIC), should both be participating in establishing common financial principles for measuring software costs, including cost of quality and technical debt. However, neither group has done much outside of basic sizing of applications. Financial reporting is still ambiguous for the software industry as a whole.

Interestingly, the countries of Brazil and South Korea, which require function point metrics for government software contracts, appear to be somewhat ahead of the United States and Europe in melding financial accounting standards with software projects. Even in Brazil and South Korea, costs of quality and technical debt remain ambiguous.

Many companies are trying to use technical debt because it is an intriguing metaphor that appeals to CFOs and CEOs. However, without some form of SIFRS or standardized accounting principles, every company in every country is likely to use technical debt with random rules that would not allow cross-country, cross-company, or cross-project comparisons.
Harmful Practices to be Avoided
Some of the observations of harmful practices stem from lawsuits where the author has worked as an expert witness. Discovery documents and depositions reveal quality flaws that are not ordinarily visible or accessible to standard measurements.

In every case where poor quality was alleged by the plaintiff and proven in court, there was evidence that defect prevention was lax, pre-test defect removal such as inspections and static analysis was bypassed, and testing was either perfunctory or truncated to meet arbitrary schedule targets.

These poor practices were unfortunate because a synergistic combination of defect prevention, pre-test defect removal, and formal testing leads to short schedules, low costs, and high quality at the same time.

The most severe forms of schedule slips are due to starting testing with excessive numbers of latent defects, which stretch out testing intervals by several hundred percent compared to original plans. Harmful and dangerous practices to be avoided are:

1. Bypassing pre-test inspections
2. Bypassing static analysis
3. Testing by untrained, uncertified amateurs
4. Truncating testing for arbitrary reasons of schedule
5. The "good enough" quality fallacy
6. Using "lines of code" for data normalization (professional malpractice)
7. Using "cost per defect" for data normalization (professional malpractice)
8. Failure to measure bugs at all
9. Failure to measure bugs before release
10. Failure to measure defect removal efficiency (DRE)
11. Error-prone modules (EPM) with high defect densities
12. High cyclomatic complexity of critical modules
13. Low test coverage of critical modules
14. Bad-fix injections, or new bugs in bug repairs themselves
15. Outsource contracts that do not include quality criteria and DRE
16. Duplicate test cases that add costs but not test thoroughness
17. Defective test cases with bugs of their own

It is an unfortunate fact that poor measurement practices, failure to use effective quality predictions before starting key projects, and bypassing defect prevention and pre-test defect removal methods have been endemic problems of the software industry for more than 40 years.

Poor software quality is like the medical condition of whooping cough. That condition can be prevented via vaccination and, in today's world, treated effectively.

Poor software quality can be eliminated by the "vaccination" of early estimation and effective defect prevention. Pre-test defect removal methods such as inspections and static analysis are effective therapies. Poor software quality is a completely treatable and curable condition.

It is technically possible to lower defect potentials from around 5.00 per function point to below 2.00 per function point. It is also technically possible to raise defect removal efficiency (DRE) from today's average of about 85% to at least 95%. These changes would also shorten schedules and reduce costs.
Illustrating Software Defect Potentials and Defect Removal Efficiency (DRE)
Figure 1 shows overall software industry results in terms of two dimensions. The vertical dimension shows defect potentials, or the probable total number of bugs that will occur in requirements, design, code, documents, and bad fixes.

Note that large systems have much higher defect potentials than small applications. It is also harder to remove defects from large systems.

Figure 1: U.S. Ranges of Software Defect Potentials and Defect Removal Efficiency (DRE)

Figure 2 shows the relationship between software methodologies and software defect potentials and defect removal:

Figure 2: Methodologies and Software Defect Potentials and Removal

Note that "quality strong" methods have fewer defects to remove, and remove a much higher percentage of software defects, than the "quality weak" methods.

Figure 3 shows the relationship between various forms of defect removal and effective quality results:

Figure 3: Software Defect Removal Methods and Removal Efficiency

Note that testing by untrained amateur developers is neither efficient in finding bugs nor cost effective. A synergistic combination of defect prevention, pre-test defect removal, and formal testing gives the best quality results at the lowest costs and the shortest schedules.

These three illustrations show basic facts about software defects and defect removal efficiency:

- Defect volumes increase with application size.
- Quality-strong methods can reduce defect potentials.
- Synergistic combinations of defect prevention, pre-test removal, and testing are needed.

Achieving a high level of software quality is the end product of an entire chain of methods that starts with 1) defect measurements, 2) defect estimation before starting projects, 3) careful defect prevention, and 4) pre-test inspections and static analysis, and ends with 5) formal testing using mathematically designed test cases.

All five links of this chain are needed to ensure optimal software quality. Omitting any of the links will lead to poorer quality, higher costs, and longer schedules than including all five links.
Quantifying a Best-Case Scenario for Defect Removal Efficiency
To illustrate the principles of optimal defect prevention, pre-test removal, and test defect removal, table 1 shows sample outputs from Software Risk Master (SRM) for a best-case scenario. This scenario assumes 1,000 function points, a top-gun team, CMMI level 5, a hybrid methodology, the Objective-C programming language, and a monthly burdened compensation rate of $10,000:
Table 1: Best-Case Scenario OUTPUT DATA

    Requirements defect potential          134
    Design defect potential                561
    Code defect potential                  887
    Document defect potential              135
    Total Defect Potential               1,717
    Per function point                    1.72
    Per KLOC                             32.20

    Defect Prevention    Efficiency   Remainder   Bad Fixes      Costs
    JAD                      27%         1,262         5        $78,092
    QFD                      30%           888         4        $35,633
    Prototype                20%           713         2        $12,045
    Models                   68%           229         5        $42,684
    Subtotal                 86%           234        15       $127,419

    Pre-Test Removal     Efficiency   Remainder   Bad Fixes      Costs
    Desk check               27%           171         2        $13,225
    Static analysis          55%            78         1         $7,873
    Inspections              93%             5         0        $73,215
    Subtotal                 98%             6         3        $94,835

    Test Removal         Efficiency   Remainder   Bad Fixes      Costs
    Unit                     37%             4         0        $22,350
    Function                 35%             2         0        $35,835
    Regression               14%             2         0         $9,528
    Component                32%             1         0        $52,204
    Performance              14%             1         0        $33,366
    System                   36%             1         0        $63,747
    Acceptance               17%             1         0         $9,225
    Subtotal                 72%             1         0       $283,845

    Costs
    PRE-RELEASE COSTS                      1,734       3       $506,099
    POST-RELEASE REPAIRS (TECHNICAL DEBT)      1       0           $658
    MAINTENANCE OVERHEAD                                         $46,545
    COST OF QUALITY (COQ)                                       $553,302

    Defects delivered                          1
    High severity                              0
    Security flaws                             0
    High severity %                       11.58%
    Delivered per FP                       0.001
    High severity per FP                   0.000
    Security flaws per FP                  0.000
    Delivered per KLOC                     0.014
    High severity per KLOC                 0.002
    Security flaws per KLOC                0.001
    Cumulative Removal Efficiency         99.96%
This scenario utilizes a sophisticated combination of defect prevention and pre-test defect removal, as well as formal testing by certified test personnel. Note that cumulative DRE is 99.96%, which is about as good as has ever been achieved.
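The stage-by-stage defect flow behind tables 1 and 2 can be approximated with a simple pipeline. The efficiencies below are those of the best-case table; the uniform 2% bad-fix rate is an illustrative assumption rather than a value taken from Software Risk Master.

    # Sketch of the defect-flow arithmetic: each stage removes
    # (efficiency x remaining) defects and injects a few bad fixes.

    def run_pipeline(potential, stages, bad_fix_rate=0.02):
        remaining = potential
        for name, eff in stages:
            removed = remaining * eff
            remaining = remaining - removed + removed * bad_fix_rate
            print(f"{name:<16} {eff:>4.0%}  remaining {remaining:7.0f}")
        return remaining

    best_case = [
        ("JAD", 0.27), ("QFD", 0.30), ("Prototype", 0.20), ("Models", 0.68),
        ("Desk check", 0.27), ("Static analysis", 0.55), ("Inspections", 0.93),
        ("Unit", 0.37), ("Function", 0.35), ("Regression", 0.14),
        ("Component", 0.32), ("Performance", 0.14), ("System", 0.36),
        ("Acceptance", 0.17),
    ]

    delivered = run_pipeline(1717, best_case)
    print(f"Cumulative DRE: {1 - delivered / 1717:.2%}")  # ~99.9%, close to
                                                          # the 99.96% in table 1

Dropping the prevention and pre-test stages from the list and rerunning the same sketch reproduces the worst-case pattern: testing alone cannot recover from a large pool of latent defects.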
Quantifying a Worst-Case Scenario for Defect Removal Efficiency
To illustrate the principles of inadequate defect prevention, pre-test removal, and test defect removal, table 2 shows sample outputs from Software Risk Master (SRM) for a worst-case scenario. This scenario assumes 1,000 function points, a novice team, CMMI level 1, the waterfall methodology, the Java programming language, and a monthly burdened compensation rate of $10,000:
Table 2: Worst-Case Scenario OUTPUT DATA

    Requirements defect potential          327
    Design defect potential                584
    Code defect potential                1,109
    Document defect potential              140
    Total Defect Potential               2,160
    Per function point                    2.16
    Per KLOC                             40.50

    Defect Prevention         Efficiency   Remainder   Bad Fixes      Costs
    JAD - not used                 0%         2,158         0            $0
    QFD - not used                 0%         2,156         0            $0
    Prototype                     20%         1,725        22       $12,045
    Models - not used              0%         1,744         0            $0
    Subtotal                      19%         1,743        21       $12,045

    Pre-Test Removal          Efficiency   Remainder   Bad Fixes      Costs
    Desk check                    23%         1,342        67       $15,234
    Static analysis - not used     0%         1,408         0            $0
    Inspections - not used         0%         1,407         0            $0
    Subtotal                      19%         1,407        67       $15,234

    Test Removal              Efficiency   Remainder   Bad Fixes      Costs
    Unit                          28%         1,013        51       $99,512
    Function                      36%           734        73       $34,955
    Regression                    10%           726        36       $93,044
    Component                     28%           549        55       $51,412
    Performance                    6%           568        28       $30,005
    System                        32%           405        41      $102,354
    Acceptance                    13%           388        19       $29,098
    Subtotal                      72%           388       304      $455,651

    Costs
    PRE-RELEASE COSTS                         2,144       371      $532,470
    POST-RELEASE REPAIRS (TECHNICAL DEBT)       388        19      $506,248
    MAINTENANCE OVERHEAD                                           $298,667
    COST OF QUALITY (COQ)                                        $1,337,385

    Defects delivered                           407
    High severity                               103
    Security flaws                               46
    High severity %                          25.25%
    Delivered per FP                          0.407
    High severity per FP                      0.103
    Security flaws per FP                     0.046
    Delivered per KLOC                        7.639
    High severity per KLOC                    1.929
    Security flaws per KLOC                   0.868
    Cumulative Removal Efficiency            81.14%
Note that with the worst-case scenario, defect prevention was sparse and pre-test defect removal omitted the two powerful methods of inspections and static analysis. Cumulative DRE was only 81.14%, which is below average and below minimum acceptable quality levels. If this had been an outsource project, litigation would probably have occurred.

It is interesting that the best-case and worst-case scenarios both used exactly the same testing stages. With the best-case scenario, test DRE was 82%, while the test DRE for the worst-case scenario was only 72%. The bottom line is that testing needs the support of good defect prevention and pre-test defect removal.

Note the major differences in costs of quality (COQ) between the two scenarios. The best-case COQ was only $553,302, while the COQ for the worst-case scenario was more than double, or $1,337,385.

The technical debt differences are even more striking. The best-case scenario had only 1 delivered defect and a technical debt of only $658. For the worst-case scenario there were 388 delivered defects and repair costs of $506,248.

For software, not only is quality free, but it leads to lower costs and shorter schedules at the same time.
Summary and Conclusions on Software Quality
The software industry spends more money on finding and fixing bugs than on any other known cost driver. This should not be the case. A synergistic combination of defect prevention, pre-test defect removal, and formal testing can lower software defect removal costs by more than 50% compared to 2012 averages. These same synergistic combinations can raise defect removal efficiency (DRE) from the current average of about 85% to more than 95%.

Any company or government group that averages below 95% in cumulative defect removal efficiency (DRE) is not adequate in software quality methods and needs immediate improvements.

Any company or government group that does not measure DRE, and does not know how efficient it is in finding software bugs prior to release, is in urgent need of remedial quality improvements.

When companies that do not measure DRE are studied by the author during on-site benchmarks, they are almost always below 85% in DRE and usually lack adequate software quality methodologies. Inadequate defect prevention and inadequate pre-test defect removal are strongly correlated with failure to measure defect removal efficiency.

Phil Crosby, the vice president of quality for ITT, became famous for the aphorism "quality is free." For software, quality is not only free but leads to shorter development schedules, lower development costs, and greatly reduced costs for maintenance and total cost of ownership (TCO).
References and Readings on Software Quality
Beck, Kent; Test-Driven Development; Addison Wesley, Boston, MA; 2002; ISBN 10: 0321146530; 240 pages.

Black, Rex; Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing; Wiley; 2009; ISBN-10: 0470404159; 672 pages.

Chelf, Ben and Jetley, Raoul; "Diagnosing Medical Device Software Defects Using Static Analysis"; Coverity Technical Report, San Francisco, CA; 2008.

Chess, Brian and West, Jacob; Secure Programming with Static Analysis; Addison Wesley, Boston, MA; 2007; ISBN 13: 978-0321424778; 624 pages.

Cohen, Lou; Quality Function Deployment - How to Make QFD Work for You; Prentice Hall, Upper Saddle River, NJ; 1995; ISBN 10: 0201633302; 368 pages.

Crosby, Philip B.; Quality is Free; New American Library, Mentor Books, New York, NY; 1979; 270 pages.

Everett, Gerald D. and McLeod, Raymond; Software Testing; John Wiley & Sons, Hoboken, NJ; 2007; ISBN 978-0-471-79371-7; 261 pages.

Gack, Gary; Managing the Black Hole: The Executive's Guide to Software Project Risk; Business Expert Publishing, Thomson, GA; 2010; ISBN10: 1-935602-01-9.

Gack, Gary; Applying Six Sigma to Software Implementation Projects; http://software.isixsigma.com/library/content/c040915b.asp.

Gilb, Tom and Graham, Dorothy; Software Inspections; Addison Wesley, Reading, MA; 1993; ISBN 10: 0201631814.

Hallowell, David L.; Six Sigma Software Metrics, Part 1; http://software.isixsigma.com/library/content/03910a.asp.

International Organization for Standards; ISO 9000 / ISO 14000; http://www.iso.org/iso/en/iso9000-14000/index.html.

Jones, Capers and Bonsignour, Olivier; The Economics of Software Quality; Addison Wesley, Boston, MA; 2011; ISBN 978-0-13-258220-9; 587 pages.

Jones, Capers; Software Engineering Best Practices; McGraw Hill, New York; 2010; ISBN 978-0-07-162161-8; 660 pages.

Jones, Capers; "Measuring Programming Quality and Productivity"; IBM Systems Journal; Vol. 17, No. 1; 1978; pp. 39-63.

Jones, Capers; Programming Productivity - Issues for the Eighties; IEEE Computer Society Press, Los Alamitos, CA; First edition 1981, Second edition 1986; ISBN 0-8186-0681-9; IEEE Computer Society Catalog 681; 489 pages.

Jones, Capers; "A Ten-Year Retrospective of the ITT Programming Technology Center"; Software Productivity Research, Burlington, MA; 1988.

Jones, Capers; Applied Software Measurement, 3rd edition; McGraw Hill; 2008; ISBN 978-0-07-150244-3; 662 pages.

Jones, Capers; Critical Problems in Software Measurement; Information Systems Management Group; 1993; ISBN 1-56909-000-9; 195 pages.

Jones, Capers; Software Productivity and Quality Today - The Worldwide Perspective; Information Systems Management Group; 1993; ISBN 1-56909-001-7; 200 pages.

Jones, Capers; Assessment and Control of Software Risks; Prentice Hall; 1994; ISBN 0-13-741406-4; 711 pages.

Jones, Capers; New Directions in Software Management; Information Systems Management Group; ISBN 1-56909-009-2; 150 pages.

Jones, Capers; Patterns of Software System Failure and Success; International Thomson Computer Press, Boston, MA; December 1995; ISBN 1-850-32804-8; 292 pages.

Jones, Capers; Software Quality - Analysis and Guidelines for Success; International Thomson Computer Press, Boston, MA; 1997; ISBN 1-85032-876-6; 492 pages.

Jones, Capers; Estimating Software Costs, 2nd edition; McGraw Hill, New York; 2007; 700 pages.

Jones, Capers; "The Economics of Object-Oriented Software"; SPR Technical Report; Software Productivity Research, Burlington, MA; April 1997; 22 pages.

Jones, Capers; "Becoming Best in Class"; SPR Technical Report; Software Productivity Research, Burlington, MA; January 1998; 40 pages.

Jones, Capers; "Software Project Management Practices: Failure Versus Success"; Crosstalk, October 2004.

Jones, Capers; "Software Estimating Methods for Large Projects"; Crosstalk, April 2005.

Kan, Stephen H.; Metrics and Models in Software Quality Engineering, 2nd edition; Addison Wesley Longman, Boston, MA; 2003; ISBN 0-201-72915-6; 528 pages.

Land, Susan K; Smith, Douglas B; Walz, John Z; Practical Support for Lean Six Sigma Software Process Definition: Using IEEE Software Engineering Standards; Wiley-Blackwell; 2008; ISBN 10: 0470170808; 312 pages.

Mosley, Daniel J.; The Handbook of MIS Application Software Testing; Yourdon Press, Prentice Hall; Englewood Cliffs, NJ; 1993; ISBN 0-13-907007-9; 354 pages.

Myers, Glenford; The Art of Software Testing; John Wiley & Sons, New York; 1979; ISBN 0-471-04328-1; 177 pages.

Nandyal, Raghav; Making Sense of Software Quality Assurance; Tata McGraw Hill Publishing, New Delhi, India; 2007; ISBN 0-07-063378-9; 350 pages.

Radice, Ronald A.; High Quality Low Cost Software Inspections; Paradoxicon Publishing, Andover, MA; 2002; ISBN 0-9645913-1-6; 479 pages.

Royce, Walker E.; Software Project Management: A Unified Framework; Addison Wesley Longman, Reading, MA; 1998; ISBN 0-201-30958-0.

Wiegers, Karl E.; Peer Reviews in Software - A Practical Guide; Addison Wesley Longman, Boston, MA; 2002; ISBN 0-201-73485-0; 232 pages.