
QA in Hell!

Or: How to survive a career in software QA and still care about quality

By Niall Lynch

Foreword
I had not thought much about a career in software development until I arrived early one morning at the vocational school where I was teaching to find the body of the admissions director hanging from the ceiling fan in his office. There are certain moments in your life when you feel that the universe is not only trying to tell you something, but that it is shouting it at you through a bullhorn. That morning was one of those times for me. "What could possibly be worse than this?" I asked myself rhetorically, and immediately grabbed for the help wanted section of that day's Chicago Sun-Times that the recently deceased admissions director had helpfully left on his desk. I did not know it at the time, but I was about to find out what could be worse than finding a dead body swinging from the rafters of your workplace: a career in software quality assurance. Which is not to say that software development doesn't have its charms and perks, its good times and its moments of exhilaration. It's just that software development taught me that a dead body can be taken away by the proper authorities, never again to reappear in your life. It is a problem, however ghastly, that can be solved. The problem of software quality assurance, the problem I was paid to solve day in and day out for 17 years, on the other hand, is one that has no such solution. Indeed, as I was to discover, it is a problem that only becomes more intractable, more terrifying, the more successful you are at addressing it. Fortunately, on that cold and tragic morning, these insights lay far, far in my future, and all that beckoned to me then was the bright, glowing promise of a
career in the squeaky clean world of software development, a world peopled by whiz kids who were, presumably, not yet suicidal. At least that's what the ad in the Sun-Times told me. It seemed there was a software company that needed a manager of software quality assurance. I applied, interviewed, and within a month was newly installed as the Manager of Software Quality Assurance. That I was able to get such a job with zero experience in management, software development, or quality assurance did strike me as odd at the time, but only odd in the sense of "lucky" or "charmed" or "someone up there likes me." Little did I realize then that my ability to get a job in QA with minimal qualifications and experience was not a lucky break, but an omen. It was my first inkling that perhaps the software world was quite different from how I, and most of the people I knew, imagined it to be. The world of software is to this generation what the space program was to people in the 1960s. That is to say, a source of endless wonder, awe and optimism. Something shiny and ultramodern that defines how we think of ourselves and our world. A phenomenon that tells us that every year our world is going to get better, cooler, and more interesting for all involved. If you could go back in time to 1965 and tell people what was in store for the space program (the space shuttle crashes; the collapse into ineptitude and irrelevance; and, worst of all, the almost total indifference people would come to feel for the whole enterprise) no one would believe you. If you told them we would reach the moon first, but that, ultimately, it would seem like a pointless exercise, the people of 1965 would only get mad at you, and accuse you of being some kind of commie nut who was definitely not with the program.
By Niall Lynch verlandosta@yahoo.com 310-829-2044

Adventures In Quality Assurance - page 3 of 86

This book is, in a sense, the equivalent of that theoretical conversation back in 1965. Software books are invariably upbeat. They exude an aura of arcane authority, the promise of admission into secrets too deep for ordinary mortals. They relentlessly propagate the notion that the newest programs, the newest languages, the newest certifications are the new gateways to paradise, and that paradise is coded in Java. This is not that kind of book. I mention that now so that you will not be disappointed. As much as I would like to write such a book, I cannot. Moreover, it would be irresponsible for me to do so. Rather, this book is the fruit of many years of experience and reflection on the software industry as it actually is, on software as it is actually created. It is written from the point of view of quality assurance, but this is a perspective from which we can observe the decay of the software world in a nutshell, just as we can watch film of an exploding space shuttle and see in its catastrophe the state of our space program writ large. I write as someone who has seen it all, done it all, and has hundreds of T-shirts to prove it. I also write this book at a time when the once-unwavering luster of the software world is finally starting to fade. Software is no longer viewed as something out of science fiction that we can hardly believe we have in our hands. It is, rather, now seen as a commodity, something dull and uninteresting, no more capable of exciting our imaginations than the vacuum tubes in grandpa's old radio. More than that, it has become a source of creeping anxiety. People now finally understand how great a risk we take as a society when we put more and more of our lives at the mercy of software. People are now starting to understand that all the technical wizardry and advancement that the software industry keeps
trumpeting over and over again in its press releases doesn't mean software will actually work for us, and that it is just as likely to work against us. Optimism is being replaced by frustration, and wonder by suspicion. This is not an accidental development. It is in fact the inevitable outcome of how the software industry does its work, though this is something it cannot bring itself to acknowledge. The industry is no more capable of seeing itself swinging from a ceiling fan than my friend the admissions director could see himself doing so in his youth. Yet, as I discovered that morning, it is all too possible a fate. I offer this book as a kind of intervention, an attempt to prevent such a horrible outcome. I hope it succeeds.


Introduction
The British writer Horace Walpole once said, famously, that "the world is a tragedy to those who feel, but a comedy to those who think." With some minor adjustments, this aphorism can be made to express one of the fundamental truths about the software industry: software is a comedy to those who make it, and a tragedy for those who use it. The tragedy comes from the chronically poor quality software that consumers have no choice but to purchase and use. The comedy comes from how people in the software industry view product quality. Software quality is a bit like the weather. Everyone likes to talk about it, but it seems no one can do anything about it. This is not an accident, I assure you. It is rather a reality to which the software industry is perfectly conformed. So perfectly, in fact, that even the critique of this comedy from outside the industry has been completely anticipated, co-opted and neutralized. Any questioning of the status quo in re product quality is instantly transformed into questions of software process, and therefore cunningly misdirected into areas far from where the real problems lie. For this reason I am not presenting yet another book on process. Not because those books are not often quite useful, which they are. Nor because one can completely separate discussion of process from discussion of software quality, because at the end of the day you can't. Rather, because I have come to believe over the course of my 17-year career in software quality assurance that the obsession with process is being used as a kind of distraction from the deeper problems of software development. Everyone who works in product
development knows what these deeper problems are, and most also know how profoundly toxic they are to product quality, but it is easier and far safer politically to pretend that process is the real problem. In this respect, software process has unwittingly become an enabler of these deeper problems. Because of this, I believe that before I can offer you a better way of doing software quality assurance, I must first expose and anatomize how the institutional realities of commercial software development conspire, knowingly and willingly, to sabotage software quality in a systematic fashion. Absent this exposé, there can be no real appreciation of how my proposals address what is really wrong with software quality assurance. This is why the book I have written falls into two very distinct parts. The first part is an essay on institutional anthropology, an analysis of how the entire corporate culture of software development creates the problem of software quality, and ensures its persistence. To this end, I will encompass in my analysis factors that often are ignored in strict discussions of software process, such as the role of the stock market, the budgeting and scheduling process, and, above all, the system of rewards and punishments that is actually in play in most software development environments. Unless one can place the problem of software quality firmly within the context of corporate politics and culture, its infernal persistence can never be fully understood. This essay will point many fingers, so fasten your seat belts: it's going to be a bumpy ride. The second part of the book consists of a presentation of an alternative way of doing software quality assurance, and is more what readers of books on software quality assurance are used to. However, even this purely technical
section will rely upon and refer back to the analysis presented in the first part. For the idea of quality assurance I am presenting in the second half is itself a way of dealing with the realities I outline in the first. It is not a purely ideal view of how to do software quality assurance, because I believe such ideal proposals are doomed to failure. Consequently, though my concrete proposals for how to understand, organize and execute quality assurance may at first seem alien or counterintuitive, bear in mind that they are meant both to deflect and to co-exist with larger institutional realities over which quality assurance has no control, and probably never will. The monster I describe in the first part has not disappeared in the second, though one hopes it is a little better mannered by the end. Lastly, this book, in its entirety, is a book about thinking. I will not be offering you "five easy steps to best-in-class product quality." I will not provide you with templates you can rip out of the book and distribute to your beleaguered QA staff. My book is not an excuse not to think deeply about the problem of product quality; it is not something to be used as a kind of talisman to wave in front of questions that are inherently messy and difficult to make them disappear. In short, this book will not be doing the job for you. Rather, I will be inviting you to think along with me, to unravel together the gigantic knot of product quality, and follow its threads wherever they may lead. If you are willing to join me in this intellectual adventure, I believe you have much to gain and nothing to lose. However, if you would rather have five bullet points you can e-mail to your managers, there are many other books better suited to meet that need. And many other managers, I suspect, as well.

Let us wait a minute so that all those who cannot benefit from this book have closed it, placed it back on the empty business class seat next to them, and gone on with their business. There. Now the only ones left are people like us. Let us begin together.


Part I: The Success of Failure


If It's Broke, Don't Fix It


This is a book about the nature of software quality assurance. As such, it is also necessarily a book about the failure of software quality assurance, a phenomenon of such long standing in the industry that even to note its existence is to fall immediately into cliché. That most software developed in the world is of poor overall quality is well-known. That this low level of quality has remained consistent over time, even as other aspects of the software industry have matured and thrown off the growing pains of their youth, is also well-known. It is a fact so well-known that it is no longer the cause of surprise or even of curiosity. Like death and taxes, poor quality software is one of life's few certainties. Of course, software process theorists and the odd industry luminary mount attacks every now and then on the problem, holding forth some new process or, even better, a new way of selling process, as the key to solving the problem. These attacks generate a flurry of interest for a year or two and then dissipate, to be overtaken in their turn by newer formulations of the same proposed solutions. And so the cycle continues. Meanwhile, software quality is not improved in any persistent fashion. The relationship between the software process improvement movement and the software industry can be illustrated by a simple parable. Imagine, if you will, that you have a very good friend who is a terrible driver. He is always getting into accidents. He is always getting citations for moving violations. Imagine as well that your friend realizes there is a problem. He sees the points on his license accumulating. He sees his car insurance bill skyrocketing. He knows that the cost of avoiding accidents and traffic violations is so very much
less than trying to fix them after the fact. Every time he gets a ticket or drives into someone's front yard, he promises that he will do better. He is entirely committed to becoming a skilled driver. One day he comes to you for advice about how to solve his problem. This makes you happy, as you have seen both the danger his lack of driving skills poses to others and the constant anxiety it causes him. So when he comes to you to discuss a solution, you are ready to give him your full support. Beforehand, you make a list of driving schools you think will be perfect for him. You buy an excellent instructional DVD for him on how to drive safely. You even buy him a St. Christopher statue to mount on the dash of his car, just to be sure all bases are covered. When he arrives, you happily present these aids to him. Imagine your surprise when he takes one look at your entirely practical, solution-focused gifts, and sweeps them aside. "Look," your friend says, a little impatiently. "My problem isn't that I don't know how to drive. My problem is that I have a crappy car. The engine is too weak to get me out of other drivers' way. The brakes aren't that strong. The suspension doesn't keep the car under control. There's no navigation system, which is why I'm always going the wrong way down one-way streets. And there are no cup holders, so I have to hold my coffee with one hand while steering with the other. I was hoping you could give me some advice on what car I should buy to improve my driving. I was thinking maybe a Porsche or a Ferrari." Your heart would certainly sink if you heard this from your friend. You would also, as his friend, try to help him understand that the problem isn't the car, but the driver. That the deficit is not one of engines and brakes and cup holders,
but of driving skill itself, and if your friend doesn't understand that, he will never get better. But your friend won't listen to you. He is seduced by fancy advertisements, by specialist magazines that tell him he needs a new car, by the cachet that attaches to having a faster, more powerful car. Most of all, he is seduced by the simplicity of the solution. He doesn't need to change. He doesn't need to invest time in improving himself. All he needs to do is buy something off the shelf that will solve all his driving problems. Our driver friend is the software industry in relation to product quality. The only difference is that, unlike in the parable above, few people in the industry seem to understand the fundamental mistake our friend is making. Or do they? One of the main purposes of this book is to attempt to explain why this is so. Since I am lazy and you are impatient, I will begin the book by telling you the answer: software quality remains at consistently low levels because it is in almost everyone's interest in the software industry for that situation to exist. It really is that simple. Every theoretical failing of quality process and quality outcome can be traced back to that simple fact. How could it be otherwise? After all, the massive cost penalties of poor quality are well documented, as is the endless agony of consumers who must discover and live with it. Everybody knows this. Yet the phenomenon continues unabated, undisturbed, eerily resistant to all of the well-meaning attempts to eradicate it. If you walk into your local bookstore and browse the Software/Computing section, you will see hundreds of books on how to use popular software programs. You will see hundreds more books on popular computing languages. You will see still hundreds more books on how to prepare for certification tests in
hardware and software. But you will not see any books on software quality assurance. You will find no guidebooks for how to ace your qualification test for "Software Quality Assurance Tech Level IV," because no such certifications exist. Does this strike you as odd? It should. For the composition of your average software book section faithfully reflects the priorities of the software industry itself, and the key career paths that exist within it. What we learn from perusing such sections is that for all the rhetoric about the importance of software quality, almost no attention is paid to the problem within the industry. Quality assurance is not treated as, and indeed does not exist as, a technical specialty on a par with software engineering, or even with marketing. Why is this so? Again, if the software industry's claim to be passionately concerned with quality were true, how could such a state of affairs be allowed to exist? And who could possibly benefit from it? It is certainly of some relevance to my point that many, many people in the software industry have become millionaires even though they continue to produce low quality product. It is also certainly of some relevance that the people who write all of this poor quality software are nevertheless lionized as wizards of high-tech, as untouchable masters of an arcane art far beyond the ken of mortals, and generally are treated as among our society's best and brightest. I submit that none of this could have happened unless low product quality was, at the very least, irrelevant to being successful in the software business. If that conclusion is true, then it must also follow that the industry as such, despite its protestations to the contrary, must benefit from low product quality.


I realize this will sound like heresy to many. Yet it is nevertheless the truth. Admittedly, it is a truth that can be difficult to discern, particularly if all you do is look at the quality assurance function itself, which is what most people do. But this immediate narrowing of focus, this reflexive direction of our attention to one, and only one, aspect of the software project process is itself a massive act of misdirection. It is, in fact, one of the main ways that the software development industry preempts meaningful analysis of how it benefits from poor product quality. That is why I am going to begin my discussion of software quality assurance by first analyzing the entire institutional context in which it operates, something that is rarely done. More sophisticated process theories will attempt to place the quality assurance function within the context of the product delivery organization, and its other functions, as a whole, which is admirable. But few if any attempt to extend their analysis further into the realms of corporate finance, politics and subterfuge. Lacking this broader perspective, process theorists can often identify what is being done wrong, at the level of the product delivery organization, without being able to identify the real reason why it is being done wrong. This is why many software process improvement theories default to a Socratic view of the problem. Which is to say, people are doing the wrong things because they are ignorant that they are wrong.[1] All that needs to happen is for people to be taught the error of their ways, to have their false assumptions exposed, and they will want to do the right thing. Unfortunately, this assumption
[1] Though of course it's always easier to tell a paying customer that they are just misled by the ignorance of others instead of telling them they are the real problem. Socrates, it seems, also had keen insights into the marketing of software process consulting.

is itself entirely false. Though it is one that is, perversely, encouraged by the software industry itself. Because if the cause isn't ignorance, then it can only be... well, you see my point. What I am going to do is to provide an analytical overview of what I will call, for the sake of brevity, the "Standard System." By this term I have in mind not only a description of how most software development organizations operate, but why they operate the way they do. It is my contention that only after we understand, in detail, the workings of the Standard System will we be in a position to understand why software quality assurance fails, and how it can be made to succeed. In particular, we need first to identify the ways in which the Standard System, for all its manifest failings when judged from a theoretical perspective, nevertheless generates massive benefits for those who participate in it. To these questions we now turn.


I'm Failing As Fast As I Can


Most software development projects follow a waterfall development model, at least with respect to their scheduling milestones, even if they think they are doing something different. That is to say, most software development projects are structured so that specific phases of work are the responsibility of specific functional specialties, even if all functional groups are responsible for some work within each phase. The requirements definition phase is normally the responsibility of Product Marketing or Product Management. The software coding phase is the responsibility of Engineering. The testing phase is the responsibility of QA. And so forth. Because this is so, most software project processes require the responsible function to achieve some specific milestone before they can transfer project responsibility to the group responsible for the next phase. After that milestone has been met, responsibility for keeping the project on schedule is transferred to another functional specialty. Once that has been achieved, in the eyes of upper management, at least, the fault for any project delays now lies with a different group. It is, or should be, obvious what an enormous temptation this system injects into the project process. It gives each functional specialty an enormous incentive to declare themselves done with their phase as quickly as possible, and therefore remove themselves from the hot seat of blame, even if all the work they actually need to accomplish in that phase is not really completed. The sooner Product Management can declare requirements "done," the sooner
they can blame Engineering for any project delays. The sooner Engineering can declare "code complete," the sooner they can blame QA for any project delays. Whereas QA, being the caboose on this bullet train of shifting responsibilities, has no one to hand things off to, except the customer. Nevertheless, the pressure on QA to declare itself done is just as great as for any other function, since it will just as surely be blamed for any project delays that occur during its phase. It is the customer, then, who inherits the often disastrous results of this institutional adaptation. You would think that this would, ultimately, cause blame to rebound back onto the group that delivered the software, but it rarely does. Or rather, it rarely does effectively. The beauty of this maladaptation is that it leaves most functions with a perfect way of deflecting blame. They simply say, "I got all my deliverables done on time, and there were no problems with them, so blame must lie with the next group in the chain." You can see where this is headed. The last link in the chain is the one with the least ability to offer a plausible excuse, which is why QA generally takes the blame. Having declared itself done prematurely, QA leaves many bugs undetected in the software, bugs which customers promptly find. This allows Engineering to say, "If QA had found these bugs before the product shipped, we would have fixed them." Note how this excuse carefully avoids the question of how those bugs got into the code to begin with, which, again, is the beauty of the system. The searchlight shines only upon the last prisoner climbing out of the escape tunnel, not upon all those who fled before.


This basic orientation of the Standard System has one very important overall effect on the software development process. It creates a process where there are no meaningful exit or entrance criteria for any of the early project phases. That is to say, there is nothing stopping people from just beginning.

Product Managers just sit down and start writing requirements. Engineers just start writing code. QA just starts running tests. There are few, if any, true mechanisms of validation in place that must be satisfied before work can begin on any function of a project, in any phase of the project. In essence, the Standard System habitually end-loads project risk, pushing it all toward the end of the project. This explains why software projects always seem to go so well at the beginning, and then jump off the rails in the final stretch. It explains not only why project delays often come as a complete surprise, but also why knowledge of the delay arrives only at the very last minute. In this fashion, the Standard System undermines rational risk mitigation, since such mitigation would force each function to spend more time validating its work, and thus assume a much greater risk of being held accountable for schedule delay during its phase. To put the point a different way, the Standard System optimizes for political risk mitigation over project risk mitigation. If this sounds like a very perverse bargain, welcome to the world of commercial software development. There are two fundamental truths that emerge from the analysis above, truths which must be openly acknowledged before any group can understand its quality failings.


First, the process described above is only a maladaptation from the point of view of abstract process theory. From the point of view of internal politics, career enhancement, and CYA effectiveness, this process is a positive adaptation that delivers consistently positive results. Except, of course, for QA. Second, for the process described above to work, QA must fail. Let me say that again: QA must fail. Because if it doesn't, then it cannot provide cover for failings further up the product delivery food chain. So the persistent failure of QA is not an accident. It is not a result that persists because of people's ignorance of its causes. Rather, it is a necessary output of the system, the sine qua non of its surreptitious success. These truths help us understand one otherwise puzzling aspect of software development culture: the persistent weakness of SQA, even though it is a vital project function. Though commercial software development is at least four decades old, SQA remains only half-professionalized, and sometimes not professionalized at all. People don't get degrees in SQA. SQA is rarely, if ever, a powerful department or group within a software development organization. Salaries for SQA staff and management often don't begin to match the salaries paid to engineers and product managers. SQA is universally viewed as a lower level function, and the best SQA staff usually try to leave it as quickly as possible to secure a job in engineering or product management. How could this situation exist, and persist, for so long, even though product quality is acknowledged by everyone as the key to customer satisfaction and, therefore, customer dollars? It's really not that big of a mystery. One only has to note that SQA's very role is defined as that of catching the mistakes of everyone further up the project
food chain. They are there to inherit the results of all the tasks that have been left undone earlier in the project, to try to fix them in the QA phase, and thereby to take responsibility for them when they are not fixed. This cannot be a coincidence. Especially when we realize that a professionalized SQA function would wreck the system. If QA functioned the way everyone claims they want it to, it would uncover, for example, contradictory or inadequate requirements late in the process, necessitating a delay that could only be blamed upstream, on Product Management. It would uncover massive, systematic failings of the product architecture, failings for which only Engineering could take the blame. Its findings would necessitate massive rewriting of the code, which in turn would lead to significant schedule slip, the consequences of which could not be laid at QA's door. If we understand this, then the question, "Why is QA not professionalized?" answers itself. A professionalized QA would shine the spotlight upstream in the development process, exposing major shortfalls of fundamental expertise in groups far more powerful than QA. That cannot be allowed to happen. Moreover, the continual "failure"[2] of QA only gives other project functions a greater justification for dominating and controlling it. After all, QA is not doing its job correctly, which means it cannot be trusted with the kind of authority and autonomy granted other project functions. The circle of life is complete.

[2] I trust by this point the quotation marks around this word are self-explanatory.

Two questions naturally arise at this point. The first is: What's in it for QA? Why would QA put up with such a system, where they take all the blame for everyone else's mistakes? The answer is simply that good QA people usually don't. They leave QA as quickly as possible once they realize it's a game they cannot win. Some good QA people stay on, usually for personal reasons, but normally the QA function winds up with only the least committed, least well-trained QA staff in place. And for such people there is a perverse kind of protection in the system. Since they are never given true autonomy, they never have to take true responsibility. Sure, they get beaten up every now and then when a product blows up in the field, but the other departments are too dependent on a non-functioning QA group to allow that to go too far. Moreover, QA itself can point to its relative lack of authority as an all-purpose excuse. As I pointed out above, each failure of QA leads to a diminishment in what is expected of QA, a situation which plays into the hands of those QA staff who do not want the responsibility and true accountability that would come with real power. The result is, generally, a QA staff that is carefully selected by the system to thrive on its injustices, and to reinforce them for its own benefit.

The second question that arises is: Why on earth would executive management put up with such a system? After all, the endless stream of problems in the field produced by this system does nothing for their personal PR, causes endless headaches for them, and costs the company money and customer goodwill. Right? Well, not exactly. As with the other levels of the system, all is not quite as it seems at the executive level.

The first step in answering this question lies in understanding that there is usually a huge cultural gap between the product delivery level of most software companies (i.e., the groups that actually specify, write, test, and support the software, and their respective management hierarchies) and the executive level. Most commercial software companies are not run by people with engineering backgrounds. The winnowing process that selects executive staff tends to weed out engineers, and favors people with backgrounds in sales or finance.[3] This creates an executive culture that is mystified by the engineering process, and not a little intimidated by it. There is often a thinly-veiled hostility between the executive level and the engineers they pay so well, because the engineers are always reminding them (with all the tact and understatement for which engineers are justly famous) how ignorant they are of what the engineers know and do. As a result, executive staff often regard the product delivery process as a black box that they are afraid to open, lest, like a shaken can of soda, it wind up spraying them with problems, failures and questions about C compilers.

Consequently, executive management generally want to know nothing of the details of product delivery. That's what they pay the engineers to take care of. To interest themselves too closely in its functioning places them in a double peril: that of clashing with the engineering staff, and that of being tarred with its many failures. Neither is an appetizing prospect for most executive staff. These cultural peculiarities create a strong incentive for executive staff to let their product delivery groups run their own show with as little oversight as possible.

The important thing to understand is that this impulse only becomes stronger, not weaker, as quality disasters multiply in the field. In corporate life, disaster is a radioactive substance, to be approached only through layer upon layer of the proper shielding, which, in the corporate world, usually means subordinates. Preferably from groups other than one's own. The last thing you want to do is grasp it in your hand. Bizarre as it may sound, it is at the moment of greatest disaster that engineering groups generally have the most power over executive staff, and they know it. Indeed, we arrive here at one of the darkest truths of the Standard System: It is designed to produce such disasters, because they only enhance the power of those who are responsible for them.

When a quality meltdown occurs in the field, executive management only want it to go away as quickly as possible. Unfortunately, that is a miracle that they, by themselves, are incapable of working.[4] Only the product delivery staffs can get the bad PR monkey off their backs - the very same product delivery staffs that caused the problem in the first place. So crisis time tends to be the time when everyone pulls together, eschewing finger-pointing, blame and pink slips in the name of solidarity. Crisis time is also when engineering and QA whip themselves to new heights of selfless devotion, working mega-overtime, often for months on end.

In other words, the apparent failings of the Standard System produce a series of stellar opportunities for conspicuous self-sacrifice and heroism within the product delivery group. These in turn surround product delivery with a halo so bright that afterwards it seems petty and ungrateful to probe too deeply into why the crisis happened in the first place.

[3] Engineers usually lack the communication skills and business savvy to be compelling candidates for executive positions.

[4] Though it needs to be pointed out that this does not prevent them from taking credit for it after the fact.

Moreover, these periods of gladiatorial exertion on behalf of the company are major bonding events within the product delivery group itself. It's the people you've worked with till six in the morning, sustained only by coffee, Cheetos and superhuman stamina, that you will consider your most trusted comrades. It is these crisis events that software people tend to remember most intensely, and most fondly, in their later years. They become something very like a narcotic for software people, however much they may complain about them afterwards.

As incredible as it may seem, the heroism of product delivery during these crisis periods often generates concrete rewards for them. It is not at all unusual, once the smoke has cleared and the stock price has recovered its value, for product delivery staff to be given bonuses and public approbation in the form of awards. Indeed, it is often the people whose negligence contributed most directly to the disaster who are most greatly honored. This phenomenon is so consistent that it is difficult to avoid the conclusion that product delivery groups are happy to provoke crises in order to create opportunities to be noticed and rewarded.

Nevertheless, what prevents executive management from punishing product delivery for its failings after the crisis is past? This question unveils the final piece of diabolical brilliance inherent in the Standard System. The system continually generates crises in the field, crises that eat up massive amounts of precious new product development time. So once an in-field crisis is past, a crisis of new product development instantly takes its place. Failure to meet announced product delivery dates will only lead to uncomfortable questioning from large customers and the investment community, and, ultimately, the threat
of a lower stock valuation. There is no time to mount inquisitions, no time to re-engineer the engineering department, no time to exhaustively revamp product development. To do so would only push the company further behind, further into peril. This is the reason product development groups can fail again and again, and yet escape any kind of significant retribution from executive management. The software development system has immunized itself to such retribution, precisely by being so inefficient that it cannot be interrupted long enough to correct its structural failings without further risking valuable product delivery dates. These dates will, of course, be missed in their turn, but no one wants to admit that at the beginning, least of all executive management.

The strategic leveraging of perpetual crisis is also the way that many software development groups immunize themselves against process improvements. The system leaves no time for such things, because everyone must keep racing, racing, racing to put out the next fire, meet the next delivery date, make the current quarter's numbers. The system creates and sustains such a compelling urgency, such an irresistible momentum into its own future, that it is almost impossible to interrupt it long enough to change it. The system has booby-trapped itself, so that any attempt to replace it will cause the company to implode.

In this respect, the software industry is very much like someone saddled with huge amounts of consumer debt. They would love to save money, but they can't, because otherwise they can't make their debt payments. And the only way they can get through the month is to pile more debt onto the load they already
have. It is a self-perpetuating loop that quite effectively shuts off all the obvious escape routes. Many software engineering groups are like a credit card that can never be paid off, yet also can never be maxed out.

The only weapon executive management really has in its arsenal is the purchase of a new product delivery group through the acquisition of another software company. It is in fact the desperate need to cast off perpetually failing development groups that drives many software acquisitions, though this, of course, can never be openly acknowledged. At this point in the drama, we encounter the final, and crowning, irony. Executive management winds up purchasing another development group that is every bit as committed to the system we have described, a group that is every bit as adept at all the strategies we have already analyzed. Yet executive management desperately wants to believe otherwise, especially after spending countless millions of dollars on something they thought would solve their problem. This need leads in turn to the new development group being given free rein to reinstitute, and sometimes even improve upon, the dysfunctional functionality of the Standard System. Executive management is like the wife who divorces her alcoholic husband, only to wind up marrying a drug addict.

The remote, hands-off approach that executive management takes to product development has another manifestation that is a key contributor to the Standard System. As we have seen, diving too deeply into the details of product development erodes an executive's all-important plausible deniability, and so is to be avoided at all costs. Nevertheless, executives have to demonstrate that they are exercising some form of effective control over the projects that fall within
their portfolios. The solution that many executives seize upon is schedule oversight. They may not want to know anything about how things are getting done; they may not want to know anything about whether they are getting done; but they have an obsessive interest in when project milestones are met or not met. Schedule management is the one thing they are quite keen to know a lot about, since projects that are "on time" make them look good, whereas projects that are "late" make them look bad. I have put both those terms in quotation marks, since project milestones are seldom direct measurements of concrete progress. Rather, they are solely measurements of conformity to schedule. Their essentially self-referential quality is what, in turn, insulates executive management from having to get involved in the details of any developing disaster. Indeed, most product delivery managers learn very quickly how to manage projects so that they always appear to be "on time," even if in reality they are not progressing in any substantive sense, since project managers well know that conformity to schedule is the single metric they will be judged by in the eyes of executive management.

This is the real foundation, the true stimulus, of the phenomenon described at the beginning of this section, where the goal of each functional group is to declare themselves "done" as quickly as possible, so that blame for any schedule nonconformances will fall on the next group in the chain.

In my own career I have seen this principle at work. I have seen projects that shipped within schedule, yet which then produced major quality and customer satisfaction disasters in the field, singled out internally as success stories. I have seen projects that shipped only a few weeks beyond their
committed ship date, which generated high in-field quality and customer satisfaction results, nevertheless tagged internally as failures because they did not meet their original ship date. In this fashion, executive management reinforces every aspect of the Standard System. They, like every other function, have no real stake in product quality, since that is not what their own success is going to be judged on. This prioritization is communicated with crystal clarity to the troops, and they in turn have internalized it so completely that it operates as an institutional reflex, something that does not require discussion or comparison with other optimizations.

Again, you may find this a surprising reality. And, again, the antidote to disbelief is to understand how the system rewards conformance to its true priorities. You need first to understand what constitutes real power in a corporate hierarchy. When executives who are all formally at the same level, with the same title, want to compete with one another, they battle over two things: bodies and dollars. By bodies I mean the number of staff they control. By dollars I mean the amount of budget they control and the size of the product revenue streams that are accounted to their group. The most powerful executives are those who control the largest numbers of employees, have the largest budgets, and are accounted the largest revenue streams in the company. Logically, then, executives are going to manage their groups and their own careers so that they wind up with as large a share of all of those pies as they can wrest from the control of their colleagues.

Given the corporate priorities I have outlined, it should not be difficult to guess what kinds of projects are likely to be rewarded with more bodies and more budget dollars: those that demonstrate schedule compliance. Particularly in a publicly-held company, shipping on time is the most crucial factor, regardless of what other effects may follow. Projects that have faithfully met their schedule milestones are the ones that will be the best candidates for receiving more budget dollars and more staff.

You may think that those products which generate the largest amount of revenue would automatically receive the largest budgets and staffs, but this is not always true. In many cases, the products that generate the most revenue are considered legacy projects, serving markets that are already saturated, and thus unlikely to generate significant new revenue.[5] Therefore, they are not seen as good candidates for higher levels of investment. Often it is projects that promise to create whole new markets, and whole new revenue streams, that are prioritized for new investment, even though they have neither at the moment. One of the most maddening paradoxes of the product development system is that success in the marketplace often leads to your project being disregarded in this fashion.

[5] These judgments on the part of executive management rarely have any basis in fact, but are rather publicly acceptable formulas for expressing the fact that executive staff are bored and need new distractions to keep them interested in the business.

If we analyze this paradox further, we see that if you are managing a cash-cow legacy project, efficiency is not necessarily in your best interests. Executive management is already looking for any reason to move your resources and budget dollars over to new, sexier projects, on which they have pinned all their hopes for new sources of revenue. So if you consistently demonstrate that you can deliver new versions of your legacy project with the staff and budget you already have, you will never receive any more. More than likely, an executive VP will instruct you to "do more with less," on the assumption that you must have some slack in your organization if you are delivering product without drama or fanfare. Strange as it may seem to outsiders, quiet efficiency routinely makes upper management suspect that you have more than you need; otherwise, how would you be able to run your project so smoothly?

On the other hand, the best way to attract more money and bodies to your project is to get into trouble. This is a particularly effective strategy if some huge percentage of the company's quarterly revenues comes from your product. Because then the company has to give you whatever you need; otherwise they face stock price meltdown. So the truth of schedule compliance turns out to be a little more complex than it first appeared. Schedule compliance is the sine qua non of success in the corporate system; yet too much schedule compliance makes you a candidate for budget and resource cuts. This is why project managers learn that it is not necessarily a bad thing to cause a crisis every now and then, as a way of demonstrating that you don't have enough, that you need more, that you can't be taken for granted. Unfortunately, this strategy does nothing for product quality either, since schedule crisis almost always leads to quality crisis.

I have tried to demonstrate two realities in this extended analysis of the Standard System. First, that the Standard System is fueled, at every level, from top to bottom, by a will to dysfunction, or rather by what would seem to be
dysfunction and folly to outsiders. But my second point is that, looked at purely from an internal perspective, the system is not dysfunctional at all. On the contrary, it is supremely functional. It is a grand optimization for success that delivers the goods time and time again. Many process theorists have seen my first point quite clearly, yet relatively few of them have seen, or have allowed themselves to see, the second.

Or to state the point in a more ironic way: many software process theorists don't want to acknowledge that the Standard System already possesses many of the hallmarks of good process. For one thing, it is eminently repeatable. Far more repeatable, in fact, than many of the more elaborate processes that are proposed as its replacement. Secondly, it is lightweight and, in its own perverse way, quite agile. It does not require extensive and expensive training. It does not add layer upon layer of bureaucracy to monitor its effectiveness. The Standard System can adapt to almost any new circumstance, and still maintain its essential characteristics. Everyone who participates in it knows what it is, and can articulate its basic features without much prompting. It also complies with one of the key requirements of many more theoretical processes: it is both a software process and a business process. The process is how the company does business, not something apart from it.

Most importantly of all, the system is identical with its rewards. That is to say, it is not simply a way of doing things consistently, but also a way of consistently rewarding those who comply with it. In this respect, it has a leg up on certain other proposed software processes, whose benefits are often abstract at best and require quite a long time to materialize. Not so the Standard System.
It is, by contrast, a veritable Dr. Feelgood, dispensing its rewards instantaneously. If we understand this, the persistence of the Standard System, in spite of all its manifest and consistent failings in many areas (such as product quality), ceases to be any kind of enigma. Rather, the relevant question becomes instead: Why would anyone want to replace it?

Well, the honest answer is: Almost no one does. Process Improvement is a religion that certain management structures convert to periodically, only to abandon it after one or two iterations. They declare the experiment a success, and on that basis instantly revert to their original way of doing things. They are like the person who gets drunk the minute they get out of rehab, to celebrate the fact that they've been cured of alcoholism.

Of course, there are reasons why one might want to do things differently. Even people enmeshed in the depths of the Standard System realize this. Yet many powerful forces conspire to prevent the software industry from evolving, the most powerful of which is software engineering itself. Before we finish our analysis of the Standard System, it is worthwhile to focus more deeply on the crucial role software engineering plays in maintaining it, a role, I would argue, that is both unique and uniquely powerful.

Lords of the Manor


So far we have talked about the software development system as a whole, attributing to each of its components a fairly equal share of responsibility and benefit. But this is not really true. As anyone who has worked in software development can attest, the engineering group is the sun around which all the other functions orbit. To be fair, this makes perfect sense. It is the engineers who write the code which forms the product that makes it possible for the company to earn its revenues. Only a very naïve person would find this preeminence surprising or unfair. Yet it would be equally naïve to believe that this position of centrality does not produce its own unique distorting effects on the system as a whole, and especially in the case of QA. We cannot get to the root of why QA remains unprofessionalized unless we first examine, in detail, the role Engineering plays in this phenomenon.

The first step in this inquiry is to point out how software development differs from most other professions. Though in the popular imagination software engineers are classed with doctors, judges, architects and other high-level professions,[6] there is nevertheless a huge difference in how software engineers come to their careers. Surgeons don't begin their careers by first operating on people for years, and only then going to medical school. Judges don't cut their teeth by first trying a couple of dozen cases before going to law school. In all real professions, members cannot do much of anything until they are first certified by a professional body, after many years of preparation and training. In other words, in real professions mastery follows professionalization.

[6] This popular association is due exclusively to the perceived economic parity between software engineers and the traditional professions.

Software engineers are nothing like this. Most of them began coding as teenagers, and many have produced functioning software, all by themselves, long before ever entering an electrical engineering program in college. To this day, many software companies will hire a software engineer who has no formal degree in the subject, purely on the strength of their demonstrated skill at writing code. This is not a bad thing. It is, in fact, one of the cool things about the software industry. It is still porous, still willing and able to recognize talent, irrespective of the paper it comes wrapped in.

However, this career path also means that many software engineers feel complete in their understanding of their discipline before they ever get their first formal job in a software development group. More importantly, this is an expertise they believe they have acquired and demonstrated all by themselves, without the help of any other people or functions. As a result, there is a strong consciousness among software developers of not actually needing any of the other functions inherent in commercial software development. This is why they so often treat everyone else as a superfluous ninny who is only getting in their way. Why do they need formal requirements, from a non-engineer to boot, before writing software, when they have produced lots of software already without requirements or product managers? Why should they submit their code for testing, when they have written lots of software already, used by lots of people, without some QA group first having to approve it?

Add to this the fact that many software companies are founded by a sole engineer, or a very small group of engineers, who produce and sell their first product without any help from any other functions, and it's not hard to understand why software engineers find it difficult to comprehend why anyone else should muscle in on their territory. Software engineers often believe they have acquired and proven their expertise prior to any experience of professionalization or socialization into a larger business environment.

This fact explains a lot of things about software engineers. Why, for example, they find it so difficult to cooperate even with other software engineers. Why, for example, to this day software engineers cannot understand each other's code. Each engineer writes in their own dialect, which is largely incomprehensible to any other engineer. The path to professionalization in software development is fundamentally solitary, and the software industry continues to reap the consequences of this inescapable fact.[7]

[7] If you doubt this, institute the XP (Extreme Programming) process in your organization. One of its key features is pair programming, where programmers are actually forced to cooperate with each other. This usually leads to high drama worthy of an extended episode of Dynasty.

This profound sense on the part of software engineers of the superfluity of all other software development functions means, in turn, that they find it impossible to attribute real autonomy to them. If these other functions can be given any real purpose, any real use, it can only be defined in relation to the needs of software developers themselves. Consequently, software engineering has a strong tendency to view other software development functions as its servants. Software engineers as individuals, and software engineering as an institutional entity within a corporation, habitually feel it is their right to define the duties of other functions, and to exclude from that definition anything that does not suit their immediate needs.

Some functions are able to resist this colonization to a degree, because they have very strong uplinks into sales and/or finance. This is sometimes true of Product Management, for example. However, QA does not have this advantage. As a result, it is the most completely colonized of all the software functions. QA is normally completely dominated by Engineering. It is not uncommon for QA to lack a management hierarchy of its own that mirrors that of Engineering at every level. QA usually reports to the head of Engineering, not to the head of the business unit. In many organizations, QA does not even exist as a separate department. Rather, QA staff are distributed to each project, and report up through the project hierarchy. QA staff are often chosen by Engineering managers, according to criteria they deem sufficient, though QA almost never has similar veto power over engineering staff. And on and on.

In institutional terms, Engineering's domination of QA means that QA is never really institutionalized in the first place. It exists only as a para-function, lacking its own hierarchy, departmental infrastructure, or budget. Moreover, QA never develops mature interfaces and relationships with the other project functions that are equally necessary to product quality, such as Product Management. Often attempts to develop these other interfaces arouse the immediate wrath of Engineering, which sees them as a form of betrayal, an attempt on QA's part to engineer its own independence.

In functional terms, Engineering's domination of QA means that QA's methods and duties are not defined in terms of the needs of QA itself, nor in
terms of what the company may require in terms of quality, but rather solely in terms of what Engineering needs QA to be and to do. In many software organizations, QA is nothing more than Engineering's valet. It can only do what Engineering wants it to do, and no more. This leads in turn to a situation where engineers offload onto QA staff many tasks and responsibilities that logically should rest with the engineers themselves. This is why, for example, many engineers feel they need QA staff to immediately test every line of code they write to make sure it works. Why the engineer should not be primarily responsible for this is a question that is rarely asked.

In professional terms, this domination means that QA staff really have no career path. They realize early on that QA is not a player, and that they themselves will not become players if they stay in it. Hence the career path of many of the most talented people in QA is straight out of QA, and as quickly as possible.

In philosophical terms, this domination ensures that QA never develops a clear idea of its own discipline. It enforces upon QA a definition of its work that is completely task-oriented, not goal-oriented. This is why QA, in many software organizations, is reduced to a group of people who do nothing but push buttons and find bugs (one hopes), without being able to relate that data to any meaningful concept of product quality in general, nor even in terms of the specific product under test. This is why, for example, the release criteria of most software projects consist solely of bug metrics (No open Category A issues; No
more than three open Category B issues, etc.), even though bug metrics, by themselves, tell you next to nothing about product quality.[8]

[8] If you doubt this claim, simply recall to mind the number of software products that are released with zero open Category A issues, only to immediately manifest nothing but Category A issues in the field.

There is one area, however, where Engineering prefers to let QA take the driver's seat. That is the area of accountability for the quality of the released product. Though QA is denied any meaningful authority, it is nevertheless required to assume complete accountability. Meanwhile Engineering reserves to itself complete authority, while minimizing its own accountability to a vast degree. Is it any wonder, given this convenient arrangement, that Engineering often feels fiercely possessive of its QA staff, and vehemently resists any attempt to wrest control of QA out of its grip? After all, good help is hard to find.

These claims are easy to illustrate from my own experience. For example, soon after becoming Director of QA for a medium-sized company, I was assigned a specific project that my boss wanted me to oversee personally. I met with the Engineering Manager and his team, and we worked out a very realistic project schedule that everyone was happy with. We finished the requirements phase and entered the coding phase. I busied myself with being sure the QA staff had a good test plan, that tests were being written, that relevant equipment was being obtained, etc. Everything was going fine, until the week when Engineering was to declare itself code complete. The Engineering Manager told me that his group would be six weeks late in achieving the code complete milestone. I told him we needed to notify management immediately that the project was going to be a month and a half late. He had trouble understanding my point. "Why will it be late?" he said. I pointed out to him that the delay in reaching code complete was a delay in when we could begin testing; hence the release date would have to be pushed out by the same amount as the delay in reaching code complete. "So you're saying that QA can't make the date?" he replied. "Yes, that's what I'm saying. Because Engineering has not made its dates."

Needless to say, he blew a fuse. He immediately shot off an e-mail to upper management, informing them that the project was going to be late, because of QA. I had no choice but to send a clarifying response, pointing out that it was in fact Engineering that was going to be six weeks late, and that QA had not lengthened its own time estimates for the testing phase by a single day. Fortunately, upper management accepted my facts and conclusions. The Engineering Manager was not only livid at this result, but clearly bewildered. It was as though he just couldn't understand what was happening.

This is important. The Engineering Manager in this case was genuinely convinced that it was QA that was making the project late, not Engineering. He was not engaging in political maneuvering. It simply never occurred to him that Engineering would have to be accountable for its inability to meet its milestone commitments within schedule. There is only one explanation for this attitude: Engineering had habitually, over a long period of time, been able to blame QA for its own inability to get its work done on time.

There is another important detail in this story. Engineering clearly thought that its own schedule was infinitely expandable, while QA's was infinitely
By Niall Lynch verlandosta@yahoo.com 310-829-2044

Adventures In Quality Assurance - page 40 of 86

compressible. The unquestioning assumption on Engineerings part was that QA could cut six weeks from its test schedule without affecting product quality in any way. Yet Engineering would never have made that assumption about its own work. Though this seems at first to be an assumption about QA, it is really, and fundamentally, an assumption about quality itself. Engineering could assume a six week extension of its own schedule, because it could specify what exactly it would accomplish in that six weeks time. It could list not just what tasks would be accomplished, but what the results of completing those tasks would be. It could do this because Engineering had a very specific and complete understanding of its own discipline. Conversely, it could expect QA to lop six weeks of its schedule because there is no real operational definition of product quality, such that one could unambiguously describe the effect on the quality outcome of those missing six weeks. This is in turn an artifact of defining QA in terms of its tasks, rather than its goals. This claim may seem obviously false to many of my readers. They are probably thinking at this point, What do you mean theres no definition of product quality? There is in fact a very clear one! No bugs. Or at least, no major ones. Or at least, no major ones that the reviewers will find. Or at least, no major bugs the reviewers find that we cant quickly fix and send out as a patch And so forth. Though many people believe that QA has a well-defined mission and goal, it generally does not. Most definitions of product quality are not, in fact, definitions at all. They are, on the contrary, simple expressions of desire, wishes that attach themselves to nothing truly knowable or measurable. This can be
By Niall Lynch verlandosta@yahoo.com 310-829-2044

Adventures In Quality Assurance - page 41 of 86

easily confirmed by perusing almost any QA project plan. In the section where QA must define its quality goals, one normally will find only vague stipulations, like, Best in class quality, or Zero major defects. Often there is an attempt to pseudo-objectify the definition of quality, by expressing it in terms of bug metrics. Yet these metrics themselves are fools gold they are not attached to any meaningful definition of test coverage nor, more importantly, to any meaningful validation of product requirements, and so, by themselves, quantify nothing but the illusions they are meant to support. Perhaps another anecdote will illustrate this point more clearly. At another company, where I was also the QA Director, my first project was to organize the testing of the installation programs for the companys first multi-tier (i.e., clientserver) product. Within minutes of arriving on my first day, I received an e-mail from the VP of engineering, stating that this was my top priority and I needed to have it done ASAP. No problem, I wrote back. All I need from engineering is the installation specification, detailing all the files that are installed for each installation type, and where they are installed for each installation type, and which directories and subdirectories are created as well. I waited for my response. And waited. The clock on my desk ticked off valuable hours. Finally, at the end of the day, I got a response from the VP, who informed me that no such specification existed, and that engineering was not going to produce one because that would slow the project down. 9 He ended his reply by demanding to know how far I had progressed in testing installation that day. I answered by explaining to him that no testing had occurred, because,
9

Go ahead, its OK to laugh.


By Niall Lynch verlandosta@yahoo.com 310-829-2044

Adventures In Quality Assurance - page 42 of 86

logically and operationally, it is impossible to test a products subsystem when you have no idea what it is supposed to do. How would we know if it was installing the wrong files, to the wrong locations? I asked. How would we know if it was failing to install necessary files? Etc. I sent the reply off, convinced my logic would convince him. This time the VP required only a few minutes to reply. It was brief and to the point. Go fuck yourself, it said. Shocked? Dont be. Such a reply is far from uncommon, even from VPs who should know better than to express such sentiments in writing. I had to escalate the problem to our mutual boss, who immediately saw the logic of my request. Our boss was also unpleasantly surprised that no specification for installation existed. Needless to say, the VP of Engineering became my sworn enemy from that day forward. This story illustrates in a pithy nutshell all the key institutional realities I have described above. First, it illustrates how Engineering views QA as a group whose job it is to do only what they tell it to do. Second, it illustrates how engineering often offloads onto QA core responsibilities of the engineering function itself such as determining what installation is actually supposed to do under the guise of moving things along (i.e., so that they can offload responsibility for schedule slip onto QA as quickly as possible). Third, it illustrates how Engineering has no concept of product quality, since there is no other way they could think the quality of an installation program could be assured without QA having a clue regarding how it was supposed to operate. Fourth, it illustrates the extreme hostility QA will encounter from Engineering
By Niall Lynch verlandosta@yahoo.com 310-829-2044

Adventures In Quality Assurance - page 43 of 86

when it asserts itself as an independent function, with its own goals, its own needs, and its own demands on Engineering. It is important to understand that this institutional domination of QA by Engineering requires that no real definition of product quality exist. Because if it did come into existence, then that definition could be used to judge objectively the work of Engineering, and also to generate a list of deliverables and responsibilities that Engineering owed to QA. Engineering has no interest in either, and so imposes on QA definitions of product quality that are purely notional and impossible to operationalize. Engineerings domination of QA is so complete, and so unthinkingly accepted by other project functions, that it has been able to perpetrate an amazing feat of institutional ventriloquism. No one pauses in puzzlement at the fact that the Quality Assurance function is named Quality Assurance. No one finds anything odd or contradictory about this name. Which is quite surprising actually, when it is obvious that QA does not assure the quality of anything. Nor could it even if it were ideally constituted and practiced. The quality of software is a function of the quality of the code that comprises it. Does Quality Assurance write the product code? No, of course not. Who does? Engineering. Logically, then, Engineering should be held formally responsible for quality assurance. Yet they are not. The QA function has become, in effect, a ventriloquists dummy operated by Engineering, saying what Engineering cannot say directly, and taking responsibility for all of Engineerings failings. Hey, wait a minute! you are probably saying at this point. If the purpose of QA is not quality assurance, then what the heck is it? Excellent question. A
By Niall Lynch verlandosta@yahoo.com 310-829-2044

Adventures In Quality Assurance - page 44 of 86

question that, furthermore, opens the door to understanding the true nature and role of software quality assurance.

By Niall Lynch verlandosta@yahoo.com 310-829-2044


The Lost World


If everything I have said about the current deplorable state of Quality Assurance is true, we must have recourse either to cynicism or to hope. We may look at the Standard System, in all its fiendish glory, and say, "Yes, but that's still where millions are made, and that's still how they're made, so really there's no point in trying to change it." The cynical option accepts that professionalizing QA is a pipe dream that offers only theoretical benefits, and prefers to grab at the very tangible benefits the Standard System dispenses right now, today. The cynical person will content herself with this bargain, and assuage any lingering sense of doubt or guilt by posting the odd Dilbert cartoon on the wall of her cubicle to show she has not sold out to the system, even if she has. It is difficult to dispute that the cynical option has a lot to recommend it. It is empirically sound, eminently pragmatic, and still leaves one with the ability to complain endlessly about the system one secretly accepts. What's not to like?

The problem with the cynical option is that it sabotages the future of software development. The Standard System is like a country that has lived under a corrupt and inefficient government for centuries. Everyone knows it is corrupt and inefficient, and everyone likes to bemoan these facts. Nevertheless, the corruption is spread around fairly evenly, so that everyone gets at least a little taste. The inefficiencies of the system mean no one has to work all that hard to stay in the game. It's not as though you have to actually be good at something in order to succeed. You just need to know the right people, grease the right palms, and things take care of themselves. All in all, not a bad life.

This kind of feudal society can continue for a long time, as it has in the world of software development. But sooner or later two things are going to happen that make the feudal way of doing things progressively fatal to the society in question.

The first is competition from more efficient societies. It's all well and good to muddle through, as long as you only have to deal with yourselves. The minute you must compete against more efficient societies, you not only begin to lose, but each loss also erodes your ability to ever win in the future. This has already begun to happen in the software industry. Where once outsourcing to Third World countries was only whispered about, or tried out almost as a lark, now it is being widely discussed as a serious option. First World engineers may scoff at this possibility, but it is a very real one. What many First World engineers don't seem to understand is that the outsourced software doesn't have to be better, in technical or quality terms. Why? Because, for decades, First World software engineers have delivered abysmally low levels of quality, so it's not as though the bar is set all that high to begin with. Consequently, outsourced software doesn't have to be better. It just has to be cheaper. If it's going to be late, incomplete and riddled with bugs anyway, why not pay one quarter the price for it? This is the simple fact that all of the huffing and puffing among First World software engineers in re outsourcing conveniently ignores.

Corrupt, feudal systems can last a long time as long as they don't have to compete on cost. Because, whatever benefits they may deliver, they are actually quite expensive to maintain. This is the main problem with the First World software industry. Though it generates plenty of money for those involved, it does so at a very high cost. Therefore, cost efficiency becomes its main Achilles heel, one that is increasingly being noticed. Another way to state the problem is that the social solidarity within the Standard System between upper management and their product delivery organizations is breaking down. Globalization, and the rise of India and China as technical powers, are now tempting upper management with even greater rewards than they can eke out of their highly paid, low-performing software development groups. Consequently, they no longer see any benefit in being complicit in those failings. On the contrary, if they can get at least the same level of inefficiency for less money, why would they not jump at the chance? The First World software industry is like a hermetically sealed economy that has just woken up to NAFTA. Suddenly it has to compete in terms of factors it could comfortably ignore before. Many First World software development organizations are like the dinosaurs ten seconds before the asteroid hit: supreme in their world, and supremely oblivious to their imminent extinction.

The second factor that is making the Standard System untenable is the exponential growth in both the complexity of the problems software is being called upon to solve, and the complexity of the software required to solve them. Recall that most software development strategies and techniques were developed for the creation of single-user, desktop software, since that is what most commercial software companies historically started out making. Even with desktop software, there is quite a lot of complexity involved. However, in most cases it can be safely ignored, at least from the point of view of being able to make money. The problem arises when the level of complexity left latent in the software development process begins to exceed the manifest workings of the development process itself. When this occurs, we begin to see truly catastrophic failures of the development process, and in the products it produces.

This is one way of describing what happens to most software companies when they decide to augment their product portfolio with client-server products aimed at the enterprise, not just at the individual desktop user. This evolution is in itself inevitable, since the profit margins are so very much higher with enterprise software. Yet a desktop software company's first enterprise project is always a study in tragedy. Many companies have in fact been destroyed by their efforts to transition from desktop to enterprise software. This occurs because the level of complexity in a client-server system is exponentially higher than in a desktop system. By complexity I don't just mean that client-server software contains more features, which is usually the case. Rather, I mean something much more significant. Enterprise software confronts any software development organization with the problem of emergent properties. These are properties of a software system that only manifest (or emerge) when the entire system is integrated and working together over time. Performance is an emergent property, as is reliability. Because of their nature, emergent properties cannot be coded as separate features. Nor can they be tested in isolation from the rest of the system. Consequently, most software development organizations have no clue how to handle them.

This is so because of what I like to call the feature fallacy of software development. Software tends to be looked at by all involved in creating it as a collection of isolated features. This is how software is specified by Product Management; this is how it is coded by Engineering; and this is how it is tested by QA. Software is viewed and developed in a very atomized fashion. It is not viewed, nor developed, holistically. This is why, historically, the most serious software problems manifest during system integration testing. Bugs in emergent properties are not only the most difficult to find, due to the complexity of the testing required to uncover them, but also the most difficult to diagnose, and the most difficult to retest. Consequently, they are the greatest threat to schedule compliance.

Most software development groups deal with the problem of emergent properties by ignoring them. Or, if they think of them at all, testing for them is prioritized last. Which is odd, if you consider that they pose the greatest risk. Nevertheless, the mindset of most software organizations is that you first verify all the features, and once that's done, and only if you still have time, you might spend a day or two testing performance and reliability. It should come as no surprise, therefore, that performance and reliability are the two weakest links in enterprise software, and tend to fail catastrophically on the first release of a company's first enterprise product. I have seen software companies spend twice the calendar time and person-hours fixing performance and reliability issues in the field as it took to code and test the initial release in the first place. Naturally, this experience does not actually change anything about how emergent software properties are approached in development, for reasons I have already discussed at length.
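The difference between feature testing and testing for an emergent property is easy to show in miniature. The sketch below is purely illustrative (the "system under test" is a simulated transaction, not any real product): it runs a few thousand end-to-end transactions and compares early and late latency windows, which is exactly the kind of time-dependent measurement that per-feature testing never makes.

```python
import random
import statistics

def call_system_under_test():
    # Stand-in for one end-to-end transaction against the integrated
    # system (in a real soak test this would be, e.g., an HTTP request).
    # The simulated response time degrades slowly with each call, the
    # classic emergent failure mode (leaks, queue buildup).
    call_system_under_test.calls += 1
    leak_penalty = call_system_under_test.calls * 0.0001
    return 0.050 + random.uniform(0.0, 0.010) + leak_penalty

call_system_under_test.calls = 0

def soak_test(transactions, window=100):
    """Run many transactions and compare early vs. late latency windows.

    A feature test asks "did one call work?"; a soak test asks "does the
    integrated system still behave after N calls?" -- which is where
    emergent problems first become visible.
    """
    latencies = [call_system_under_test() for _ in range(transactions)]
    early = statistics.mean(latencies[:window])
    late = statistics.mean(latencies[-window:])
    return early, late, late / early

early, late, degradation = soak_test(2000)
print(f"early mean latency: {early * 1000:.1f} ms")
print(f"late mean latency:  {late * 1000:.1f} ms")
print(f"degradation factor: {degradation:.2f}")
```

Note that every individual call here "passes"; only the trend across thousands of calls reveals the problem, which is why a release criterion for an emergent property has to gate on a measurement like the degradation factor, not on any single test result.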


This first complexity phase boundary is normally the crucible that winnows out the weakest software companies and sends them crashing back to the desktop world. Companies that succeed at this boundary are the ones that tend to continue evolving in the marketplace, in spite of chronic problems in the area of emergent properties. How can this be so? How can such companies succeed if they are still struggling with this fundamental problem? The answer, as usual, is simple. They don't really solve the problem at a conceptual or process level. Rather, they power through the problem, using massive amounts of manpower, time and money. And they do this for each and every release. Recall from my previous analysis that this is not necessarily a bad thing for those involved, since the intractable nature of the problem is precisely what justifies demands for more staff and money to address it, and therefore enhances the power base of those running software development on such projects.

However, the industry is now faced with a second complexity phase boundary, one that involves the same kind of exponential increase in complexity as that which lies at the desktop/enterprise boundary. This second phase lies at the boundary between products and meta-products. The software industry is now increasingly being asked to create software systems that integrate, manage, and normalize the operation and output of any number of other products at once. This is occurring because industry at large has already invested heavily in acquiring enterprise-wide software systems, and now finds itself facing the problem of how to get these disparate enterprise-wide systems to operate together, and to generate an overall picture of the enterprise's operations. Therefore issues of interoperation, data normalization, data fusion, operational transparency and decision support are first and foremost in the minds of corporate and government customers.

It is important to understand that such large clients are not simply asking for this kind of meta-productizing of the systems they already have in place. They are asking for a meta-product solution that will work with any new enterprise systems they have not yet purchased, and which may indeed not actually exist at the time. Which means in turn that the software industry is being asked to solve two very challenging problems: that of meta-emergent properties, and that of speculative software development. The first is a question of how to manage the emergent properties of a meta-system, and how those meta-emergent properties might inflect the whole range of emergent properties throughout its client enterprise systems. The second is a question of how to develop software so that it can interoperate with and manage the output of software systems that have not yet been developed.

Moreover, it is at this second complexity phase boundary that the industry finally reaches a level of complexity where it simply cannot power through with hordes of bodies and years of development time, because the number of bodies and the amount of time necessary to solve these problems approaches infinity. The industry has finally reached a point where it must stop and think, something it generally hates to do. It must actually solve these problems at a fundamental level. It can no longer muddle through on a wing, a prayer and a web-distributed patch.

These two problems together pose a huge challenge to the survival of the software industry in its current form. They threaten its continued prosperity, at least for those involved directly in product delivery. They also threaten its continued relevance, if the industry cannot deliver the kinds of solutions the marketplace is now demanding. For these reasons, it is entirely in the interests of the software industry to stop rushing headlong from release to release, and from bonus to bonus, and actually spend some time thinking about how it does what it does. In particular, it needs to get a grip on what software quality really is, and what it isn't.


The Cloud of Unknowing


If, then, we accept the option of hope rather than of cynicism, and commit ourselves to remedying the distortions inherent in the Standard System with respect to Quality Assurance, where do we begin? Certain remediations will be clear from the preceding analysis.

The first one would be to institutionalize QA as a separate department within your organization. In many software organizations, QA does not form its own department, but is merely a project function, with all QA staff reporting up to the head of each project. This system is highly inefficient, since the QA staff on each project has to reinvent the wheel each time. Moreover, with this type of organization it is impossible to enforce standardized test procedures and methods, since each group is working independently. This creates in turn a situation where the work of QA on each project is not standardized, and so no meaningful comparisons can be made of QA performance across the board. Consequently, no meaningful baseline for QA performance can be derived. There is really nothing good to be said about this type of QA organization. It works not only against the interests of QA, but also against the interests of project stakeholders at all levels.

Having created QA as a separate department, you must then also make it hierarchically independent of, and equal to, Engineering. There is no real improvement in QA's professionalization if the QA department still reports to Engineering. This only perpetuates the feudal relations between the two groups, where QA can only be and do what Engineering wants it to. The QA department needs to report to whomever Engineering reports to. If you are not willing to do these first two things, then, please, put this book down right now and go about your business. We have nothing more to say to one another. All the process in the world will not give you a properly functioning QA if QA is not itself first properly institutionalized and organized. It is as simple as that.

Another way to state the issue is to say that QA must have a clearly defined role. By the term role I do not mean a set of activities that QA happens to be tasked with performing. Rather, I have in mind by the use of this term something that QA uniquely knows how to do, and is uniquely responsible for. To define QA's role as that of simply running tests is not the same thing as defining the unique role of QA in the product development process. After all, just about anyone can run tests. It is not uncommon, when a project is in crisis, for people from other functional groups to be drafted into running tests. One never sees the same phenomenon with respect to writing code or creating marketing plans, because people accept that Engineering and Marketing are disciplines, not simply activities.

Then there is the issue of responsibility and accountability. One way to define a function's unique role is to define what it is that those in that role are uniquely responsible for. One can have a unique role, even if the activities that fall under that role could theoretically be performed by others, as long as responsibility for the outcome of those activities cannot be delegated outside the role. Product Management is a good example of this situation. Any reasonably intelligent software person can, with a small amount of training, learn how to write well-formed, logical, internally consistent and actionable product requirements. However, only a Product Manager can be accountable for the adequacy of those requirements to the needs of that product's target market. This is the key difference between someone who is just good at writing requirements, and someone who is accountable for creating the right requirements, at the right time, for the right market.

"All right then," someone is probably saying right now, "haven't you answered the question about QA and testing? Sure, anyone can run tests, but ultimately it is QA that is responsible for the outcomes of those tests. Therefore, we can say that QA's unique role is running tests." This reasoning is outwardly logical, but it misses a central point. We can distinguish the writing of product requirements, for example, from taking responsibility for the adequacy of product requirements because we can point to some larger goal that both must serve, a goal that exists apart from the activity under discussion. Accountability always lies in being responsible for meeting that larger goal. In the case of Product Management, that larger goal is creating a product that will sell into a particular market. Writing requirements is but a means to achieving that goal, not an end in itself. But what exactly is the larger goal of executing software tests? "Well, quality, of course!" is the obvious answer. And what is product quality? "Er, well, it's all the tests passing." Here we realize the essentially circular nature of the definition of product quality. In the example of Product Management, the argument is not circular, because the goal of PM is not defined as completing requirements, but as those requirements having, in addition, a definite, measurable outcome in the marketplace. The criterion for effective product requirements is not that they were all implemented, but rather that, having been implemented, they conquered or created a new market for the product in question. Yet this is not what we say about product quality. We define product quality as a set of tasks (running tests) having been successfully completed. There is no larger goal that can be used to determine whether the passing of a certain set of tests accomplished what QA is uniquely responsible for. This is one of the key reasons that hundreds of software products are released every year, with every test having been passed, yet which nevertheless demonstrate very poor quality in the field.

Another reason for this difference is that in the case of product requirements we can ask reasonable, well-formed questions about their adequacy.10 We can in fact test their adequacy rather directly, before they are ever accepted or implemented, simply by talking to major customers or by looking at market surveys and other market data generated by outside organizations. Yet in the case of product quality, there is no corresponding notion of adequacy. How could there be, when there is no larger goal to the activity of running tests? One of the greatest, and most persistent, unknowns in almost any software testing project is whether the set of tests being run is adequate to the quality problem posed by the product.

10 Which does not mean, of course, that such questions are actually asked.
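This notion of test-plan adequacy can be made concrete. Even a toy traceability matrix lets you ask the one question most test plans never answer: which requirements does this suite actually validate? A minimal sketch, with all requirement IDs and test names invented for illustration:

```python
# A toy requirements-traceability check. Every requirement should be
# validated by at least one test, and a test that traces to no
# requirement is suspect (it may exist only because it was easy to run).

REQUIREMENTS = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

# Which requirement(s) each test claims to validate.
TEST_TRACEABILITY = {
    "test_login_success": {"REQ-001"},
    "test_login_lockout": {"REQ-001"},
    "test_export_csv": {"REQ-003"},
    "test_button_colors": set(),  # easy to run, validates no requirement
}

def adequacy_report(requirements, traceability):
    """Return (covered, uncovered, untraceable) requirement/test sets."""
    covered = set().union(*traceability.values()) & requirements
    uncovered = requirements - covered
    untraceable = {name for name, reqs in traceability.items() if not reqs}
    return covered, uncovered, untraceable

covered, uncovered, untraceable = adequacy_report(REQUIREMENTS, TEST_TRACEABILITY)
print(f"requirements validated: {len(covered)} of {len(REQUIREMENTS)}")
print(f"never validated: {sorted(uncovered)}")
print(f"tests tracing to no requirement: {sorted(untraceable)}")
```

In this toy suite, every test could pass while half the requirements were never exercised at all, which is precisely how a product can ship with "every test passing" and still fail in the field.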

Indeed, if asked to demonstrate this adequacy, many software testing organizations would instantly be struck mute. In fact, one of the dirty secrets of software testing is that most QA groups test what is easiest for them to test, not what is most necessary for them to test. In most product delivery organizations, the adequacy of the test plan must be taken largely on faith by the other functions, because there does not exist a general concept of adequacy that could be used to establish its comprehensiveness.

The lack of a testable notion of adequacy in Quality Assurance is the major artifact of its non-professionalization. QA's feudal servitude to Engineering has prevented it from developing a clear concept of its own role, and therefore of the content of its own expertise. As I have pointed out before, hitting buttons and finding bugs is not the definition of an expertise, not the definition of a role, and not the definition of a meaningful goal. Yet this is really all software testing is in most software development organizations.

The most dire consequence of this situation is that software development has no meaningful, truly testable, notion of quality. Indeed, as I have pointed out, the definition of product quality is purposely kept as vague and ideal as possible, precisely so that it cannot function as a fulcrum for analysis and accountability. For if quality becomes something transparent, something knowable by all involved, then the accountability for choosing to ship a product with a certain well-defined level of quality also becomes a shared choice. But this upsets one of the pillars of the Standard System: that of putting all accountability on QA, even as any real authority is denied to it. No one really wants to know what the
true level of achieved product quality is, because that plausible deniability is what protects them from the consequences of poor quality.

Keeping this in mind, it becomes sadly clear that we cannot professionalize QA simply by reversing the most obvious injustices of the Standard System. This is not sufficient, because one of the main results of the Standard System has been to deprive not only QA as a function, but the industry as a whole, of a meaningful concept of product quality. We cannot magically derive it just by reversing the mechanisms that have prevented its appearance. This means, in turn, that institutionalizing QA as a separate department, free of and equal to Engineering, is a necessary, but not a sufficient, step in professionalizing the QA function. Even after that has occurred, a deep, dark black hole remains to be filled in, the hole that marks the spot where a proper understanding of QA's unique expertise should be. I will devote the remainder of this inquiry to filling in that hole for you.

Laughter in the Dark


I could just tell you what real Quality Assurance is, but that would be something of an anticlimax. Moreover, people tend to accept truths more readily when those truths are conclusions they themselves have reached, especially if those conclusions at first seem novel or out of the ordinary. So rather than simply lay out what I think QA actually is and does, I think it would be better to derive those definitions from the basic realities of commercial software development.

Let's begin, then, by saying what we don't mean by product quality. By product quality here I do not mean customer satisfaction, though the two are related in significant ways. The difficulty arises when we realize that customer satisfaction rests on a far broader base, including the quality of the sales experience, the quality of technical support, licensing terms, whether the product's feature set meets the customer's needs, etc. Customer satisfaction is not really a single thing, but an aggregation of all aspects of quality throughout the entire business process. Product quality is an important component of that aggregation, but it cannot be defined from so general a perspective.

Moreover, by product quality I do not mean process efficiency. Not because the two cannot be related in mutually significant ways, but because they can be defined independently of one another. You can certainly achieve very high, very consistent product quality with low process efficiency. On the other hand, it is entirely possible to achieve very good process efficiency with consistently low product quality. Indeed, that is one of the key features of the Standard System. Though it needs to be said that one of the key parameters
necessary to calculate process efficiency can only be provided by a correctly functioning QA group: the actual level of product quality achieved. For without knowing that, it is impossible to calculate the cost of achieving that level of quality, in terms of dollars, bodies or schedule time.

I must also repeat that by product quality I do not mean pre-release bug metrics. If customer satisfaction and process efficiency are too general to provide a good definition of the term, then pre-release bug metrics are far too restricted to enlighten us. This is so because, as I have pointed out already, bug metrics by themselves do not validate the adequacy of the test coverage. Having zero open bugs [11] tells you nothing about what the testing actually covered or left out, and to what degree of depth. Bug metrics are simply a measure of QA effort, not of quality itself.

The focus on bug metrics also distorts our understanding of product quality because it leads us, willy-nilly, to define product quality negatively, as the absence of things going wrong. Yet just because we have not found something obviously going wrong doesn't mean quality is present in the product. The obsession with bug metrics enforces a minimalist, negative definition of product quality that itself needs to be overcome if product quality is to be correctly understood.

For some reason, it is easier to illustrate common sense in software development by appealing to our experience outside of software development, and this is the case for product quality as well. Suppose, for example, you have

[11] Even this case needs to be defined ideally, since many bugs are closed for release even though they have not been fixed, nor even investigated.

hit some kind of jackpot. You have won the lottery, broken the bank in Vegas or sold your company's stock just in time. You now have a pile of cash, and are able to go out and buy the car you've always dreamed about. You are at your local Ferrari dealer, sitting in the supercar of your wildest fantasies, waiting for a sales rep to notice you. While you wait, you fiddle with the controls and feel the materials of the seat and dashboard, imagining how you'll feel driving the car to the next Linux Users Group meeting. Suddenly, a piece of trim falls off in your hand. Congratulations! You've found a bug.

Should you buy the car? Some people at this point would storm off to the Lexus dealer, complaining about shoddy workmanship, convinced their fantasy car was a piece of crap. These people are usually the engineers. Others would pretend it hadn't really happened and would quietly try to reattach the piece of trim before the dealer noticed and charged them for it. These are usually the sales people.

Suppose again that the piece of trim hadn't fallen off, that nothing obvious had gone wrong during your inspection of the car. Would that mean you should buy it? Many people would think so. But would they be right? The reality is that in both scenarios any conclusion would lack an adequate basis, because having a piece of trim fall off tells you nothing about the actual quality of the car. Unfortunately, nothing going wrong doesn't really tell you anything about the quality of the car either. The dilemma is that you simply can't find out enough during a test drive to determine the true level of quality of a car, so you must base your decision on accidental details that may or may not have any greater significance or predictive power. Unfortunately, the same is
true of most software testing. Finding or not finding bugs, all by itself, is simply not an adequate basis for determining product quality.

The easiest way to get at a good definition of product quality is to ask ourselves how we define product quality in other areas of life. Let's take movies, for example. Suppose you are taking a date to the movies. Your significant other wants to see an historical drama, while you want to see an action adventure movie full of exploding cars and imploding heads. Since your significant other is the first one you have ever had, and since you are 35 years old, you come to the reasonable conclusion that it's best to see the historical drama, even though you find this type of movie tedious and silly.

You watch the movie with your significant other and, even though it doesn't really tickle your fancy, and has subtitles to boot, you are nevertheless able to admire it in an abstract way. The production values are very high. The cinematography is stunning. The acting is good. The characters are diverting, even though none of them wears a cat suit. The plot has some interesting twists and turns, at least enough to keep you from dozing off. When it's over, you can honestly tell your date that it wasn't bad, even though it wasn't the kind of movie you would have seen on your own.

This prosaic example illustrates something very important about product quality: it can be perceived independently of your personal likes or dislikes. It is possible to determine that a product has quality, even if it is not a product that you are necessarily interested in, or one that meets your specific needs. You may find driving a Lexus far too dull, but you could admit it was a well-made car. You may not like pop music, but you can still tell that Celine Dion has a good voice.
You may look awful in a bikini, but still be able to see how others might look just fine.

We can distinguish between product quality and our own satisfaction with a product because, at least in the examples cited, we can determine what objective requirements have to be met in order for those products to be considered good. There is some standard, some specification, that exists independently of our desires, that we can use to judge the adequacy of what we are experiencing. When we say, "X was good, but it's not something I like," we are really saying, "I can see how X satisfies the requirements for X, but those requirements don't define something I like or need."

These standards are also positive in nature. They define concrete achievements that must be manifested in the work or object in question for it to be judged good. When was the last time you heard a movie reviewer say, "This is a really good movie that you must see, because the projector never broke down, the soundtrack was in sync with the action, the air conditioning was working fine in the theater, and the reels were shown in their proper order"? Never, right? In fact, you would think such a reviewer was either a fool or very, very ironic to give such a review. Yet this is basically how software quality is defined in most organizations. This practice continues unchallenged precisely because, unlike in the case of films, there really is no positive standard of quality.

All right then, you might be thinking now, what is a positive definition of product quality? In the case of a commercial software product, that objective standard can only be one thing: the product's formal requirements. If this is true, then product
quality in turn can only be one thing: demonstrable conformity of the software under test to its formal requirements.

Note how different this standard is from that of finding or not finding bugs. Product quality, in the sense that I have described it, is a positive ascertainment of capabilities, functions, parameters and outputs. It is far more stringent than simply determining whether something goes wrong.

There is another important difference. Defining product quality as conformance to requirements shifts the focus away from the code, and toward the need that the product is supposed to meet in the marketplace. This may sound like a fine distinction, but in reality it is not. Most software testing efforts are organized around the internal structure of the code itself. What the vast majority of software testers actually test are code functions, modules and subsystems. In other words, the structure of the test effort mirrors exactly the structure of the code itself. This is, of course, logical and necessary. But it is in no way sufficient to assure product quality. Why? Because in addition to determining whether the code behaves as Engineering thinks it should, it is also necessary to make the separate determination whether the code, as written, actually meets the requirements it was written to fulfill.

This is a key point. It is entirely possible for a product under test to behave the way Engineering thinks it should, and still not meet the requirements defined for it. I have personally seen this happen on many occasions. It is truly a sad experience for a Product Manager to discover that a product does not in fact contain all the features specified, or does not operate in all the environments
specified, and to discover this only after the product has been released. Yet this is not an unusual occurrence.

To be fair, much software testing is code-oriented because there are no formal requirements. It needs to be acknowledged that Engineering is often called upon, unfairly, to supply what Product Management has failed to, just so that the product can be made. This is one area where Engineering is itself victimized by the need for speed inherent in the Standard System. In the absence of well-formed, comprehensive requirements from Product Management, other functions have no choice but to default to the technical specification created by Engineering. [12]

At this point it is useful to introduce some more specific terminology to help us express the point at hand. Let us call the process by which the QA group determines whether the software behaves according to the Engineering spec verification. Let us call the process by which the QA group further determines whether the software under test, and its technical specification, fulfill the product requirements validation. With those terms and definitions in mind, we can say that most software testing efforts perform only verification, not validation. This is another reason why bug metrics alone tell you nothing: at best, they are telling you whether the software behaves as Engineering thinks it should.

If no validation is being performed, then the product development process has no way of determining whether the product being produced will meet the needs of its intended target market. Which means, in turn, that the product

[12] Though to be fair in unfairness, it also needs to be pointed out that often even a technical specification is lacking, a lack which turns QA into a form of fortune telling.

development process is eating up huge amounts of time and money without being able to do the one thing management really wants it to do. All most product development efforts can do is say, "Well, we're pretty sure the product does what we think it's supposed to do." Not a whole lot of bang for the buck, is it?

There is a final general conclusion to be drawn from the definition of product quality as demonstrated conformance to requirements: QA must have an interface with Product Management that is every bit as robust and authoritative as its interface with Engineering. This is obvious if we accept that it is QA's job to tell Product Management whether its requirements are being met. Yet this is almost never the case. Recall my earlier description of Engineering's possessive attitude toward QA, of its tendency to treat QA as its property, to be jealously guarded against any influence from outside Engineering. This attitude, and this institutional reality, is what has kept QA from developing such a strong interface with Product Management, since any attempt to do so is commonly seen as a weird kind of treason or unfaithfulness to Engineering. In this respect, the Engineering-QA relationship bears a sad, and sadly real, resemblance to My Fair Lady, though few of those involved are fair, or ladies.

The captive nature of QA; the lack of well-formed requirements; the inability to distinguish between verification and validation: all these circumstances conspire to create a situation where the best one can hope for from software testing is some form of adequate code coverage. Indeed, code coverage is a quality criterion often invoked by engineers, and it
is not surprising, in light of the above, that this would be their default standard. However, code coverage is also an inadequate quality standard, for two reasons. First, and obviously, code coverage tells us nothing about requirements coverage. Code coverage belongs to verification, not validation, and cannot take its place. Second, the whole notion of code coverage is every bit as ambiguous as the criterion of bug metrics. Though it sounds like a simple idea, the code coverage criterion raises more questions than it answers. Chief among them is: what do you mean by coverage? How do we know when code has been adequately covered? Has code been covered when it has been run once? Twice? Three times? In which environments? In what temporal order in relation to other code modules? According to which functional trajectories? There are no ready answers to any of these questions.

Relying on the code coverage criterion is a little like thinking you have tested the quality of the plumbing system in your new house by flushing all the toilets. Certainly, it's a useful thing to do, and it may ease your mind about each individual toilet. But you still don't know anything about the quality of the system as such. You will notice that this is the same question, and the same failing, that bedevilled the bug metrics criterion. Neither criterion provides a verifiable notion of adequacy, and therefore fails as a criterion for quality. [13]

[13] This is not to say that running each line or module of code is not a useful basic test at the engineering stage. One hopes an engineer has done this before submitting any code for test. But this procedure does not, by itself, constitute quality assurance.
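The contrast between code coverage and requirements coverage can be made concrete with a small sketch. What follows is a minimal illustration in Python, not drawn from any real project; the requirement IDs, test names, and numbers are all hypothetical. It demonstrates the point argued above: a test suite can exercise plenty of code (verification) while leaving whole requirements unvalidated (validation).

```python
# A sketch of requirements coverage as something distinct from code coverage.
# All requirement IDs and test names below are hypothetical illustrations.

REQUIREMENTS = {
    "REQ-1": "Product imports files of up to 2 GB",
    "REQ-2": "Product runs on both Windows and Linux",
    "REQ-3": "Search returns results in under 2 seconds",
}

# Each test declares which requirements it validates, if any.
# A test with an empty list performs verification only: it checks that
# the code behaves as Engineering expects, but validates no requirement.
TESTS = {
    "test_import_small_file": [],           # verification only
    "test_import_2gb_file":   ["REQ-1"],    # validation of REQ-1
    "test_linux_startup":     ["REQ-2"],    # validation of REQ-2
}

def requirements_coverage(tests, requirements):
    """Return the fraction of requirements validated by at least one
    test, plus the set of requirements that no test touches at all."""
    validated = {req for reqs in tests.values() for req in reqs}
    uncovered = set(requirements) - validated
    return len(validated) / len(requirements), uncovered

fraction, uncovered = requirements_coverage(TESTS, REQUIREMENTS)
print(f"Requirements coverage: {fraction:.0%}")   # 67%
print(f"Never validated: {sorted(uncovered)}")    # ['REQ-3']
```

Every one of these tests could pass, and a line-coverage tool could report a flattering number, while REQ-3 is never examined at all. That gap is exactly what bug metrics and code coverage, by themselves, cannot reveal.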

The Parent Trap


So far we have determined two things about the nature of effective quality assurance. First, that assuring product quality consists of demonstrating, in some systematic fashion, conformance to the product requirements; and second, that in order to do this, we must import into our definition of quality assurance the distinction between verification and validation, between knowing that the code behaves the way the engineers think it should, and knowing that the code written by the engineers meets the requirements defined by Product Management. These are two large and important steps in our inquiry, steps that bring us to the threshold of an even more fundamental truth about product quality.

The easiest way to uncover this truth is, paradoxically, to turn our attention away from the subject of quality assurance itself, and examine in more detail the corporate financial imperatives it must co-exist with and adapt to on a daily basis. This is in itself a fairly radical step, since one can read books on software quality assurance by the score without ever stumbling across a serious analysis of this topic. Many books on the subject are written as though software products were developed by a clandestine Communist society located in Silicon Valley, with outposts in Washington D.C. and Cape Canaveral, a society from which the imperatives of quarterly financial results have been banished, like girls from a treehouse. Consequently, many software process books have an unreal quality about them, however sensible they may be on any specific topic. Yet, as any rank and file quality assurance person can tell you, these financial realities are the reality of quality assurance, the immovable objects that secretly define QA's
native habitat and its survival strategies within it. For is it not true that the perennial software quality assurance dilemma is as follows: the end of the company's fiscal quarter is approaching, which is also, incidentally, the committed release date of the product. The company must make this ship date, or its stock will suffer immediate and perhaps irreparable harm. Yet the quality of the product is not good enough to ship. What do you do?

This is the situation that every QA manager and project lead faces throughout their careers. Indeed, it is the situation faced by almost every product delivery team, time and time again. Yet, unhelpfully, this is a situation that is usually abstracted out of many software process improvement recipes. This is not surprising when we realize that many software processes were developed either by the US Government or by very large companies developing software for internal use, not for external customers. Because of this quirk of history, the financial demands of a publicly held company are treated as extraneous to problems of process, or, at best, as demands that will disappear if process is done correctly. Neither is true.

Nothing about the way publicly held companies raise and maintain their stock prices is going to change any time soon. There is going to be no magical cancelling of the quarterly financial reporting calendar, no miraculous wave of compassion and fiscal forbearance on the part of impatient investors. Because this is true, we must accept in turn that, at least for publicly held companies, these financial imperatives must be placed at the heart of any real understanding of how to assure product quality. Yet accepting this also turns out to be the key that unlocks the door to a real understanding of product quality.

How can this be? The existence of a hard date, one that cannot be changed without catastrophic consequences and, furthermore, one that was selected in the first place for purely financial reasons, is always presented as the one thing software development people wish they could do away with. One hears this complaint time and time again from both engineers and QA staff. On the face of it, this complaint is difficult to fault.

Product delivery management is often caught in a cruel game where, at the beginning of a project, they are asked by upper management to provide detailed project schedules and firm product release dates. These are then almost immediately thrown back at product delivery management with the curt demand, "You must make this particular date, period." Product delivery management then dutifully prunes the feature set in order to meet the hard date, and resubmits their schedule. Upper management again throws it back, this time with Sales baying at the moon in the background, demanding that the date be met with the originally stipulated feature set (otherwise there is no reason to have a new release). Oh, and by the way, with no new staff. And for another thing, with best in class quality, for sure. "Just do it" isn't just the motto of a sport shoe company.

This creates a situation where everyone in product delivery is sent rushing pell-mell, like Keystone Cops, to make the date. Everyone just does it. It is in the midst of this frenzy of activity for its own sake that any rational project tracking practices tend to get cast aside, and the project slides into a black hole of unknowability. It goes dark from an in-process perspective, only to emerge at the very end, in whatever shape it happens to be in, though this is itself usually
the Big Unknown, to be determined later by customers. However, the date is met and the company is saved.

It should be obvious, based on my previous analysis of the Standard System, that a project's slipping as quickly as possible into an unknown state benefits everyone involved. There is no upside to transparency if all it will reveal is that things are going wrong to an upper management group that doesn't want to hear about it. What good does it do for you to be able to know that the project is weeks late? Or that quality coverage is going to be spotty at best? Or that major features are only going to be half-coded by the ship date? Who really wants to hear that news? More importantly, who really wants to deliver it? "No one" is the answer to both questions.

Yet even from the point of view of the product delivery team, it is clearly dysfunctional to pretend that the hard date problem is a nuisance to be done away with, for the simple reason that the ways in which product delivery responds to that problem are inherently unsuited to coping with it successfully. Ignoring the problem on an operational level has led, and continues to lead, to purely political coping strategies that only exacerbate the problem, both politically and operationally. The operational problems are obvious. The political ramifications of a purely political accommodation with the hard date problem are not quite so obvious, but they are nevertheless very serious. Ask yourself: why does upper management constantly insist on a hard date, and one that seems to have little to do with the necessities and realities of the product development process itself? Are they just being dictatorial and clueless, which is the standard diagnosis from the product delivery side?

Perhaps. Then again, perhaps not. A more accurate analysis of the hard date phenomenon leads us to the simple logic of expectations. If your product delivery group consistently fails to make dates; if it consistently fails even to be able to provide key commitment points within the development schedule, [14] what possible confidence can any executive have in the preferred date options offered by product delivery? And if they can have no confidence in the product delivery group's ability to generate its own reliable dates and milestones, what do they have to lose by providing these through pure stipulation?

If you have children, you should be familiar with this logic. Suppose you want your child to mow the lawn. Your child responds that they will get to it just as soon as they are done with their homework, and will have it done by the end of the day. You wake up the next morning, and the lawn greets you in all its unmowed glory. You ask your child why it's not mowed. Your child says, "Well, uh, my homework was harder than I expected, so I had to work later, and I didn't want to mow the lawn late at night cuz that would wake up the neighbors."

OK. That is at least a rational explanation. You sense your child might be playing you, but for now you'll give them the benefit of the doubt. You tell them to please mow the lawn by sunset today (you have picked up on the fact that you need to make your deadline more precise, the better to remove another excuse for not meeting it). At this point, you're willing to take a small amount of the blame on yourself for not taking certain realities into account.

[14] Surprising as it may seem, engineering staffs often refuse to make hard and fast commitments to internal milestones, i.e., a commitment to deliver a particular subset of functionality at a particular date or date range. Often what you hear is, "We'll have everything done at the end, and then we'll be done." Not very confidence inspiring.

You come home from work the next day, just after sunset, and you see the lawn is not mowed. This time you confront your child a little more strongly. "Why isn't the lawn mowed?" you ask. "Well, uh, my homework was pretty heavy today too, but I still tried to mow the lawn like you asked. But it had just been watered by the sprinkler system, and I knew that would clog up the mower. So I'll do it tomorrow morning." You glower at your child, now convinced you are being played. Why wasn't the lawn mowed before the sprinklers came on? Why the strategic delay until it was no longer feasible?

OK, you've had it now. You sit your child down and say, "Look, I don't care what it takes, I don't care what the consequences are to your schoolwork, but I want that lawn mowed by 3 pm tomorrow, or else. No questions, no excuses." Your child becomes very alarmed. "But, but, I can't do it tomorrow! I have soccer practice tomorrow afternoon! If I miss that I might get cut from the team!" Earlier in the history of your lawn mowing crusade, you might have been sensitive to this very real obstacle. But you've been played too much to sympathize now. Too many obviously phoney excuses have been thrown at you, and at this point you really don't care. In fact, you realize that having to suffer a little as a result of laziness might make your child more willing to do your bidding the first time.

This is the basic process in software companies. Consistent failure, over time, to set expectations correctly and fulfill them leads to the imposition of arbitrary deadlines by the powers that be. These arbitrary deadlines may be obviously problematic in themselves, but you've lost all credibility, and so no one wants to hear about it now. In this respect, the relation between upper
management and their product delivery organizations is exactly like that between an impatient parent and a recalcitrant child. Perhaps the answer is to grow up?

Let's take a novel approach to the hard date problem. Instead of bemoaning its existence, let's ask ourselves what things look like if we accept it as an unalterable aspect of software development. Let's go further and ask ourselves what our work would look like if we defined it completely in relation to the reality of the hard date, instead of trying to define around it, or thinking that we could do our jobs right if only the hard date problem didn't exist. If we do that, we will find that things become interesting fairly quickly.

What emerges from this thought experiment in the case of product quality is a rather shocking discovery: quality is a major project variable. This may seem like a startling statement, but really, how could it be otherwise? The amount of quality you're going to get is going to vary according to all the same factors that affect every other major project parameter: expertise, staffing level, time and budget. We have no problem, for example, recognizing that a product's feature set is a project variable, depending on how many programmers you have, how much time you give them, how much money for equipment they have, etc. Yet, as I have pointed out previously, the Standard System resists admitting that quality is also and equally a variable. Quality can only be a platonic ideal, one that never varies, regardless of how every other project parameter may vary. In this respect it is like virginity or coolness: you either have it or you don't; there is no in-between.

Yet, regardless of the political value of adhering to the Standard System's view of quality, the hard truth remains that quality is a major project variable. The
traditional triangle of trade-offs (time, resources and feature set) so beloved of project managers is actually a rectangle. A project team must be able to decide what level of quality is acceptable for any given project, and must accept that that level comes at a cost, which must also be weighed in the balance. This is not so radical as it might seem at first. In practice, every seasoned product delivery person understands that, beyond a certain point, making gains in product quality becomes cost prohibitive, and wont really have a noticeable effect on customer satisfaction or sales. Everyone knows that there is a sweet spot for product quality, below which you have catastrophe and above which you have needless delays with no financial benefit to the company. Yet no one really wants to factor this practical wisdom into their projects, in an explicit and actionable way, from the very beginning. They prefer to let this reality remain tacit, and formally latent until the very, very last stages of the project, when the final bugs need to be categorized and adjudicated. Why? Because it is heresy in many organizations to openly discuss the level of quality the whole project team is going to commit to, since anything less than best in class quality is anathema to upper management. Nevertheless, as with the hard date problem in general, the consistent insistence by upper management on the most meaningless of quality criteria also has a logical basis, even if that logic is often obscured by the political value of this meaningless. As with dates, product delivery groups are rarely, if ever, capable of providing meaningful measures of product quality, and, therefore, are unable to provide meaningful decision support data on product quality for upper management to factor into their judgments. Quality is a black hole of useful
information, and consequently quality by stipulation becomes, again, the only logical choice left to upper management. What may seem like willful childishness on the part of upper management is actually a logical result of product delivery's inability to make meaningful commitments, provide meaningful data, and generate meaningful measurements of key project parameters, of which quality is one. By failing to do so, product delivery plays into the hands of upper management who, for political reasons, don't necessarily want to see this data in the first place. The operational failings of each part of the company compound the political opportunism of every other, which in turn circles back and further damages operational efficiency. However, the circle can be broken. The breaking begins with the realization that if quality is a variable whose value can be meaningfully defined, then one of QA's key responsibilities is the quantification of quality at the beginning of a project. This implies in turn the ability to express product quality in terms that can themselves be quantified in the first place. We see that a properly functioning QA group must be capable of describing possible quality outcomes in an analytical fashion, so that each outcome can be meaningfully evaluated in relation to the business goals of the project. In other words, one of the key responsibilities, and one of the key skills, of QA is the modelling of potential quality outcomes. It must be able to present alternative scenarios of quality coverage, and describe the pros and cons of each in relation to other major project parameters (schedule compliance, resource levels, projected in-field quality, etc.).
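To make this concrete, here is a minimal sketch in Python of what modelling alternative quality outcomes might look like. Every name and number in it is invented for illustration; it is a toy under stated assumptions, not a prescription.

```python
from dataclasses import dataclass

# A toy model of quality outcomes. Each requirement has a theoretical
# quality space (all the validation that could be done, given world
# enough and time); a scenario commits to covering some fraction of it.
# All names and numbers here are invented for the example.
@dataclass
class Requirement:
    name: str
    quality_space: int   # test cases that would fully validate it
    weight: float        # business importance, 0.0 to 1.0

def scenario_coverage(requirements, planned):
    """Weighted coverage for a proposed scenario, where `planned` maps
    requirement name -> number of tests we commit to running."""
    covered = sum(
        r.weight * min(planned[r.name], r.quality_space) / r.quality_space
        for r in requirements
    )
    return covered / sum(r.weight for r in requirements)

reqs = [
    Requirement("login",    quality_space=120, weight=1.0),
    Requirement("checkout", quality_space=300, weight=0.8),
    Requirement("reports",  quality_space=200, weight=0.3),
]

# Two alternative quality outcomes to put in front of management.
lean = {"login": 120, "checkout": 100, "reports": 20}
deep = {"login": 120, "checkout": 300, "reports": 150}
print(f"lean plan: {scenario_coverage(reqs, lean):.0%}")
print(f"deep plan: {scenario_coverage(reqs, deep):.0%}")
```

A real model would also weigh cost and schedule against each scenario, but even this toy lets two proposed outcomes be compared in quantifiable terms rather than argued about in the abstract.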


This means in turn that QA must be able to conceptualize quality coverage; it must have a way of breaking down quality coverage into coherent units and sub-units, such that the meaning of coverage can be precisely known for any proposed quality outcome. Here is where our definition of product quality as conformance to requirements provides the information QA needs to effectively model quality outcomes. If quality is validation of requirements, then quality coverage can be defined as the depth and breadth to which requirements are validated. Thus, the way QA models quality is by defining the quality space for any given requirement,15 and proposing how much of the theoretical quality space needs to be covered in order to assure the necessary level of quality. In other words, quality modelling can be defined as fitting various levels of actual quality coverage over the background of the total theoretical coverage that could be obtained given world enough and time, and providing forecasts of the likely result on other key project parameters for any given quality optimization. If we understand this, then we are able to give more precise definition to what QA's true area of expertise is, at least in part. It is the ability to model quality accurately and in a way that provides decision support for the project's quality commitment. This is a very different understanding of what QA is and does. Quite different from its reduction either to purely technological knowledge or to pushing buttons and finding bugs. But this does not exhaust QA's unique responsibilities and skill set. Along with modelling final quality outcomes as a basis for the project's overall quality
15. I show how this is done in Part II.

commitments, QA must also be able to monitor and report out effectively on in-process quality throughout the development effort. That is to say, it must be able to know with certainty whether the quality outcome so modelled and agreed upon is in fact being achieved on a weekly basis. This is a very different emphasis from what one normally encounters. Recall that in the Standard System, product quality is only determined, and therefore knowable, at the end of a project. It is the toy surprise rattling around inside the empty box of Cracker Jacks. Because of this historical orientation, it may at first seem puzzling what benefit is gained by tracking quality so minutely, and at phases of a project when such data may seem worthless. The answer is, of course, the very hard date problem that we have been considering. It is the very arbitrariness of the ship date, and the ease with which it can magically shift backward in time,16 that makes it necessary for the Quality Assurance function to be able to provide a ship/no ship recommendation on any given day, and the data to back that up, not just on the final day of the project. If QA can do that, then it can provide rational analysis of the risk of shipping on any given day, analysis that can be backed up with real data and well-supported projections of customer satisfaction in the field. Here we get to the heart of the difference between software testing, as it is currently practiced, and quality assurance. Software testing is task-based. Quality assurance is goal-based. Software testing is meant to assure a particular outcome at the end of a project (though it usually fails to do this). Quality

16. This is not an uncommon occurrence, especially if a large new customer deal suddenly pops up, one that can only be finalized if an earlier release date is promised.

Assurance provides a continuous window into in-process quality throughout the lifespan of a project. Software testing can produce data, but that data cannot be inherently related to a meaningful analysis of overall risk, and thus cannot provide a basis for effective decision support. Quality assurance can do both. OK, but how?

Spin Cycle
The first part of the answer to this question consists of referring back to what was said earlier about the definition of product quality: demonstrated conformance to product requirements. If we accept this to be the case then, logically, the goal of Quality Assurance must be to track, analyze and report out on the validation of the requirements defined for the product under test. This insight leads in turn to the equally logical conclusion that, therefore, QA's tests themselves must be directly traceable back to specific product requirements, such that the success or failure of any given set of tests means the validation or non-validation of a specific requirement. If we accept this, then we can see how the outcomes of QA's testing can become accurate information on product quality, from a day-to-day perspective. Test pass/fail data can be translated back into validation of specific requirements, which, when correlated with the test schedule itself (i.e., how many tests have actually been run), can provide an overall picture of both quality state and trends. Let's pause for a moment to reflect on how different this is from normal software testing in the Standard System.


First, and foremost, in my definition product quality is seen as something positive in and of itself, the concrete manifestation of specific capabilities, whereas in the Standard System it is seen merely as the absence of problems or bugs. Second, in my definition QA's tests must be traceable back to requirements, whereas in the Standard System, if traceability exists at all (and it rarely does), tests trace back to code structures and systems, not to requirements. Third, in my definition product quality is a persistent product attribute that exists, and whose level can be measured, throughout the project lifecycle, whereas in the Standard System it is defined as an outcome we can only know at the end of the project cycle. Fourth, in my definition the main task of quality assurance is analysis, whereas in the Standard System it is execution. Of course tests are still executed in a true quality assurance effort, but the results of these tests can be related to an overall analytical framework that can actually tell you, in quantifiable terms:

1. What your operational definition of achievable quality actually is.

2. At any given point in the process, how far you've come with respect to your operational definition of product quality, and

3. How far you still need to travel to reach that goal.

If you have a system and a methodology in place that gives you all three types of knowledge listed above, then the hard date problem is transformed, because you have placed that problem at the center of your quality assurance
efforts from the very beginning. In a software testing culture, you just test as much as you can until the date arrives, and then you hope for the best. When asked whether the product is ready to ship, you may feel that testing has been inadequate, but you can't really articulate how much more testing would be necessary to satisfy your doubts. You can't say which specific requirements have been validated and which have not; you can only say how many open bugs you have. Since you have no firm grasp of how testing maps back to requirements, it's impossible for you to say how much more testing may be necessary, nor can you say how n amount of extra testing will yield x amount of extra quality, and in what product areas. Because of this, you hesitate to press the issue. You satisfy yourself with saying something like, "Well, we'd like to do more testing, but we think it's OK now," and cross your fingers. In a properly run quality assurance environment, you would know the answers to all these questions, and you would be able to provide real decision support data with respect to product quality. Because you could, you would face two scenarios, either of which you could live with. In the first scenario, upper management reviews your quality assessment, accepts the quality risk profile you have provided, and decides to ship the product. If the product in the field then goes on to manifest quality problems in the areas you indicated required more testing, no one will be surprised and you will not be blamed. The executives will have released the product knowing they were likely to have problems in those areas, and Technical Support and Sales will also have been briefed and trained to respond to them, prior to the product's ship.
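As an illustration of the kind of decision support described above, here is a minimal Python sketch. The requirement names, test results, and the straight-line projection are all invented for the example; a real projection model would need to be considerably less naive.

```python
# Invented example: each test traces to a requirement, so pass/fail
# results roll up into requirement validation, and the weekly
# validation trend can be (naively) extrapolated to estimate how much
# more testing time a target quality level would cost.
results = [
    # (test id, requirement it validates, latest outcome)
    ("T-01", "REQ-LOGIN",    "pass"),
    ("T-02", "REQ-LOGIN",    "pass"),
    ("T-03", "REQ-CHECKOUT", "pass"),
    ("T-04", "REQ-CHECKOUT", "fail"),
    ("T-05", "REQ-REPORTS",  "not run"),
]

def requirement_status(results):
    """A requirement counts as validated only when every test tracing
    to it has passed; anything else leaves it unvalidated."""
    by_req = {}
    for _, req, outcome in results:
        by_req.setdefault(req, []).append(outcome)
    return {req: all(o == "pass" for o in outs) for req, outs in by_req.items()}

def weeks_to_target(validated_per_week, total_requirements, target_fraction):
    """Extra weeks needed to validate the target fraction of
    requirements, assuming the recent pace holds (deliberately naive)."""
    gained = validated_per_week[-1] - validated_per_week[0]
    velocity = gained / (len(validated_per_week) - 1)  # requirements/week
    if velocity <= 0:
        return None  # no recent progress: target unreachable at this pace
    remaining = total_requirements * target_fraction - validated_per_week[-1]
    return max(0.0, remaining / velocity)

status = requirement_status(results)
print(f"validated: {sum(status.values())}/{len(status)} requirements")
history = [10, 14, 18, 22]  # cumulative requirements validated per week
print(f"extra weeks to 90% of 40 requirements: "
      f"{weeks_to_target(history, 40, 0.9)}")  # 3.5
```

The point is not the arithmetic but the shape of the answer: "n more weeks buys x more validated requirements, in these areas" is a statement management can act on, where "we'd like more testing" is not.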

This first scenario illustrates one of the major differences between quality assurance and software testing, and their respective outcomes. There is a huge difference between shipping a product with bright assurances of best in class quality, only to have it manifest problems in the field that come as complete surprises, and shipping a product where the same problems manifest in the field, but you knew they would beforehand. Here is where we see how quality assurance is inherently analytical. It keeps product quality in a known state throughout the process, and can give an accurate assessment of the level of achieved product quality, even if that level of quality may not be optimal from other perspectives. In the software testing model, by contrast, the controlling idea is that the purpose of the QA group is to achieve one, and only one, outcome: best in class quality, while being unable to tell you exactly what that is in a non-circular way. The other scenario is that upper management looks at your analysis and quality risk profile, and decides that the risk to customer satisfaction and sales is too great. Because you have provided a detailed quality breakdown by requirement and capability, upper management has the freedom to select specific areas of the product to be fine-tuned for a higher quality level, and is able to know how much more time and money reaching that level will cost them. The result is that the team is given a specific amount of extra time to achieve a specific amount of extra quality. If your team delivers on that promise, then you are in the clear. This second scenario illustrates another important difference between quality assurance and software testing. In quality assurance, quality and effort
metrics can be correlated and projected with some degree of precision. In software testing this is not possible, creating a situation where the QA function is unable to say exactly how much time it will take to get to a certain level of quality, or to know that such a level of quality is unattainable within financially responsible parameters. Since QA cannot do this in a software testing environment, upper management has no reason to listen to a no ship recommendation. If you are able to provide this kind of risk analysis and decision support, with the data to back it up, you will see a huge change in the attitude of upper management towards ship dates. You will suddenly discover that these blind, irrational, dictatorial authority figures are actually quite willing to make hard decisions about product ship dates, if you can provide them with accurate and timely data on the risk associated with those decisions. You will discover that their reluctance in the past to listen to your inarticulate pleas to delay shipment of the product was due to the unlimited risk you were asking them to take on your behalf, with no reliable assurance that everything would not blow up in their faces. Remember, to executives risk is radiation. You can't ask them to walk into a reactor without the proper shielding. Of course, the analysis above is somewhat idealized, since it assumes that the engineering function is also capable of quantifying, projecting and delivering on its commitments, which is often not the case. There may be intractable, systemic bugs that engineering cannot diagnose or solve, in which case all the QA forecasting in the world won't make that problem disappear. Engineering may have written spaghetti code that they are afraid to touch, for fear of introducing
more bugs caused by the fixes. They may have no grasp of how to deal with problems in emergent properties of the product, such as performance and security. Inherent conflicts in the product's requirements may only have emerged at a late date in the project, and resolving them might require a whole new project. Etc. As I pointed out earlier, QA can't change the quality of the requirements or the coding. But it can, potentially, know how good the code is and how resilient it is. Even in a situation where Engineering is unable to deliver a finished product, QA, in a quality assurance environment, will be able to know that, and know it fairly quickly. It can know this because it will be able to see that certain requirements just aren't being validated, even though new iterations of code are being submitted to QA over and over again. This in turn will allow it to identify systemic code problems that are just being patched, not solved. Which means, in turn, that a properly constituted and run QA department has the ability to see that a project is churning, that is, trapped in a loop from which it cannot extricate itself, just as the churn is beginning. This is important, because one of the reasons so many projects are so very late is that the project team refuses to acknowledge they are trapped in such a loop, often for months at a time. Every week, the code will be ready the following week. This promise never changes as the weeks flow merrily by. The project is always one week away from being done, forever. Everyone in the project knows this is occurring, yet there is a strong psychological taboo against acknowledging it. Moreover, the perennial optimism of engineers does not allow them, usually, to acknowledge they have reached an impasse. During the height
of project churn, bug counts may actually fall, since there is nothing new to test, even though validation rates are dismally low. This is why the ability to track validation of requirements provides a powerful and psychologically objective tool for determining when a project has entered Groundhog Day territory. Absent this data, a project will run on hope right off a cliff. Moreover, since it is usually during the churn phase that a project team loses the confidence of upper management, avoiding such a phase, or exiting it as quickly as possible, is a key way that project teams can retain the confidence of upper management, and thus find a meaningful voice in ship decisions. This, then, is the theoretical basis for my view of Quality Assurance. I have tried to present a comprehensive analysis not just of quality assurance and its failings, but of the entire institutional and financial system that generates and supports them. I have outlined, at a high level, what the basic principles of quality assurance really are (and are really not), and have shown how these principles would affect the most common problems of software development. I know in doing so I have left many detail questions unanswered. I have skipped this level of detail because I did not want to get bogged down in operational minutiae at the very beginning. I wanted to give you a chance to see my philosophy, in all its parts and ramifications, with crystal clarity. Now that I have done that, however, I am happy to devote the second half of this book to a detailed exposition of how to apply this view of Quality Assurance, concretely, to the work of software development. In Part II I will, I hope, flesh out how such principles would work in practice; describe what institutional and logistical realities they entail; and provide specific examples of
my theories applied to concrete, everyday problems of quality assurance. In other words, if you are not a QA person, you may want to stop reading at this point. Your plane flight, after all, may be a short one. In any event, let us now turn our attention from ideas to realities, from principles to details, and from hope to happiness.
