
QUALITY ENGINEERING MODULE (QEM)

JQB 10103 INTRODUCTION TO QUALITY


MOHD SAIFUL IZWAAN BIN SAADON
BSc (Hons), MSc (UKM)

DEPARTMENT OF QUALITY ENGINEERING UNIVERSITI KUALA LUMPUR MALAYSIAN INSTITUTE OF INDUSTRIAL TECHNOLOGY

2009

CONTENTS
CHAPTER 1  Quality Perspective
1.1  Quality Definition  5 - 6
1.2  Quality in US  6 - 8
1.3  Quality in Japan  8 - 11

CHAPTER 2  Quality Guru
2.1  W. Edwards Deming  13 - 19
2.2  Joseph M. Juran  20 - 25
2.3  Kaoru Ishikawa  26 - 28

CHAPTER 3  Quality Improvement
3.1  Kaizen  30 - 33
3.2  5S  34 - 37
3.3  TQM  38 - 42
3.4  7 Tools Towards Quality Improvement  43 - 73
3.5  7 New Tools Towards Quality Improvement  74 - 78

CHAPTER 4  Quality System
4.1  ISO 9000  81 - 94
4.2  GMP  94 - 97
4.3  Halal Certificate  97 - 99

CHAPTER 5  Quality Awards
5.1  Deming Prize  101 - 107
5.2  MBNQA  108 - 115
5.3  PMQA  115 - 122

CHAPTER 6  Quality Inspection
6.1  Sampling Techniques  124 - 155
6.2  Inspection Procedure  156 - 174

CHAPTER 1

Quality Perspective

1.1 Quality Definition

1. W. Edwards Deming (1948): Quality is determined by the customer.

2. W. Edwards Deming (1952): Quality means that everyone does what they have agreed to do, and does it right on the first attempt.

3. Kaoru Ishikawa (1952): Quality means meeting customer satisfaction.

4. Joseph M. Juran (1954): Quality is suitability for its purpose and usage.

5. Feigenbaum (1960): Quality is the product with the most advantages, made according to the customer's specification.

6. Crosby (1970): Quality is meeting and conforming to the rules and requirements.

7. In the end, the most important thing is that a product or a service must meet customer satisfaction.

1.2 Quality in US

The quality movement can trace its roots back to medieval Europe, where craftsmen began organizing into unions called guilds in the late 13th century.

Until the early 19th century, manufacturing in the industrialized world tended to follow this craftsmanship model. The factory system, with its emphasis on product inspection, started in Great Britain in the mid-1750s and grew into the Industrial Revolution in the early 1800s. In the early 20th century, manufacturers began to include quality processes in quality practices.

After the United States entered World War II, quality became a critical component of the war effort: bullets manufactured in one state, for example, had to work consistently in rifles made in another. The armed forces initially inspected virtually every unit of product; then, to simplify and speed up this process without compromising safety, the military began to use sampling techniques for inspection, aided by the publication of military-specification standards and training courses in Walter Shewhart's statistical process control techniques.

The birth of total quality in the United States came as a direct response to the quality revolution in Japan following World War II. The Japanese welcomed the input of Americans Joseph M. Juran and W. Edwards Deming and, rather than concentrating on inspection, focused on improving all organizational processes through the people who used them. By the 1970s, U.S. industrial sectors such as automobiles and electronics had been broadsided by Japan's high-quality competition. The U.S. response, emphasizing not only statistics but approaches that embraced the entire organization, became known as total quality management (TQM).

By the last decade of the 20th century, TQM was considered a fad by many business leaders. But while the use of the term TQM has faded somewhat, particularly in the United States, its practices continue. In the few years since the turn of the century, the quality movement seems to have matured beyond Total Quality. New quality systems have evolved from the foundations of Deming, Juran and the early Japanese practitioners of quality, and quality has moved beyond manufacturing into service, healthcare, education and government sectors.

1.3 Quality in Japan

The quality movement in Japan began in 1946 with the U.S. Occupation Force's mission to revive and restructure Japan's communications equipment industry. General Douglas MacArthur was committed to public education through radio. Homer Sarasohn was recruited to spearhead the effort by repairing and installing equipment, making materials and parts available, restarting factories, establishing the equipment test laboratory (ETL), and setting rigid quality standards for products (Tsurumi 1990). Sarasohn recommended individuals for company presidencies, like Koji Kobayashi of NEC, and he established education for Japan's top executives in the management of quality. Furthermore, upon Sarasohn's return to the United States, he recommended W. Edwards Deming to provide a seminar in Japan on statistical quality control (SQC).

Deming's 1950 lecture notes provided the basis for a 30-day seminar sponsored by the Union of Japanese Scientists and Engineers (JUSE) and provided the criteria for Japan's famed Deming Prize. The first Deming Prize was given to Koji Kobayashi in 1952. Within a decade, JUSE had trained nearly 20,000 engineers in SQC methods. Today Japan gives a high rating to companies that win the Deming Prize; they number about ten large companies per year. Deming's work has impacted industries such as those for radios and parts, transistors, cameras, binoculars, and sewing machines. In 1960, Deming was recognized for his contribution to Japan's reindustrialization when the Prime Minister awarded him the Second Order of the Sacred Treasure.

In 1954, Dr. Joseph M. Juran of the United States raised the level of quality management from the factory to the total organization. He stressed the importance of systems thinking that begins with product designs, prototype testing, proper equipment operations, and accurate process feedback. Juran's seminar also became a part of JUSE's educational programs. Juran provided the move from SQC to TQC (total quality control) in Japan. This included companywide activities and education in quality control (QC), QC circles and audits, and promotion of quality management principles.

By 1968, Kaoru Ishikawa, one of the fathers of TQC in Japan, had outlined the elements of TQC management:

quality comes first, not short-term profits
the customer comes first, not the producer
customers are the next process with no organizational barriers
decisions are based on facts and data
management is participatory and respectful of all employees
management is driven by cross-functional committees covering product planning, product design, production planning, purchasing, manufacturing, sales, and distribution (Ishikawa 1985)

By 1991, JUSE had registered over 331,000 quality circles with over 2.5 million participants in its activities. Today, JUSE continues to provide over 200 courses per year, including five executive management courses, ten management courses, and a full range of technical training programs. One of the innovative TQC methodologies developed in Japan is referred to as the "Ishikawa" or "cause-and-effect" diagram. After collecting statistical data, Ishikawa found that dispersion came from four common causes, as shown below.

Cause-and-effect diagram (Ishikawa 1982)

Materials often differ when sources of supply or size requirements vary. Equipment or machines also function differently depending on variations in their own parts, and they operate optimally for only part of the time. Processes or work methods have even greater variations. Finally, measurement also varies. All of these variations affect a product's quality. Ishikawa's diagram has led Japanese firms to focus quality control attention on the improvement of materials, equipment, and processes.

JTEC panelists observed statistical process control (SPC) charts, often with goal lines extending into 1995, in a few of the factories they visited in 1993. For example, at Ibiden, process control was apparent in its laminated process board manufacture, where there was extensive use of drawings and descriptions of the processes necessary to do the job. Companies that were competing for the Deming Prize made extensive use of such charts, and companies that had received ISO 9000 certification also posted the process information required for each machine. However, the panel was surprised at the relatively limited use of SPC charts within the factories visited. The Japanese believe that the greatest benefit occurs when defect detection is implemented within the manufacturing sequence, thus minimizing the time required for detection, maximizing return on investment, and indirectly improving product reliability.


CHAPTER 2 Quality Guru


2.1 W. Edwards Deming

a. History

William Edwards Deming (October 14, 1900 - December 20, 1993) was an American statistician, professor, author, lecturer, and consultant. Deming is widely credited with improving production in the United States during World War II, although he is perhaps best known for his work in Japan. There, from 1950 onward, he taught top management how to improve design (and thus service), product quality, testing and sales (the last through global markets) through various methods, including the application of statistical methods.

b. Deming philosophy synopsis

The philosophy of W. Edwards Deming has been summarized as follows: "Dr. W. Edwards Deming taught that by adopting appropriate principles of management, organizations can increase quality and simultaneously reduce costs (by reducing waste, rework, staff attrition and litigation while increasing customer loyalty). The key is to practice continual improvement and think of manufacturing as a system, not as bits and pieces."


In the 1970s, Dr. Deming's philosophy was summarized by some of his Japanese proponents with the following 'a'-versus-'b' comparison:

(a) When people and organizations focus primarily on quality, defined by the ratio

    quality = results of work efforts / total costs

quality tends to increase and costs fall over time.

(b) However, when people and organizations focus primarily on costs, costs tend to rise and quality declines over time.

c. The Deming System of Profound Knowledge

"The prevailing style of management must undergo transformation. A system cannot understand itself. The transformation requires a view from outside. The aim of this chapter is to provide an outside view, a lens, that I call a system of profound knowledge. It provides a map of theory by which to understand the organizations that we work in.

"The first step is transformation of the individual. This transformation is discontinuous. It comes from understanding of the system of profound knowledge. The individual, transformed, will perceive new meaning to his life, to events, to numbers, to interactions between people.

"Once the individual understands the system of profound knowledge, he will apply its principles in every kind of relationship with other people. He will have a basis for judgment of his own decisions and for transformation of the organizations that he belongs to. The individual, once transformed, will:

Set an example;
Be a good listener, but will not compromise;
Continually teach other people; and
Help people to pull away from their current practices and beliefs and move into the new philosophy without a feeling of guilt about the past."

Deming advocated that all managers need to have what he called a System of Profound Knowledge, consisting of four parts:

1. Appreciation of a system: understanding the overall processes involving suppliers, producers, and customers (or recipients) of goods and services (explained below);
2. Knowledge of variation: the range and causes of variation in quality, and use of statistical sampling in measurements;
3. Theory of knowledge: the concepts explaining knowledge and the limits of what can be known (see also: epistemology);
4. Knowledge of psychology: concepts of human nature.

Deming explained, "One need not be eminent in any part nor in all four parts in order to understand it and to apply it. The 14 points for management in industry, education, and government follow naturally as application of this outside knowledge, for transformation from the present style of Western management to one of optimization."

"The various segments of the system of profound knowledge proposed here cannot be separated. They interact with each other. Thus, knowledge of psychology is incomplete without knowledge of variation.

"A manager of people needs to understand that all people are different. This is not ranking people. He needs to understand that the performance of anyone is governed largely by the system that he works in, the responsibility of management. A psychologist that possesses even a crude understanding of variation, as will be learned in the experiment with the Red Beads, could no longer participate in refinement of a plan for ranking people.


The Appreciation of a system involves understanding how interactions (i.e. feedback) between the elements of a system can result in internal restrictions that force the system to behave as a single organism that automatically seeks a steady state. It is this steady state that determines the output of the system rather than the individual elements. Thus it is the structure of the organization, rather than the employees alone, which holds the key to improving the quality of output.

The Knowledge of variation involves understanding that everything measured consists of both "normal" variation due to the flexibility of the system and of "special causes" that create defects. Quality involves recognizing the difference in order to eliminate "special causes" while controlling normal variation. Deming taught that making changes in response to "normal" variation would only make the system perform worse. Understanding variation includes the mathematical certainty that variation will normally occur within six standard deviations of the mean.

The System of Profound Knowledge is the basis for application of Deming's famous 14 Points for Management, described below.

d. PDCA
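The module illustrates PDCA (Plan-Do-Check-Act) here with a figure of the cycle, which is not reproduced in this text. As a rough, hypothetical sketch only, the Python fragment below steps a toy process through a few Plan-Do-Check-Act iterations; the function names, the simulated defect rates and the 5% target are invented for illustration and are not part of Deming's formulation.

```python
# A minimal, illustrative Plan-Do-Check-Act loop (hypothetical example).
# "Doing" a plan yields a simulated defect rate, and "checking" compares it
# with a target before the change is adopted as the new standard.

import random

def plan(current_settings):
    # Plan: propose a small change to the current process settings.
    return {**current_settings, "inspection_level": current_settings["inspection_level"] + 1}

def do(settings):
    # Do: run a small-scale trial and measure an outcome (simulated here).
    return max(0.0, 0.10 - 0.02 * settings["inspection_level"] + random.uniform(-0.01, 0.01))

def check(defect_rate, target=0.05):
    # Check: compare the measured result with the target.
    return defect_rate <= target

def act(settings, ok):
    # Act: standardize the change if it worked, otherwise keep the old standard.
    return settings if ok else None

standard = {"inspection_level": 1}
for cycle in range(4):
    proposal = plan(standard)
    result = do(proposal)
    ok = check(result)
    adopted = act(proposal, ok)
    if adopted:
        standard = adopted
    print(f"cycle {cycle}: defect rate {result:.3f}, adopted={ok}")
```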


e. Deming's 14 points

Deming offered fourteen key principles for management for transforming business effectiveness. The points were first presented in his book Out of the Crisis.

1. Create constancy of purpose toward improvement of product and service, with the aim to become competitive and stay in business, and to provide jobs.
2. Adopt the new philosophy. We are in a new economic age. Western management must awaken to the challenge, must learn their responsibilities, and take on leadership for change.
3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.
4. End the practice of awarding business on the basis of price tag. Instead, minimize total cost. Move towards a single supplier for any one item, on a long-term relationship of loyalty and trust.
5. Improve constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease cost.
6. Institute training on the job.
7. Institute leadership. The aim of supervision should be to help people and machines and gadgets to do a better job. Supervision of management is in need of overhaul, as well as supervision of production workers.
8. Drive out fear, so that everyone may work effectively for the company.
9. Break down barriers between departments. People in research, design, sales, and production must work as a team, to foresee problems of production and in use that may be encountered with the product or service.
10. Eliminate slogans, exhortations, and targets for the work force asking for zero defects and new levels of productivity. Such exhortations only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the work force.
11. a. Eliminate work standards on the factory floor. Substitute leadership. b. Eliminate management by objective. Substitute leadership.
12. a. Remove barriers that rob the hourly worker of his right to pride of workmanship. b. Remove barriers that rob people in management and in engineering of their right to pride of workmanship.
13. Institute a vigorous program of education and self-improvement.
14. Put everyone in the company to work to accomplish the transformation. The transformation is everyone's work. "Massive training is required to instill the courage to break with tradition. Every activity and every job is a part of the process."

f. Seven Deadly Diseases

The Seven Deadly Diseases of management are:

1. Lack of constancy of purpose.
2. Emphasis on short-term profits.
3. Evaluation by performance, merit rating, or annual review of performance.
4. Mobility of management.
5. Running a company on visible figures alone.
6. Excessive medical costs.
7. Excessive costs of warranty, fueled by lawyers who work for contingency fees.

g. A Lesser Category of Obstacles

1. Neglecting long-range planning.
2. Relying on technology to solve problems.
3. Seeking examples to follow rather than developing solutions.
4. Excuses, such as "Our problems are different."


Dr. Deming's advocacy of the Plan-Do-Check-Act cycle, his 14 Points, and Seven Deadly Diseases have had tremendous influence outside of manufacturing and have been applied in other arenas, such as in the relatively new field of sales process engineering.


2.2 Joseph M. Juran

Joseph Moses Juran (December 24, 1904 - February 28, 2008) was a 20th-century management consultant who is principally remembered as an evangelist for quality and quality management, writing several influential books on those subjects. He was the brother of Academy Award winner Nathan H. Juran.

a. Early life

Juran was born to a Jewish family in 1904 in Brăila, Romania, and later lived in Gura Humorului. In 1912, he immigrated to America with his family, settling in Minneapolis, Minnesota. Juran excelled in school, especially in mathematics. He was a chess champion at an early age, and dominated chess at Western Electric. Juran graduated from Minneapolis South High School in 1920.

In 1924, with a bachelor's degree in electrical engineering from the University of Minnesota, Juran joined Western Electric's Hawthorne Works. His first job was troubleshooting in the Complaint Department. In 1925, Bell Labs proposed that Hawthorne Works personnel be trained in its newly developed statistical sampling and control chart techniques. Juran was chosen to join the Inspection Statistical Department, a small group of engineers charged with applying and disseminating Bell Labs' statistical quality control innovations. This highly visible position fueled Juran's rapid ascent in the organization and the course of his later career.

In 1926, he married Sadie Shapiro, and they subsequently had four children: Robert, Sylvia, Charles and Donald. They had been married for over 81 years when he died in 2008.

Juran was promoted to department chief in 1928, and the following year became a division chief. He published his first quality-related article in Mechanical Engineering in 1935. In 1937, he moved to Western Electric/AT&T's headquarters in New York City. As a hedge against the uncertainties of the Great Depression, he enrolled in Loyola University Chicago School of Law in 1931. He graduated in 1935 and was admitted to the Illinois bar in 1936, though he never practiced law.

During the Second World War, through an arrangement with his employer, Juran served in the Lend-Lease Administration and Foreign Economic Administration. Just before the war's end, he resigned from Western Electric and his government post, intending to become a freelance consultant. He joined the faculty of New York University as an adjunct professor in the Department of Industrial Engineering, where he taught courses in quality control and ran round-table seminars for executives. He also worked through a small management consulting firm on projects for Gillette, Hamilton Watch Company and Borg-Warner. After the firm's owner's sudden death, Juran began his own independent practice, from which he made a comfortable living until his retirement in the late 1990s. His early clients included the now-defunct Bigelow-Sanford Carpet Company, the Koppers Company, the International Latex Company, Bausch & Lomb and General Foods.


b. Works in Japan

The end of World War II compelled Japan to change its focus from becoming a military power to becoming an economic one. Despite Japan's ability to compete on price, its consumer goods manufacturers suffered from a long-established reputation of poor quality. The first edition of Juran's Quality Control Handbook in 1951 attracted the attention of the Japanese Union of Scientists and Engineers (JUSE), which invited him to Japan in 1952. When he finally arrived in Japan in 1954, Juran met with ten manufacturing companies, notably Showa Denko, Nippon Kōgaku, Noritake, and Takeda Pharmaceutical Company.[7] He also lectured at Hakone, Waseda University, Ōsaka, and Kōyasan. During his life he made ten visits to Japan, the last in 1990.

Working independently of W. Edwards Deming (who focused on the use of statistical quality control), Juran, who focused on managing for quality, went to Japan and started courses (1954) in quality management. The training started with top and middle management. The idea that top and middle management need training had found resistance in the United States. For Japan, it would take some 20 years for the training to pay off. In the 1970s, Japanese products began to be seen as the leaders in quality. This sparked a crisis in the United States due to quality issues in the 1980s.

c. Pareto Principle

It was in 1941 that Juran discovered the work of Vilfredo Pareto. Juran expanded the Pareto principle, applying it to quality issues (for example, 80% of a problem is caused by 20% of the causes). This is also known as "the vital few and the trivial many". In later years, Juran preferred "the vital few and the useful many" to signal that the remaining 80% of the causes should not be totally ignored.


d. Management Theory

When Juran began his career in the 1920s, the principal focus in quality management was on the quality of the end, or finished, product. The tools used were from the Bell system of acceptance sampling, inspection plans, and control charts. The ideas of Frederick Winslow Taylor dominated.

Juran is widely credited for adding the human dimension to quality management. He pushed for the education and training of managers. For Juran, human relations problems were the ones to isolate. Resistance to change, or in his terms cultural resistance, was the root cause of quality issues. Juran credits Margaret Mead's book Cultural Patterns and Technical Change for illuminating the core problem in reforming business quality. He wrote Managerial Breakthrough, which was published in 1964, outlining the issue.

Juran's vision of quality management extended well outside the walls of the factory to encompass non-manufacturing processes, especially those that might be thought of as service related. For example, in an interview published in 1997[9] he observed:

The key issues facing managers in sales are no different than those faced by managers in other disciplines. Sales managers say they face problems such as "It takes us too long...we need to reduce the error rate." They want to know, "How do customers perceive us?" These issues are no different than those facing managers trying to improve in other fields. The systematic approaches to improvement are identical. ... There should be no reason our familiar principles of quality and process engineering would not work in the sales process.

e. Juran's Trilogy

He also developed "Juran's trilogy", an approach to cross-functional management that is composed of three managerial processes: quality planning, quality control and quality improvement.


f. Transferring Quality Knowledge Between East and West

During his 1966 visit to Japan, Juran learned about the Japanese concept of quality circles, which he enthusiastically evangelized in the West. Juran also acted as a matchmaker between U.S. and Japanese companies looking for introductions to each other.


2.3 Kaoru Ishikawa

Kaoru Ishikawa (Ishikawa Kaoru; 1915-1989) was a Japanese university professor and influential quality management innovator, best known in North America for the Ishikawa or cause-and-effect diagram (also known as the fishbone diagram) that is used in the analysis of industrial processes.

a. Biography

Ishikawa was born in Tokyo, the oldest of the eight sons of Ichiro Ishikawa. In 1939 he graduated from the University of Tokyo with an engineering degree in applied chemistry. His first job was as a naval technical officer (1939-1941); he then moved on to work at the Nissan Liquid Fuel Company until 1947. In 1947 Ishikawa began his academic career as an associate professor at the University of Tokyo. He later undertook the presidency of the Musashi Institute of Technology in 1978.

In 1949, Ishikawa joined the Japanese Union of Scientists and Engineers (JUSE) quality control research group. After World War II, Japan looked to transform its industrial sector, which in North America was then still perceived as a producer of cheap wind-up toys and poor quality cameras. It was his skill at mobilizing large numbers of people towards a specific common goal that was largely responsible for Japan's quality-improvement initiatives. He translated, integrated and expanded the management concepts of Dr. Deming and Dr. Juran into the Japanese system.


After becoming a full professor in the Faculty of Engineering at the University of Tokyo (1960), Ishikawa introduced the concept of quality circles (1962) in conjunction with JUSE. This concept began as an experiment to see what effect the "leading hand" (gemba-cho) could have on quality. It was a natural extension of these forms of training to all levels of an organization (the top and middle managers having already been trained). Although many companies were invited to participate, only one company at the time, Nippon Telephone & Telegraph, accepted. Quality circles would soon become very popular and form an important link in a company's Total Quality Management system. Ishikawa would write two books on quality circles (QC Circle Koryo and How to Operate QC Circle Activities).

Among his efforts to promote quality were the Annual Quality Control Conference for Top Management (1963) and several books on quality control (the Guide to Quality Control was translated into English). He was the chairman of the editorial board of the monthly Statistical Quality Control. Ishikawa was also involved in international standardization activities. 1982 saw the development of the Ishikawa diagram, which is used to determine root causes.

b. Quality Contributions

User Friendly Quality Control
Fishbone Cause and Effect Diagram - Ishikawa diagram
Implementation of Quality Circles
Emphasized the 'Internal Customer'
Shared Vision

c. Awards and recognition

1972 American Society for Quality's Eugene L. Grant Award


1977 Blue Ribbon Medal by the Japanese Government for achievements in industrial standardization
1988 Walter A. Shewhart Medal
1988 Awarded the Order of the Sacred Treasure, Second Class, by the Japanese government.

d. Ishikawa Diagram


CHAPTER 3 Quality Improvement


3.1 Kaizen

Kaizen (Japanese for "improvement") is a Japanese philosophy that focuses on continuous improvement throughout all aspects of life. When applied to the workplace, kaizen activities continually improve all functions of a business, from manufacturing to management and from the CEO to the assembly line workers. By improving standardized activities and processes, kaizen aims to eliminate waste (see Lean manufacturing). Kaizen was first implemented in several Japanese businesses during the country's recovery after World War II, including Toyota, and has since spread to businesses throughout the world.

a. Introduction

Kaizen is a daily activity, the purpose of which goes beyond simple productivity improvement. It is also a process that, when done correctly, humanizes the workplace, eliminates overly hard work ("muri"), and teaches people how to perform experiments on their work using the scientific method and how to learn to spot and eliminate waste in business processes. The philosophy can be defined as bringing the thought process back into the automated production environment dominated by repetitive tasks that traditionally required little mental participation from the employees.

People at all levels of an organization can participate in kaizen, from the CEO down, as well as external stakeholders when applicable. The format for kaizen can be individual, suggestion system, small group, or large group. At Toyota, it is usually a local improvement within a workstation or local area and involves a small group in improving their own work environment and productivity. This group is often guided through the kaizen process by a line supervisor; sometimes this is the line supervisor's key role. While kaizen (at Toyota) usually delivers small improvements, the culture of continual aligned small improvements and standardization yields large results in the form of compound productivity improvement. Hence the English usage of "kaizen" can be "continuous improvement" or "continual improvement".

This philosophy differs from the "command-and-control" improvement programs of the mid-twentieth century. Kaizen methodology includes making changes and monitoring results, then adjusting. Large-scale pre-planning and extensive project scheduling are replaced by smaller experiments, which can be rapidly adapted as new improvements are suggested. In modern usage, a focused kaizen that is designed to address a particular issue over the course of a week is referred to as a "kaizen blitz" or "kaizen event". These are limited in scope, and issues that arise from them are typically used in later blitzes.


b. History

In Japan, after World War II, American occupation forces brought in American experts in statistical control methods who were familiar with the War Department's Training Within Industry (TWI) training programs to help restore the nation. TWI programs included Job Instruction (standard work) and Job Methods (process improvement). In conjunction with the Shewhart cycle taught by W. Edwards Deming, and other statistics-based methods taught by Joseph M. Juran, these became the basis of the kaizen revolution in Japan that took place in the 1950s.

c. Implementation

The Toyota Production System is known for kaizen, where all line personnel are expected to stop their moving production line in case of any abnormality and, along with their supervisor, suggest an improvement to resolve the abnormality, which may initiate a kaizen.

The PDCA cycle

The cycle of kaizen activity can be defined as:

standardize an operation
measure the standardized operation (find cycle time and amount of in-process inventory)
gauge measurements against requirements
innovate to meet requirements and increase productivity
standardize the new, improved operations
continue the cycle

This is also known as the Shewhart cycle, Deming cycle, or PDCA. Masaaki Imai made the term famous in his book Kaizen: The Key to Japan's Competitive Success. Apart from business applications of the method, both Anthony Robbins and Robert Maurer have popularized kaizen principles as personal development principles. The basis of Robbins' CANI (Constant and Never-Ending Improvement) method in kaizen is discussed in his Lessons in Mastery series. In their book The Toyota Way Fieldbook, Jeffrey Liker and David Meier discuss the kaizen blitz and kaizen burst (also called a kaizen event) approaches to continuous improvement. A kaizen blitz, or rapid improvement, is a focused activity on a particular process or activity. The basic concept is to identify and quickly remove waste. Another approach is that of the kaizen burst, a specific kaizen activity on a particular process in the value stream.


3.2 5S

5S (sometimes called 5C) is a reference to a list of five Japanese words which, transliterated and translated into English, start with the letter S and are the name of a methodology. This list is a mnemonic for a methodology that is often incorrectly characterized as "standardized cleanup"; however, it is much more than cleanup. 5S is a philosophy and a way of organizing and managing the workspace and work flow with the intent to improve efficiency by eliminating waste, improving flow and reducing process unevenness.


a. Introduction

5S is a method for organizing a workplace, especially a shared workplace (like a shop floor or an office space), and keeping it organized. It is sometimes referred to as a housekeeping methodology; however, this characterization can be misleading, as workplace organization goes beyond housekeeping (see the discussion of "Seiton" below).

The key targets of 5S are workplace morale, safety and efficiency. The assertion of 5S is that, by assigning everything a location, time is not wasted looking for things. Additionally, it is quickly obvious when something is missing from its designated location. Advocates of 5S believe the benefits of this methodology come from deciding what should be kept, where it should be kept, and how it should be stored. This decision-making process usually comes from a dialog about standardization, which builds a clear understanding between employees of how work should be done. It also instills ownership of the process in each employee.

In addition to the above, another key distinction between 5S and "standardized cleanup" is Seiton. Seiton is often misunderstood, perhaps due to efforts to translate it into an English word beginning with "S" (such as "sort" or "straighten"). The key concept here is to order items or activities in a manner that promotes work flow. For example, tools should be kept at the point of use, workers should not have to repetitively bend to access materials, flow paths can be altered to improve efficiency, and so on.

b. The 5S's

Phase 1 - Seiri, Sorting: Going through all the tools, materials, etc., in the plant and work area and keeping only essential items. Everything else is stored or discarded.


Phase 2 - Seiton, Straighten or Set in Order: Focuses on efficiency. When we translate this to "Straighten or Set in Order", it sounds like more sorting or sweeping, but the intent is to arrange the tools, equipment and parts in a manner that promotes work flow. For example, tools and equipment should be kept where they will be used (i.e. straighten the flow path), and the process should be set in an order that maximizes efficiency. There should be a place for everything, and everything should be in its place (demarcation and labelling of places).

Phase 3 - Seisō, Sweeping or Shining or Cleanliness: Systematic cleaning, or the need to keep the workplace clean as well as neat. At the end of each shift, the work area is cleaned up and everything is restored to its place. This makes it easy to know what goes where and to have confidence that everything is where it should be. The key point is that maintaining cleanliness should be part of the daily work, not an occasional activity initiated when things get too messy.

Phase 4 - Seiketsu, Standardizing: Standardized work practices, or operating in a consistent and standardized fashion. Everyone knows exactly what his or her responsibilities are for keeping the above 3S's.

Phase 5 - Shitsuke, Sustaining the discipline: Refers to maintaining and reviewing standards. Once the previous 4S's have been established, they become the new way to operate. Maintain the focus on this new way of operating, and do not allow a gradual decline back to the old ways of operating. However, when an issue arises, such as a suggested improvement, a new way of working, a new tool or a new output requirement, then a review of the first 4S's is appropriate.

A sixth phase, "Safety", is sometimes added. Purists, however, argue that adding it is unnecessary, since following 5S correctly will result in a safe work environment.

There will have to be continuous education about maintaining standards. When there are changes that will affect the 5S programme, such as new equipment, new products or new work rules, it is essential to make changes in the standards and provide training. A good way to continue educating employees and maintaining standards is to use 5S posters and signs.

c. The 5C

5C is another way of translating the list of five Japanese words into English terms starting with the letter C. The 5C's are:

Phase 1 - Clearout and Classify: Seiri
Phase 2 - Configure: Seiton
Phase 3 - Clean and Check: Seisō
Phase 4 - Conformity: Seiketsu
Phase 5 - Custom and Practice: Shitsuke


3.3 TQM

Total Quality Management (TQM) is a business management strategy aimed at embedding awareness of quality in all organizational processes. TQM has been widely used in manufacturing, education, call centers, government, and service industries, as well as NASA space and science programs. a. Definition When used together as a phrase, the three words in this expression have the following meanings:

Total: Involving the entire organization, supply chain, and/or product life cycle.
Quality: With its usual definitions, with all its complexities.
Management: The system of managing, with steps like Plan, Organize, Control, Lead, Staff, provisioning and organizing.

As defined by the International Organization for Standardization (ISO):

"TQM is a management approach for an organization, centered on quality, based on the participation of all its members and aiming at long-term success through customer satisfaction, and benefits to all members of the organization and to society." (ISO 8402:1994)

One major aim is to reduce variation from every process so that greater consistency of effort is obtained (Royse, D., Thyer, B., Padgett, D., & Logan, T., 2006).

In Japan, TQM comprises four process steps, namely:

1. Kaizen: Focuses on "continuous process improvement", to make processes visible, repeatable and measurable.
2. Atarimae Hinshitsu: The idea that "things will work as they are supposed to" (for example, a pen will write).
3. Kansei: Examining the way the user applies the product leads to improvement in the product itself.
4. Miryokuteki Hinshitsu: The idea that "things should have an aesthetic quality" (for example, a pen will write in a way that is pleasing to the writer).

TQM requires that the company maintain this quality standard in all aspects of its business. This requires ensuring that things are done right the first time and that defects and waste are eliminated from operations.

Total Quality Management continues to evolve in the form of the Criteria for Performance Excellence, which was first published in 1988. The criteria provide the basis for the Baldrige National Quality Program (BNQP), which is administered by the National Institute of Standards and Technology (NIST). Organizations benchmark against the criteria to assess how well their actions are aligned with their strategies. Results are examined to determine the effectiveness of their approaches and the deployment of these strategies. Dr. Juran once stated that the Criteria for Performance Excellence are the embodiment of those philosophies and practices we call TQM.


b. Comprehensive Definition

Total Quality Management is the organization-wide management of quality. Management consists of planning, organizing, directing, control, and assurance. Total quality is called "total" because it consists of two qualities: quality of return to satisfy the needs of the shareholders, and quality of products.

c. Origins

The origin of the expression Total Quality Management is unclear. "Total Quality Control" was the key concept of Armand Feigenbaum's 1951 book, Quality Control: Principles, Practice, and Administration. In a chapter titled "Total Quality Control", Feigenbaum set out an idea that sparked many scholars' interest in the following decades. The expression Total Quality Control existed together with the Japanese expression "Company Wide Quality Control" (CWQC), and the differences between the two expressions were unclear. Major influences on both expressions were W. Edwards Deming, Joseph Juran, Philip B. Crosby, and Kaoru Ishikawa, known as the big four.

The expression Total Quality Management started to appear in the 1980s, and there are two theories of its origin. One theory is that Total Quality Management arose as a mistranslation from Japanese to English, since no distinction exists between the words "control" and "management" in Japanese.[1] According to William Golomski (an American quality scholar and consultant, 1924-2002), TQM was first mentioned by Koji Kobayashi of NEC (Nippon Electrical Company) in his speech when he received the Deming Prize in 1974.

The American Society for Quality says that the term Total Quality Management was used by the U.S. Naval Air Systems Command in 1984 to describe its Japanese-style management approach to quality improvement, since they did not like the word "control" in Total Quality Control. The word "management" is said to have been suggested by one of the employees, Nancy Warren. This is consistent with the story that the United States Navy Personnel Research and Development Center began researching the use of statistical process control (SPC), the work of Juran, Crosby, and Ishikawa, and the philosophy of W. Edwards Deming to make performance improvements in 1984. This approach was first tested at the North Island Naval Aviation Depot.

d. TQM and Contingency-Based Research

Total Quality Management has not been independent of its environment. In the context of management control systems (MCSs), Sim and Killough (1998) show that incentive pay enhanced the positive effects of TQM on customer and quality performance. Ittner and Larcker (1995) demonstrated that product-focused TQM was linked to timely problem-solving information and flexible revisions to reward systems. Chenhall (2003) summarizes the findings from contingency-based research concerning management control systems and TQM by noting that TQM is associated with broadly based MCSs, including timely, flexible, externally focused information; close interactions between advanced technologies and strategy; and non-financial performance measurement.

A discussion of TQM and pay is not complete without considering the work of Dr. W. Edwards Deming. Deming's 14 points include point 11 (eliminate numerical quotas) and point 12 (remove barriers to pride of workmanship). It can be argued that incentive compensation, goals, and quotas are extrinsic motivators that interfere with pride of workmanship and are not consistent with the basic philosophy of TQM. Alfie Kohn's book, Punished by Rewards, discusses the effects of these extrinsic motivators and how they displace intrinsic motivation.


e. Possible Lifecycle Abrahamson (1996) argued that fashionable management discourse such as Quality Circles tends to follow a lifecycle in the form of a bell curve, possibly indicating a management fad.


3.4 7 Quality Control Tools

a. Ishikawa Diagram

Ishikawa diagrams (also called fishbone diagrams or cause-and-effect diagrams) are diagrams that show the causes of a certain event. A common use of the Ishikawa diagram is in product design, to identify potential factors causing an overall effect.

i. Overview

Ishikawa diagrams were proposed in the 1960s by Kaoru Ishikawa,[1] who pioneered quality management processes in the Kawasaki shipyards and in the process became one of the founding fathers of modern management. The Ishikawa diagram is considered one of the seven basic tools of quality management, along with the histogram, Pareto chart, check sheet, control chart, flowchart, and scatter diagram. It is known as a fishbone diagram because of its shape, which is similar to the side view of a fish skeleton.

Mazda Motors famously used an Ishikawa diagram in the development of the Miata sports car, where the required result was "Jinba Ittai", or "Horse and Rider as One". The main causes included such aspects as "touch" and "braking", with the lesser causes including highly granular factors such as "50/50 weight distribution" and "able to rest elbow on top of driver's door". Every factor identified in the diagram was included in the final design.

ii. Causes

Causes in the diagram are often based on a certain set of categories, such as the 6 M's described below. Cause-and-effect diagrams can reveal key relationships among various variables, and the possible causes provide additional insight into process behaviour.

Causes in a typical diagram are normally grouped into categories, the main ones being the 6 M's: Machine, Method, Materials, Measurements, Man and Mother Nature (Environment). (A more modern selection of categories is Equipment, Process, People, Materials, Environment, and Management.)

Causes should be derived from brainstorming sessions. They should then be sorted through affinity grouping to collect similar ideas together. These groups should then be labeled as categories of the fishbone. They will typically be one of the traditional categories mentioned above, but may be something unique to the application of this tool. Causes should be specific, measurable, and controllable.


iii. Appearance

A generic Ishikawa diagram showing general (red) and more refined (blue) causes for an event.

Most Ishikawa diagrams have a box at the right-hand side, where the effect to be examined is written. The main body of the diagram is a horizontal line from which stem the general causes, represented as "bones". These are drawn towards the left-hand side of the paper and are each labeled with the causes to be investigated (often brainstormed beforehand) and based on the major causes listed above. Off each of the large bones there may be smaller bones highlighting more specific aspects of a certain cause, and sometimes there may be a third level of bones or more. These can be found using the '5 Whys' technique. When the most probable causes have been identified, they are written in the box along with the original effect. The more populated bones generally outline more influential factors, with the opposite applying to bones with fewer "branches". Further analysis of the diagram can be achieved with a Pareto chart. The Ishikawa concept can also be documented and analyzed through depiction in a matrix format.
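As a rough illustration of how the categories and causes described above can be captured before drawing the diagram, the Python sketch below groups brainstormed causes under the 6 M categories and prints them as a simple outline; the example effect and causes are invented for illustration, not taken from the module.

```python
# Hypothetical example: organizing brainstormed causes for a fishbone diagram.
# The effect and causes below are invented purely for illustration.

effect = "High defect rate on assembly line 3"

# Main "bones": the 6 M categories named in the text.
causes = {
    "Machine":       ["worn fixture", "drill spindle vibration"],
    "Method":        ["no standard torque sequence"],
    "Materials":     ["supplier lot variation"],
    "Measurements":  ["gauge not calibrated"],
    "Man":           ["new operators not yet trained"],
    "Mother Nature": ["humidity swings in summer"],
}

def print_fishbone(effect, causes):
    """Print a simple text outline of the cause-and-effect structure."""
    print(f"Effect: {effect}")
    for category, items in causes.items():
        print(f"  {category}")
        for cause in items:
            print(f"    - {cause}")

print_fishbone(effect, causes)
```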


b. Pareto Chart

A Pareto chart is a special type of bar chart where the values being plotted are arranged in descending order. The graph is accompanied by a line graph which shows the cumulative totals of each category, left to right. The chart is named after Vilfredo Pareto, and its use in quality assurance was popularized by Joseph M. Juran and Kaoru Ishikawa. The Pareto chart is one of the seven basic tools of quality control, which include the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram. These charts can be generated in Microsoft Office or OpenOffice as well as many free software tools found online. Typically on the left vertical axis is frequency of occurrence, but it can alternatively represent cost or other important unit of measure. The right vertical axis is the cumulative percentage of the total number of occurrences, total cost, or total of the particular unit of measure; because the reasons are in decreasing order, the cumulative function is a concave function. The purpose is to highlight the most important among a (typically large) set of factors. In quality control, the Pareto chart often represents the most common sources of defects, the highest occurring type of defect, or the most frequent reasons for customer complaints, etc.


The Pareto chart was developed to illustrate the 80-20 rule: that 80 percent of the problems stem from 20 percent of the causes.
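To make the construction concrete, here is a minimal Python sketch that sorts defect categories in descending order and computes the cumulative percentage used for the chart's line graph; the defect categories and counts are invented for illustration.

```python
# Hypothetical defect counts by category, invented for illustration.
defects = {
    "scratches": 58,
    "misaligned label": 27,
    "wrong colour": 9,
    "dents": 4,
    "other": 2,
}

# Pareto ordering: sort categories by count, largest first.
ordered = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)

total = sum(defects.values())
cumulative = 0
print(f"{'category':<18}{'count':>7}{'cum %':>8}")
for category, count in ordered:
    cumulative += count
    print(f"{category:<18}{count:>7}{100 * cumulative / total:>7.1f}%")
# The first one or two categories typically account for most of the total,
# which is exactly the "vital few" that the 80-20 rule points to.
```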


c. Check Sheet

The check sheet is a simple document that is used for collecting data in real time and at the location where the data is generated. The document is typically a blank form that is designed for the quick, easy, and efficient recording of the desired information, which can be either quantitative or qualitative. When the information is quantitative, the check sheet is sometimes called a tally sheet. A defining characteristic of a check sheet is that data is recorded by making marks ("checks") on it. A typical check sheet is divided into regions, and marks made in different regions have different significance. Data is read by observing the location and number of marks on the sheet.

The five basic types of check sheets are:

Classification: A trait such as a defect or failure mode must be classified into a category.
Location: The physical location of a trait is indicated on a picture of a part or item being evaluated.
Frequency: The presence or absence of a trait, or a combination of traits, is indicated. The number of occurrences of a trait on a part can also be indicated.
Measurement Scale: A measurement scale is divided into intervals, and measurements are indicated by checking an appropriate interval.
Check List: The items to be performed for a task are listed so that, as each is accomplished, it can be indicated as having been completed.

The check sheet is one of the seven basic tools of quality control, which also include the histogram, Pareto chart, control chart, cause-and-effect diagram, flowchart, and scatter diagram.
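A frequency-type check sheet is essentially a tally. The short Python sketch below records marks against defect categories as items are inspected; the categories and observations are invented for illustration.

```python
from collections import Counter

# Hypothetical observations recorded during a shift, invented for illustration.
observations = [
    "scratch", "dent", "scratch", "misaligned label",
    "scratch", "dent", "scratch", "scratch",
]

# A frequency check sheet is just a tally of marks per category.
tally = Counter(observations)

for category, marks in tally.most_common():
    print(f"{category:<18}{'|' * marks}  ({marks})")
```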

d. Control Chart

The control chart, also known as the Shewhart chart or process-behaviour chart, is a tool used in statistical process control to determine whether a manufacturing or business process is in a state of statistical control.

i. Overview

If the chart indicates that the process is currently under control, then it can be used with confidence to predict the future performance of the process. If the chart indicates that the process being monitored is not in control, the pattern it reveals can help determine the source of variation to be eliminated to bring the process back into control. A control chart is a specific kind of run chart that allows significant change to be differentiated from the natural variability of the process.


This is key to effective process control and improvement. On a practical level the control chart can be seen as part of an objective disciplined approach that facilitates the decision as to whether process performance warrants attention or not. The control chart is one of the seven basic tools of quality control (along with the histogram, Pareto chart, check sheet, cause-and-effect diagram, flowchart, and scatter diagram). ii. History The control chart was invented by Walter A. Shewhart while working for Bell Labs in the 1920s. The company's engineers had been seeking to improve the reliability of their telephony transmission systems. Because amplifiers and other equipment had to be buried underground, there was a business need to reduce the frequency of failures and repairs. By 1920 they had already realized the importance of reducing variation in a manufacturing process. Moreover, they had realized that continual process-adjustment in reaction to non-conformance actually increased variation and degraded quality. Shewhart framed the problem in terms of Common- and special-causes of variation and, on May 16, 1924, wrote an internal memo introducing the control chart as a tool for distinguishing between the two. Dr. Shewhart's boss, George Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it, set forth all of the essential principles and considerations which are involved in what we know today as process quality control." Shewhart stressed that bringing a production process into a state of statistical control, where there is only common-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically.


Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood data from physical processes never produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.[2] In 1924 or 1925, Shewhart's innovation came to the attention of W. Edwards Deming, then working at the Hawthorne facility. Deming later worked at the United States Department of Agriculture and then became the mathematical advisor to the United States Census Bureau. Over the next half a century, Deming became the foremost champion and proponent of Shewhart's work. After the defeat of Japan at the close of World War II, Deming served as statistical consultant to the Supreme Commander of the Allied Powers. His ensuing involvement in Japanese life, and long career as an industrial consultant there, spread Shewhart's thinking, and the use of the control chart, widely in Japanese manufacturing industry throughout the 1950s and 1960s. More recent use and development of control charts in the Shewhart-Deming tradition has been championed by Donald J. Wheeler. iii. Chart Details A control chart consists of the following:

Points representing measurements of a quality characteristic in samples taken from the process at different times (the data)
A centre line, drawn at the process characteristic mean, which is calculated from the data
Upper and lower control limits (sometimes called "natural process limits") that indicate the threshold at which the process output is considered statistically 'unlikely'

The chart may contain other optional features, including:

Upper and lower warning limits, drawn as separate lines, typically two standard deviations above and below the centre line
Division into zones, with the addition of rules governing frequencies of observations in each zone
Annotation with events of interest, as determined by the Quality Engineer in charge of the process's quality

However, in the early stages of use, the inclusion of these items may confuse inexperienced chart interpreters.
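As a minimal sketch of how these chart elements can be computed for an individuals chart, the Python example below derives the centre line from the sample mean and estimates the process standard deviation from the average moving range (using the d2 constant of 1.128 for ranges of two consecutive points, the range-based approach discussed in section vi below); the measurement data are invented for illustration.

```python
# Hypothetical measurements of a quality characteristic, invented for illustration.
data = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.2, 10.1]

# Centre line: the process characteristic mean, calculated from the data.
centre = sum(data) / len(data)

# Estimate the common-cause standard deviation from the average moving range
# (d2 = 1.128 for moving ranges of two consecutive points), rather than from
# the overall sample standard deviation, so special causes inflate it less.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128

# Shewhart 3-sigma control limits.
ucl = centre + 3 * sigma_hat
lcl = centre - 3 * sigma_hat

print(f"centre line = {centre:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
for i, x in enumerate(data):
    flag = "out of control" if (x > ucl or x < lcl) else "in control"
    print(f"point {i}: {x:.2f} ({flag})")
```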

iv. Chart usage If the process is in control, all points will plot within the control limits. Any observations outside the limits, or systematic patterns within, suggest the introduction of a new (and likely unanticipated) source of variation, known as a special-cause variation. Since increased variation means increased quality costs,

51

a control chart "signaling" the presence of a special-cause requires immediate investigation. This makes the control limits very important decision aids. The control limits tell you about process behaviour and have no intrinsic relationship to any specification targets or engineering tolerance. In practice, the process mean (and hence the centre line) may not coincide with the specified value (or target) of the quality characteristic because the process' design simply can't deliver the process characteristic at the desired level. Control charts omit specification limits or targets because of the tendency of those involved with the process (e.g., machine operators) to focus on performing to specification when in fact the least-cost course of action is to keep process variation as low as possible. Attempting to make a process whose natural centre is not the same as the target perform to target specification increases process variability and increases costs significantly and is the cause of much inefficiency in operations. Process capability studies do examine the relationship between the natural process limits (the control limits) and specifications, however. The purpose of control charts is to allow simple detection of events that are indicative of actual process change. This simple decision can be difficult where the process characteristic is continuously varying; the control chart provides statistically objective criteria of change. When change is detected and considered good its cause should be identified and possibly become the new way of working, where the change is bad then its cause should be identified and eliminated. The purpose in adding warning limits or subdividing the control chart into zones is to provide early notification if something is amiss. Instead of immediately launching a process improvement effort to determine whether special causes are present, the Quality Engineer may temporarily increase the rate at which samples are taken from the process output until it's clear that the process is truly in control. Note that with three sigma limits, one expects to be signaled


approximately once out of every 370 points on average, just due to common causes.

v. Choice of limits
Shewhart set 3-sigma limits on the following basis:

The coarse result of Chebyshev's inequality that, for any probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 1/k².

The finer result of the Vysochanskii-Petunin inequality that, for any unimodal probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 4/(9k²).

The empirical investigation of sundry probability distributions, which reveals that at least 99% of observations occur within three standard deviations of the mean.

Shewhart summarised the conclusions by saying: ... the fact that the criterion which we happen to use has a fine ancestry in highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works. As the practical engineer might say, the proof of the pudding is in the eating. Though he initially experimented with limits based on probability distributions, Shewhart ultimately wrote: Some of the earliest attempts to characterise a state of statistical control were inspired by the belief that there existed a special form of frequency function f and it was early argued that the normal law characterised such a state. When the normal law was found to be inadequate, then generalised functional forms were tried. Today, however, all hopes of finding a unique functional form f are blasted.


The control chart is intended as a heuristic. Deming insisted that it is not a hypothesis test and is not motivated by the Neyman-Pearson lemma. He contended that the disjoint nature of population and sampling frame in most industrial situations compromised the use of conventional statistical techniques. Deming's intention was to seek insights into the cause system of a process "...under a wide range of unknowable circumstances, future and past...". He claimed that, under such conditions, 3-sigma limits provided "...a rational and economic guide to minimum economic loss..." from the two errors:
1. Ascribing a variation or a mistake to a special cause when in fact the cause belongs to the system (common cause). (Also known as a Type I error.)
2. Ascribing a variation or a mistake to the system (common causes) when in fact the cause was special. (Also known as a Type II error.)

vi. Calculation of Standard Deviation
As for the calculation of control limits, the standard deviation required is that of the common-cause variation in the process. Hence, the usual estimator, in terms of sample variance, is not used, as this estimates the total squared-error loss from both common and special causes of variation. An alternative method is to use the relationship between the range of a sample and its standard deviation derived by Leonard H. C. Tippett, an estimator which tends to be less influenced by the extreme observations which typify special causes.
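To make the range-based estimate concrete, the following minimal Python sketch uses made-up subgroup data and the standard d2 bias-correction constants to estimate the common-cause standard deviation from the mean subgroup range and to derive 3-sigma limits for a chart of subgroup means. The data and variable names are purely illustrative and not part of any standard.

import statistics

# Hypothetical subgroup data: 8 subgroups of size n = 4 (illustration only).
subgroups = [
    [10.2, 9.9, 10.1, 10.0],
    [10.3, 10.1, 9.8, 10.0],
    [9.7, 10.0, 10.2, 9.9],
    [10.1, 10.4, 10.0, 9.8],
    [10.0, 9.9, 10.1, 10.2],
    [9.8, 10.0, 10.3, 10.1],
    [10.2, 10.0, 9.9, 10.1],
    [10.0, 10.1, 9.7, 10.0],
]

# Standard d2 bias-correction factors for subgroup sizes 2..6.
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326, 6: 2.534}

n = len(subgroups[0])
xbar = [statistics.mean(s) for s in subgroups]   # subgroup means
rng = [max(s) - min(s) for s in subgroups]       # subgroup ranges

grand_mean = statistics.mean(xbar)               # centre line of the chart
sigma_within = statistics.mean(rng) / D2[n]      # R-bar / d2 estimate of common-cause sigma

# 3-sigma limits for the chart of subgroup means (sigma of the mean = sigma / sqrt(n)).
ucl = grand_mean + 3 * sigma_within / n ** 0.5
lcl = grand_mean - 3 * sigma_within / n ** 0.5
print(f"centre line = {grand_mean:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")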

vii. Rules for detecting signals
The most common sets are:

The Western Electric rules
The Wheeler rules (equivalent to the Western Electric zone tests[3])
The Nelson rules
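As an illustration, here is a hedged Python sketch of two representative detection rules in the Western Electric style: a point beyond the 3-sigma limits, and a run of consecutive points on one side of the centre line. The exact rule sets differ between authors, so treat this only as a sketch of the mechanics, not a definitive implementation of any published rule set.

def rule_beyond_3_sigma(values, centre, sigma):
    """Rule 1 style check: any single point beyond 3 sigma of the centre line."""
    return [i for i, v in enumerate(values) if abs(v - centre) > 3 * sigma]

def rule_run_on_one_side(values, centre, run_length=8):
    """Run rule: `run_length` consecutive points on the same side of the centre line
    (writers variously recommend 7, 8 or 9; 8 is assumed here)."""
    signals, run, side = [], 0, 0
    for i, v in enumerate(values):
        s = 1 if v > centre else -1 if v < centre else 0
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            signals.append(i)
    return signals

# Example: print(rule_run_on_one_side([10.1]*9, centre=10.0))  -> signals at index 7 and 8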


There has been particular controversy as to how long a run of observations, all on the same side of the centre line, should count as a signal, with 7, 8 and 9 all being advocated by various writers. The most important principle for choosing a set of rules is that the choice be made before the data is inspected. Choosing rules once the data have been seen tends to increase the Type I error rate owing to testing effects suggested by the data.

viii. Alternative Bases
In 1935, the British Standards Institution, under the influence of Egon Pearson and against Shewhart's spirit, adopted control charts, replacing 3-sigma limits with limits based on percentiles of the normal distribution. This approach continues to be advocated by John Oakland and others but has been widely deprecated by writers in the Shewhart-Deming tradition.

ix. Performance of Control Charts
When a point falls outside of the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred. If one has, then that cause should be eliminated if possible. It is known that even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits. Since the control limits are evaluated each time a point is added to the chart, it readily follows that every control chart will eventually signal the possible presence of a special cause, even though one may not have actually occurred. For a Shewhart control chart using 3-sigma limits, this false alarm occurs on average once every 1/0.0027 or 370.4 observations. Therefore, the in-control average run length (or in-control ARL) of a Shewhart chart is 370.4.
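The 370.4 figure follows directly from the two-sided 3-sigma tail probability of a normal distribution. A short Python check, assuming normally distributed common-cause variation, is sketched below.

import math

def normal_cdf(z):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability that a single in-control point falls outside the +/- 3 sigma limits.
p_false_alarm = 2.0 * (1.0 - normal_cdf(3.0))   # roughly 0.0027
arl_in_control = 1.0 / p_false_alarm            # roughly 370 points between false alarms

print(f"false-alarm probability = {p_false_alarm:.4f}, in-control ARL = {arl_in_control:.1f}")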


Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart. It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases. However, for smaller changes (such as a 1- or 2-sigma change in the mean), the Shewhart chart does not detect these changes efficiently. Other types of control charts have been developed, such as the EWMA chart and the CUSUM chart, which detect smaller changes more efficiently by making use of information from observations collected prior to the most recent data point.

x. Criticisms
Several authors have criticised the control chart on the grounds that it violates the likelihood principle. However, the principle is itself controversial, and supporters of control charts further argue that, in general, it is impossible to specify a likelihood function for a process not in statistical control, especially where knowledge about the cause system of the process is weak. Some authors have criticised the use of average run lengths (ARLs) for comparing control chart performance, because that average usually follows a geometric distribution, which has high variability and presents difficulties in interpretation.


xi. Types of Charts

Chart | Process observation | Observation relationships | Observation type | Size of shift to detect
XbarR chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
XbarS chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
Shewhart individuals control chart (ImR chart or XmR chart) | Quality characteristic measurement for one observation | Independent | Variables | Large (≥ 1.5σ)
Three-way chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
p-chart | Fraction nonconforming within one subgroup | Independent | Attributes | Large (≥ 1.5σ)
np-chart | Number nonconforming within one subgroup | Independent | Attributes | Large (≥ 1.5σ)
c-chart | Number of nonconformances within one subgroup | Independent | Attributes | Large (≥ 1.5σ)
u-chart | Nonconformances per unit within one subgroup | Independent | Attributes | Large (≥ 1.5σ)
EWMA chart | Exponentially weighted moving average of quality characteristic measurement within one subgroup | Independent | Attributes or variables | Small (< 1.5σ)
CUSUM chart | Cumulative sum of quality characteristic measurement within one subgroup | Independent | Attributes or variables | Small (< 1.5σ)
Time series model | Quality characteristic measurement within one subgroup | Autocorrelated | Attributes or variables | N/A
Regression control chart | Quality characteristic measurement within one subgroup | Dependent on process control variables | Variables | Large (≥ 1.5σ)

e. Flowchart

A flowchart is a common type of chart that represents an algorithm or process, showing the steps as boxes of various kinds and their order by connecting them with arrows. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.

i. History
The first structured method for documenting process flow, the "flow process chart", was introduced by Frank Gilbreth to members of ASME in 1921 as the presentation "Process Charts: First Steps in Finding the One Best Way". Gilbreth's tools quickly found their way into industrial engineering curricula. In the early 1930s, an industrial engineer, Allan H. Mogensen, began training business people in the use of some of the tools of industrial engineering at his Work Simplification Conferences in Lake Placid, New York.


A 1944 graduate of Mogensen's class, Art Spinanger, took the tools back to Procter and Gamble, where he developed their Deliberate Methods Change Program. Another 1944 graduate, Ben S. Graham, Director of Formcraft Engineering at Standard Register Corporation, adapted the flow process chart to information processing with his development of the multi-flow process chart, which displays multiple documents and their relationships. In 1947, ASME adopted a symbol set derived from Gilbreth's original work as the ASME Standard for Process Charts. According to Herman Goldstine, he developed flowcharts with John von Neumann at Princeton University in late 1946 and early 1947. Flowcharts used to be a popular means for describing computer algorithms. They are still used for this purpose; modern techniques such as UML activity diagrams can be considered to be extensions of the flowchart. However, their popularity decreased when, in the 1970s, interactive computer terminals and third-generation programming languages became the common tools of the trade, since algorithms can be expressed much more concisely and readably as source code in such a language. Often, pseudo-code is used, which uses the common idioms of such languages without strictly adhering to the details of a particular one.

ii. Symbols
A typical flowchart from older Computer Science textbooks may have the following kinds of symbols:

Start and end symbols: represented as lozenges, ovals or rounded rectangles, usually containing the word "Start" or "End", or another phrase signaling the start or end of a process, such as "submit enquiry" or "receive product".

Arrows: showing what is called "flow of control" in computer science. An arrow coming from one symbol and ending at another symbol represents that control passes to the symbol the arrow points to.

Processing steps: represented as rectangles. Examples: "Add 1 to X"; "replace identified part"; "save changes".

Input/Output: represented as a parallelogram. Examples: get X from the user; display X.

Conditional or decision: represented as a diamond (rhombus). These typically contain a Yes/No question or True/False test. This symbol is unique in that it has two arrows coming out of it, usually from the bottom point and right point, one corresponding to Yes or True, and one corresponding to No or False. The arrows should always be labeled. More than two arrows can be used, but this is normally a clear indicator that a complex decision is being taken, in which case it may need to be broken down further, or replaced with the "pre-defined process" symbol.

A number of other symbols have less universal currency, such as:

A Document, represented as a rectangle with a wavy base;


A Manual input, represented by a parallelogram with the top irregularly sloping up from left to right. An example would be to signify data-entry from a form;

A Manual operation, represented by a trapezoid with the longest parallel side at the top, to represent an operation or adjustment to a process that can only be made manually;

A Data File, represented by a cylinder.

Flowcharts may contain other symbols, such as connectors, usually represented as circles, to represent converging paths in the flow chart. Circles will have more than one arrow coming into them but only one going out. Some flow charts may just have an arrow point to another arrow instead. These are useful to represent an iterative process (what in Computer Science is called a loop). A loop may, for example, consist of a connector where control first enters, processing steps, a conditional with one arrow exiting the loop, and one going back to the connector. Off-page connectors are often used to signify a connection to a (part of another) process held on another sheet or screen. It is important to remember to keep these connections logical in order. All processes should flow from top to bottom and left to right.
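The loop just described (a connector where control enters, processing steps, and a decision with one arrow exiting the loop and one returning to the connector) maps directly onto code. A minimal Python sketch of that structure, summing the numbers 1 to N, with each flowchart element noted in a comment; the function name and example value are illustrative only.

def sum_to(n):
    total = 0                # processing step before the loop: total <- 0
    counter = 1              # processing step: counter <- 1
    while counter <= n:      # decision diamond: one arrow exits the loop, one loops back
        total += counter     # processing step inside the loop
        counter += 1         # processing step; flow then returns to the connector
    return total             # end symbol: output the result

print(sum_to(10))  # 55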

iii. Examples


A simple flowchart for computing factorial N (N!), where N! = 1 * 2 * 3 * ... * N. This flowchart represents a "loop and a half", a situation discussed in introductory programming textbooks that requires either a duplication of a component (to be both inside and outside the loop) or the component to be put inside a branch in the loop.

iv. Types of flow charts
There are many different types of flowcharts. On the one hand there are different types for different users, such as analysts, designers, engineers, managers, or programmers.[3] On the other hand, those flowcharts can represent different types of objects. Sterneckert (2003) distinguishes four more general types of flowcharts:[3]

Document flowcharts, showing a document flow through a system
Data flowcharts, showing data flows in a system
System flowcharts, showing controls at a physical or resource level
Program flowcharts, showing the controls in a program within a system

However there are several of these classifications. For example Andrew Veronis (1978) named three basic types of flowcharts: the system flowchart, the general


flowchart, and the detailed flowchart.[4] That same year Marilyn Bohl (1978) stated "in practice, two kinds of flowcharts are used in solution planning: system flowcharts and program flowcharts...".[5] More recently Mark A. Fryman (2001) stated that there are more differences: decision flowcharts, logic flowcharts, systems flowcharts, product flowcharts, and process flowcharts are "just a few of the different types of flowcharts that are used in business and government."[6]

v. Manual
Any vector-based drawing program can be used to create flowchart diagrams, but these will have no underlying data model to share data with databases or other programs such as project management systems or spreadsheets. Some tools offer special support for flowchart drawing, e.g., ConceptDraw, Dia, SmartDraw, Visio, and OmniGraffle. With the advent of web technology, online flowchart solutions are becoming quite popular. DrawAnywhere is one example: it is completely web-based and requires no download. It offers ease of use and flexibility similar to packaged software, but does not yet match the power of packages such as Visio or SmartDraw; such online solutions are nonetheless well suited to academic or personal use.

vi. Automatic
Many software packages exist that can create flowcharts automatically, either directly from source code, or from a flowchart description language. For example, Graph::Easy, a Perl package, takes a textual description of the graph and uses the description to generate various output formats including HTML, ASCII or SVG.

f. Histogram


An example histogram of the heights of 31 Black Cherry trees.

In statistics, a histogram is a graphical display of tabulated frequencies, shown as bars. It shows what proportion of cases fall into each of several categories: it is a form of data binning. The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent. The intervals (or bands, or bins) are generally of the same size, and are most easily interpreted if they are. Histograms are used to plot the density of data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram always equals 1. If the lengths of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot. An alternative to the histogram is kernel density estimation, which uses a kernel to smooth samples. This will construct a smooth probability density function, which will in general more accurately reflect the underlying variable. The histogram is one of the seven basic tools of quality control, which also include the Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram.


i. Etymology
The word histogram is derived from the Greek histos, 'anything set upright' (as the masts of a ship, the bar of a loom, or the vertical bars of a histogram), and gramma, 'drawing, record, writing'.

ii. Examples
As an example we consider data collected by the U.S. Census Bureau on time to travel to work (2000 census, [1], Table 2). The census found that there were 124 million people who work outside of their homes. Respondents tended to round their reported travel times to convenient values such as multiples of five minutes; this rounding is a common phenomenon when collecting data from people.

Histogram of travel time, US 2000 census. Area under the curve equals the total number of cases. This diagram uses Q/width from the table.

Data by absolute numbers
Interval | Width | Quantity | Quantity/width
0 | 5 | 4180 | 836
5 | 5 | 13687 | 2737
10 | 5 | 18618 | 3723
15 | 5 | 19634 | 3926
20 | 5 | 17981 | 3596
25 | 5 | 7190 | 1438
30 | 5 | 16369 | 3273
35 | 5 | 3212 | 642
40 | 5 | 4122 | 824
45 | 15 | 9200 | 613
60 | 30 | 6461 | 215
90 | 60 | 3435 | 57

This histogram shows the number of cases per unit interval so that the height of each bar is equal to the proportion of total people in the survey who fall into that category. The area under the curve represents the total number of cases (124 million).

This type of histogram shows absolute numbers.


Histogram of travel time, US 2000 census. Area under the curve equals 1. This diagram uses Q/total/width from the table.

Data by proportion
Interval | Width | Quantity (Q) | Q/total/width
0 | 5 | 4180 | 0.0067
5 | 5 | 13687 | 0.0221
10 | 5 | 18618 | 0.0300
15 | 5 | 19634 | 0.0316
20 | 5 | 17981 | 0.0290
25 | 5 | 7190 | 0.0116
30 | 5 | 16369 | 0.0264
35 | 5 | 3212 | 0.0052
40 | 5 | 4122 | 0.0066
45 | 15 | 9200 | 0.0049
60 | 30 | 6461 | 0.0017
90 | 60 | 3435 | 0.0005

This histogram differs from the first only in the vertical scale. The height of each bar is the decimal percentage of the total that each category represents, and the total area of all the bars is equal to 1, the decimal equivalent of 100%. The curve displayed is a simple density estimate. This version shows proportions, and is also known as a unit area histogram.
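The two vertical scales used in the tables can be reproduced directly from the interval, width and quantity columns. A small Python sketch (quantities taken from the census table above; units are as given there) computes both Q/width and Q/total/width:

# Census travel-time table: interval start (minutes), interval width, quantity.
rows = [
    (0, 5, 4180), (5, 5, 13687), (10, 5, 18618), (15, 5, 19634),
    (20, 5, 17981), (25, 5, 7190), (30, 5, 16369), (35, 5, 3212),
    (40, 5, 4122), (45, 15, 9200), (60, 30, 6461), (90, 60, 3435),
]

total = sum(q for _, _, q in rows)
for start, width, q in rows:
    per_unit = q / width            # bar height for the "absolute numbers" histogram
    density = q / total / width     # bar height for the "unit area" histogram
    print(f"{start:>3}: Q/width = {per_unit:7.1f}   Q/total/width = {density:.4f}")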


In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies. The bars are placed adjacent to one another to make it easier to compare the data.

iii. Activities and Demonstrations
The SOCR resource pages contain a number of hands-on interactive activities demonstrating the concept of a histogram, histogram construction and manipulation using Java applets and charts.

iv. Mathematical Definition

An ordinary and a cumulative histogram of the same data. The data shown is a random sample of 10,000 points from a normal distribution with a mean of 0 and a standard deviation of 1.

In a more general mathematical sense, a histogram is a mapping m_i that counts the number of observations that fall into various disjoint categories (known as bins), whereas the graph of a histogram is merely one way to represent a histogram. Thus, if we let n be the total number of observations and k be the total number of bins, the histogram m_i meets the following condition:

n = m_1 + m_2 + ... + m_k

v. Cumulative histogram
A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram M_i of a histogram m_i is defined as:

M_i = m_1 + m_2 + ... + m_i

vi. Number of Bins and Width
There is no "best" number of bins, and different bin sizes can reveal different features of the data. Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. You should always experiment with bin widths before choosing one (or more) that illustrate the salient features in your data. The number of bins k can be calculated directly, or from a suggested bin width h as k = ⌈(max x - min x) / h⌉, where the braces indicate the ceiling function. Commonly cited choices include:

Sturges' formula, k = ⌈log2 n⌉ + 1, which implicitly bases the bin sizes on the range of the data and can perform poorly if n < 30.

Scott's choice, h = 3.5 s / n^(1/3), where s is the sample standard deviation.

Freedman-Diaconis' choice, h = 2 IQR(x) / n^(1/3), which is based on the interquartile range.
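A minimal Python sketch of the three rules just listed, applied to an illustrative random sample; the simple quantile calculation used for the IQR is an approximation and not part of any of the rules themselves.

import math
import random

random.seed(1)
data = [random.gauss(0, 1) for _ in range(200)]   # illustrative sample only
n = len(data)

# Sturges' formula: number of bins from the sample size alone.
k_sturges = math.ceil(math.log2(n)) + 1

# Scott's choice: bin width from the sample standard deviation.
mean = sum(data) / n
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
h_scott = 3.5 * s / n ** (1 / 3)

# Freedman-Diaconis: bin width from the interquartile range (crude quantile estimate).
xs = sorted(data)
iqr = xs[int(0.75 * n)] - xs[int(0.25 * n)]
h_fd = 2 * iqr / n ** (1 / 3)

def bins_from_width(h):
    """Convert a suggested bin width h into a bin count over the data range."""
    return math.ceil((max(data) - min(data)) / h)

print(k_sturges, bins_from_width(h_scott), bins_from_width(h_fd))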


vii. Continuous Data
The idea of a histogram can be generalized to continuous data. Let f be an integrable function (see Lebesgue space); then the cumulative histogram operator H can be defined by taking H(f)(y) to be the measure of the set {x : f(x) <= y}. When f has only finitely many intervals of monotony, the corresponding density h(f)(y) can be rewritten as a sum, over the points x_i at which f(x_i) = y, of the terms 1/|f'(x_i)|. h(f)(y) is undefined if y is the value of a stationary point.

g. Scatter Plot


Waiting time between eruptions and the duration of the eruption for the Old Faithful geyser in Yellowstone National Park, Wyoming, USA. This chart suggests there are generally two "types" of eruptions: short-wait-short-duration, and long-wait-long-duration.

A 3D scatter plot allows for the visualization of multivariate data of up to four dimensions. The Scatter plot takes multiple scalar variables and uses them for different axes in phase space. The different variables are combined to form coordinates in the phase space and they are displayed using glyphs and colored using another scalar variable. A scatter plot is a type of display using Cartesian coordinates to display values for two variables for a set of data. The data is displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis.[2] A scatter plot is also called a scatter chart, scatter diagram and scatter graph. i. Overview


A scatter plot specifies dependent and independent variables only when a variable exists that is under the control of the experimenter. If a parameter exists that is systematically incremented and/or decremented by the experimenter, it is called the control parameter or independent variable and is customarily plotted along the horizontal axis. The measured or dependent variable is customarily plotted along the vertical axis. If no dependent variable exists, either type of variable can be plotted on either axis, and a scatter plot will illustrate only the degree of correlation (not causation) between two variables.

A scatter plot can suggest various kinds of correlations between variables with a certain confidence level. Correlations may be positive (rising), negative (falling), or null (uncorrelated). If the pattern of dots slopes from lower left to upper right, it suggests a positive correlation between the variables being studied. If the pattern of dots slopes from upper left to lower right, it suggests a negative correlation. A line of best fit (alternatively called a 'trendline') can be drawn in order to study the correlation between the variables. An equation for the correlation between the variables can be determined by established best-fit procedures. For a linear correlation, the best-fit procedure is known as linear regression and is guaranteed to generate a correct solution in a finite time. Unfortunately, no universal best-fit procedure is guaranteed to generate a correct solution for arbitrary relationships. One of the most powerful aspects of a scatter plot, however, is its ability to show nonlinear relationships between variables. Furthermore, if the data is represented by a mixture model of simple relationships, these relationships will be visually evident as superimposed patterns.

The scatter diagram is one of the basic tools of quality control, which include the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram and flowchart.

ii. Example


For example, to study "lung capacity" (first variable) against how long a person can hold his breath (second variable), a researcher would choose a group of people to study, then measure each one's lung capacity and how long that person could hold his breath. The researcher would then plot the data in a scatter plot, assigning "lung capacity" to the horizontal axis and "time holding breath" to the vertical axis. A person with a lung capacity of 400 cc who held his breath for 21.7 seconds would be represented by a single dot on the scatter plot at the point (400, 21.7) in Cartesian coordinates. The scatter plot of all the people in the study would enable the researcher to obtain a visual comparison of the two variables in the data set, and will help to determine what kind of relationship there might be between the two variables.
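A short Python sketch of this breath-holding example, using made-up measurements, computes the Pearson correlation coefficient and the least-squares line of best fit described above; the numbers are purely illustrative.

# Hypothetical measurements: lung capacity (cc) and breath-hold time (seconds).
capacity  = [320, 350, 400, 410, 450, 480, 500, 520]
hold_time = [14.5, 16.0, 21.7, 20.9, 24.0, 25.5, 27.1, 28.4]

n = len(capacity)
mx = sum(capacity) / n
my = sum(hold_time) / n
sxx = sum((x - mx) ** 2 for x in capacity)
syy = sum((y - my) ** 2 for y in hold_time)
sxy = sum((x - mx) * (y - my) for x, y in zip(capacity, hold_time))

r = sxy / (sxx * syy) ** 0.5        # Pearson correlation: +1 rising, -1 falling, 0 uncorrelated
slope = sxy / sxx                   # least-squares slope of the trendline
intercept = my - slope * mx         # least-squares intercept

print(f"r = {r:.3f}; best fit: time = {intercept:.2f} + {slope:.4f} * capacity")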

3.5 7 New Quality Control Tools


The Seven Management and Planning Tools have their roots in Operations Research work done after World War II and the Japanese Total Quality Control (TQC) research. In 1979 the book Seven New Quality Tools for Managers and Staff was published, and in 1983 it was translated into English. The seven tools include:
1. Affinity Diagram (KJ Method)
2. Interrelationship Diagraph (ID)
3. Tree Diagram
4. Prioritization Matrix
5. Matrix Diagram
6. Process Decision Program Chart (PDPC)
7. Activity Network Diagram

The seven tools


a. Affinity Diagram

This tool takes large amounts of disorganized data and information and enables one to organize it into groupings based on natural relationships. It was created in the 1960s by Japanese anthropologist Jiro Kawakita.

b. Interrelationship Diagraph

This tool displays all the interrelated cause-and-effect relationships and factors involved in a complex problem and describes desired outcomes. The process of creating an interrelationship diagraph helps a group analyze the natural links between different aspects of a complex situation.

c. Tree Diagram


This tool is used to break down broad categories into finer and finer levels of detail. It can map levels of details of tasks that are required to accomplish a goal or task. It can be used to break down broad general subjects into finer and finer levels of detail. Developing the tree diagram helps one move their thinking from generalities to specifics. d. Prioritization Matrix

This tool is used to prioritize items and describe them in terms of weighted criteria. It uses a combination of tree and matrix diagramming techniques to do a pair-wise evaluation of items and to narrow down options to the most desired or most effective.

e. Matrix Diagram


This tool shows the relationship between items. At each intersection a relationship is either absent or present. It then gives information about the relationship, such as its strength, or the roles played by various individuals or measurements. Seven differently shaped matrices are possible: L, T, Y, X, C, R and roof-shaped, depending on how many groups must be compared.

f. Process Decision Program Chart (PDPC)

A useful way of planning is to break down tasks into a hierarchy, using a Tree Diagram. The PDPC extends the tree diagram a couple of levels to identify risks and countermeasures for the bottom level tasks. Different shaped boxes are used to highlight risks and identify possible countermeasures (often shown as 'clouds' to indicate their uncertain nature). The PDPC is similar to the Failure Modes and Effects Analysis (FMEA) in that both identify risks, consequences of failure, and contingency actions; the FMEA also rates relative risk levels for each potential failure point.

g. Activity Network Diagram

This tool is used to plan the appropriate sequence or schedule for a set of tasks and related subtasks. It is used when subtasks must occur in parallel. The diagram enables one to determine the critical path (longest sequence of tasks). (See also PERT diagram.)
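The critical path mentioned above is simply the longest chain of dependent tasks. A minimal Python sketch with a hypothetical task network (the task names, durations and prerequisites are invented for illustration) computes the project length and recovers one critical path:

# Hypothetical activity network: task -> (duration, list of prerequisite tasks).
tasks = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (2, ["B", "C"]),
    "E": (1, ["D"]),
}

# Earliest finish time of each task = its duration plus the latest earliest-finish
# of its prerequisites; the critical path is the chain that determines the maximum.
finish = {}
def earliest_finish(t):
    if t not in finish:
        dur, preds = tasks[t]
        finish[t] = dur + max((earliest_finish(p) for p in preds), default=0)
    return finish[t]

project_length = max(earliest_finish(t) for t in tasks)

# Recover one critical path by walking back through the binding predecessors.
end = max(tasks, key=earliest_finish)
path = [end]
while tasks[path[-1]][1]:
    preds = tasks[path[-1]][1]
    path.append(max(preds, key=earliest_finish))
path.reverse()

print(project_length, " -> ".join(path))   # e.g. 10  A -> C -> D -> E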


CHAPTER 4

Quality System

4.1 ISO 9000

ISO 9000 is a family of standards for quality management systems. ISO 9000 is maintained by ISO, the International Organization for Standardization and is

administered by accreditation and certification bodies. Some of the requirements in ISO 9001 (which is one of the standards in the ISO 9000 family) include

a set of procedures that cover all key processes in the business;
monitoring processes to ensure they are effective;
keeping adequate records;
checking output for defects, with appropriate corrective action where necessary;
regularly reviewing individual processes and the quality system itself for effectiveness; and
facilitating continual improvement.

A company or organization that has been independently audited and certified to be in conformance with ISO 9001 may publicly state that it is "ISO 9001 certified" or "ISO 9001 registered". Certification to an ISO 9001 standard does not guarantee any quality of end products and services; rather, it certifies that formalized business processes are being applied. Although the standards originated in manufacturing, they are now employed across several types of organizations. A "product", in ISO vocabulary, can mean a physical object, services, or software. i. ISO 9000 Series of Standards ISO 9000 includes standards:

ISO 9000:2005, Quality management systems - Fundamentals and vocabulary, covers the basics of what quality management systems are and also contains the core language of the ISO 9000 series of standards. It is a guidance document, not used for certification purposes, but an important reference document for understanding terms and vocabulary related to quality management systems.


ISO 9001:2008, Quality management systems - Requirements, is intended for use in any organization regardless of size, type or product (including service). It provides a number of requirements which an organization needs to fulfil if it is to achieve customer satisfaction through consistent products and services which meet customer expectations. It includes a requirement for the continual (i.e. planned) improvement of the Quality Management System, for which ISO 9004:2000 provides many hints.

This is the only implementation for which third-party auditors may grant certification. It should be noted that certification is not described as any of the 'needs' of an organization as a driver for using ISO 9001 (see ISO 9001:2000 section 1 'Scope') but does recognize that it may be used for such a purpose (see ISO 9001:2000 section 0.1 'Introduction').

ISO 9004:2000, Quality management systems - Guidelines for performance improvements, covers continual improvement. This gives you advice on what you could do to enhance a mature system. This standard very specifically states that it is not intended as a guide to implementation.

There are many more standards in the ISO 9000 family (see "List of ISO 9000 standards" from ISO), many of them not even carrying "ISO 900x" numbers. For example, some standards in the 10,000 range are considered part of the 9000 group: ISO 10007:1995 discusses Configuration management, which for most organizations is just one element of a complete management system. ISO notes: "The emphasis on certification tends to overshadow the fact that there is an entire family of ISO 9000 standards ... Organizations stand to obtain the greatest value when the standards in the new core series are used in an integrated manner, both with each other and with the other standards making up the ISO 9000 family as a whole".


Note that the previous members of the ISO 9000 series, 9001, 9002 and 9003, have all been integrated into 9001. In most cases, an organization claiming to be "ISO 9000 registered" is referring to ISO 9001.

ii. Contents of ISO 9001
ISO 9001:2008, Quality management systems - Requirements, is a document of approximately 30 pages which is available from the national standards organization in each country. Outline contents are as follows:

Page iv: Foreword
Pages v to vii: Section 0 Introduction
Pages 1 to 14: Requirements
  Section 1: Scope
  Section 2: Normative Reference
  Section 3: Terms and definitions (specific to ISO 9001, not specified in ISO 9000)
Pages 2 to 14
  Section 4: Quality Management System
  Section 5: Management Responsibility
  Section 6: Resource Management
  Section 7: Product Realization
  Section 8: Measurement, analysis and improvement

In effect, users need to address all sections 1 to 8, but only sections 4 to 8 need implementing within a QMS.

Pages 15 to 22: Tables of Correspondence between ISO 9001 and other standards
Page 23: Bibliography


The standard specifies six compulsory documents:

Control of Documents (4.2.3)
Control of Records (4.2.4)
Internal Audits (8.2.2)
Control of Nonconforming Product / Service (8.3)
Corrective Action (8.5.2)
Preventive Action (8.5.3)

In addition to these, ISO 9001:2008 requires a Quality Policy and Quality Manual (which may or may not include the above documents).

iii. Summary of ISO 9001:2008 in Informal Language

The quality policy is a formal statement from management, closely linked to the business and marketing plan and to customer needs. The quality policy is understood and followed at all levels and by all employees. Each employee needs measurable objectives to work towards.

Decisions about the quality system are made based on recorded data, and the system is regularly audited and evaluated for conformance and effectiveness.

Records should show how and where raw materials and products were processed, to allow products and problems to be traced to the source. You need a documented procedure to control quality documents in your company. Everyone must have access to up-to-date documents and be aware of how to use them.

To maintain the quality system and produce conforming product, you need to provide suitable infrastructure, resources, information, equipment, measuring and monitoring devices, and environmental conditions.

You need to map out all key processes in your company; control them by monitoring, measurement and analysis; and ensure that product quality objectives are met. If you can't monitor a process by measurement, then make sure the process is well enough defined that you can make adjustments if the product does not meet user needs.

For each product your company makes, you need to establish quality objectives; plan processes; and document and measure results to use as a tool for improvement. For each process, determine what kind of procedural documentation is required (note: a product is hardware, software, services, processed materials, or a combination of these).

You need to determine key points where each process requires monitoring and measurement, and ensure that all monitoring and measuring devices are properly maintained and calibrated.

You need to have clear requirements for purchased product. You need to determine customer requirements and create systems for communicating with customers about product information, inquiries, contracts, orders, feedback and complaints.

When developing new products, you need to plan the stages of development, with appropriate testing at each stage. You need to test and document whether the product meets design requirements, regulatory requirements and user needs.

You need to regularly review performance through internal audits and meetings. Determine whether the quality system is working and what improvements can be made. Deal with past problems and potential problems. Keep records of these activities and the resulting decisions, and monitor their effectiveness (note: you need a documented procedure for internal audits).

You need documented procedures for dealing with actual and potential nonconformances (problems involving suppliers or customers, or internal problems). Make sure no one uses bad product, determine what to do with bad product, deal with the root cause of the problem and keep records to use as a tool to improve the system.


iv. 1987 version ISO 9000:1987 had the same structure as the UK Standard BS 5750, with three 'models' for quality management systems, the selection of which was based on the scope of activities of the organization:

ISO 9001:1987 Model for quality assurance in design, development, production, installation, and servicing was for companies and organizations whose activities included the creation of new products.

ISO 9002:1987 Model for quality assurance in production, installation, and servicing had basically the same material as ISO 9001 but without covering the creation of new products.

ISO 9003:1987 Model for quality assurance in final inspection and test covered only the final inspection of finished product, with no concern for how the product was produced.

ISO 9000:1987 was also influenced by existing U.S. and other Defense Standards ("MIL SPECS"), and so was well-suited to manufacturing. The emphasis tended to be placed on conformance with procedures rather than the overall process of managementwhich was likely the actual intent. v. 1994 version ISO 9000:1994 emphasized quality assurance via preventive actions, instead of just checking final product, and continued to require evidence of compliance with documented procedures. As with the first edition, the down-side was that companies tended to implement its requirements by creating shelf-loads of procedure manuals, and becoming burdened with an ISO bureaucracy. In some companies, adapting and improving processes could actually be impeded by the quality system.


vi. 2000 version

ISO 9001:2000 combines the three standards 9001, 9002, and 9003 into one, called 9001. Design and development procedures are required only if a company does in fact engage in the creation of new products. The 2000 version sought to make a radical change in thinking by actually placing the concept of process management front and center ("process management" being the monitoring and optimizing of a company's tasks and activities, instead of just inspecting the final product). The 2000 version also demands involvement by upper executives, in order to integrate quality into the business system and avoid delegation of quality functions to junior administrators. Another goal is to improve effectiveness via process performance metrics: numerical measurement of the effectiveness of tasks and activities. Expectations of continual process improvement and tracking customer satisfaction were made explicit. The ISO 9000 standard is continually being revised by standing technical committees and advisory groups, who receive feedback from those professionals who are implementing the standard.

vii. 2008 version
ISO 9001:2008 only introduces clarifications to the existing requirements of ISO 9001:2000 and some changes intended to improve consistency with ISO 14001:2004. There are no new requirements. A quality management system being upgraded just needs to be checked to see if it is following the clarifications introduced in the amended version.


viii. Certification
ISO does not itself certify organizations. Many countries have formed accreditation bodies to authorize certification bodies, which audit organizations applying for ISO 9001 compliance certification. Although commonly referred to as ISO 9000:2000 certification, the actual standard to which an organization's quality management can be certified is ISO 9001:2000. Both the accreditation bodies and the certification bodies charge fees for their services. The various accreditation bodies have mutual agreements with each other to ensure that certificates issued by one of the Accredited Certification Bodies (CB) are accepted worldwide. The applying organization is assessed based on an extensive sample of its sites, functions, products, services and processes; a list of problems ("action requests" or "non-compliances") is made known to the management. If there are no major problems on this list, the certification body will issue an ISO 9001 certificate for each geographical site it has visited, once it receives a satisfactory improvement plan from the management showing how any problems will be resolved. An ISO certificate is not a once-and-for-all award, but must be renewed at regular intervals recommended by the certification body, usually around three years. In contrast to the Capability Maturity Model there are no grades of competence within ISO 9001.

ix. Auditing
Two types of auditing are required to become registered to the standard: auditing by an external certification body (external audit) and audits by internal staff trained for this process (internal audits). The aim is a continual process of review and assessment, to verify that the system is working as it's supposed to, find out where it can improve and to correct or prevent problems identified. It is considered healthier for internal auditors to audit outside their usual management line, so as to bring a degree of independence to their judgments.

Under the 1994 standard, the auditing process could be adequately addressed by performing "compliance auditing":

Tell me what you do (describe the business process)
Show me where it says that (reference the procedure manuals)
Prove that that is what happened (exhibit evidence in documented records)

How this led to preventive actions was not clear. The 2000 standard uses the process approach. While auditors perform similar functions, they are expected to go beyond mere auditing for rote "compliance" by focusing on risk, status and importance. This means they are expected to make more judgments on what is effective, rather than merely adhering to what is formally prescribed. The difference from the previous standard can be explained thus: under the 1994 version, the question was broadly "Are you doing what the manual says you should be doing?", whereas under the 2000 version, the question is more "Will this process help you achieve your stated objectives? Is it a good process or is there a way to do it better?". The ISO 19011 standard for auditing applies to ISO 9001 besides other management systems like EMS (ISO 14001), FSMS (ISO 22000), etc.

Industry-specific Interpretations
The ISO 9001 standard is generalized and abstract. Its parts must be carefully interpreted to make sense within a particular organization. Developing software is not like making cheese or offering counseling services; yet the ISO 9001 guidelines, because they are business management guidelines, can be applied to each of these. Diverse organizations, such as police departments (US), professional


soccer teams (Mexico) and city councils (UK), have successfully implemented ISO 9001:2000 systems. Over time, various industry sectors have wanted to standardize their interpretations of the guidelines within their own marketplace. This is partly to ensure that their versions of ISO 9000 have their specific requirements, but also to try and ensure that more appropriately trained and experienced auditors are sent to assess them.

The TickIT guidelines are an interpretation of ISO 9000 produced by the UK Board of Trade to suit the processes of the information technology industry, especially software development.

AS9000 is the Aerospace Basic Quality System Standard, an interpretation developed by major aerospace manufacturers. Those major manufacturers include AlliedSignal, Allison Engine, Boeing, General Electric Aircraft Engines, Lockheed-Martin, McDonnell Douglas, Northrop Grumman, Pratt & Whitney, Rockwell-Collins, Sikorsky Aircraft, and Sundstrand. The current version is AS9100.

PS 9000 is an application of the standard for Pharmaceutical Packaging Materials. The Pharmaceutical Quality Group (PQG) of the Institute of Quality Assurance (IQA) has developed PS 9000:2001. It aims to provide a widely accepted baseline GMP framework of best practice within the pharmaceutical packaging supply industry. It applies ISO 9001:2000 to pharmaceutical printed and contact packaging materials.

QS 9000 is an interpretation agreed upon by major automotive manufacturers (GM, Ford, Chrysler). It includes techniques such as FMEA and APQP. QS 9000 has now been replaced by ISO/TS 16949.

ISO/TS 16949:2002 is an interpretation agreed upon by major automotive manufacturers (American and European manufacturers); the latest version is based on ISO 9001:2000. The emphasis on a process approach is stronger than in ISO 9001:2000. ISO/TS 16949:2002 contains the full text of ISO 9001:2000 and automotive industry-specific requirements.

TL 9000 is the Telecom Quality Management and Measurement System Standard, an interpretation developed by the telecom consortium, QuEST Forum. The current version is 4.0 and, unlike ISO 9001 or the above sector standards, TL 9000 includes standardized product measurements that can be benchmarked. In 1998 QuEST Forum developed the TL 9000 Quality Management System to meet the supply chain quality requirements of the worldwide telecommunications industry.

ISO 13485:2003 is the medical industry's equivalent of ISO 9001:2000. Whereas the standards it replaces were interpretations of how to apply ISO 9001 and ISO 9002 to medical devices, ISO 13485:2003 is a standalone standard. Compliance with ISO 13485 does not necessarily mean compliance with ISO 9001:2000.

ISO 29001 specifies quality management system requirements for the design, development, production, installation and service of products for the petroleum, petrochemical and natural gas industries.

x. Debate on the Effectiveness of ISO 9000 The debate on the effectiveness of ISO 9000 commonly centers on the following questions: 1. Are the quality principles in ISO 9001:2000 of value? (Note that the version date is important: in the 2000 version ISO attempted to address many concerns and criticisms of ISO 9000:1994). 2. Does it help to implement an ISO 9001:2000 compliant quality management system? 3. Does it help to obtain ISO 9001:2000 certification?


xi. Advantages
It is widely acknowledged that proper quality management improves business, often having a positive effect on investment, market share, sales growth, sales margins, competitive advantage, and avoidance of litigation.[2][3] The quality principles in ISO 9000:2000 are also sound, according to Wade[4] and Barnes,[3] who says "ISO 9000 guidelines provide a comprehensive model for quality management systems that can make any company competitive." Barnes also cites a survey by Lloyd's Register Quality Assurance which indicated that ISO 9000 increased net profit, and another by Deloitte-Touche which reported that the costs of registration were recovered in three years. According to the Providence Business News,[5] implementing ISO often gives the following advantages:
1. Create a more efficient, effective operation
2. Increase customer satisfaction and retention
3. Reduce audits
4. Enhance marketing
5. Improve employee motivation, awareness, and morale
6. Promote international trade
7. Increase profit
8. Reduce waste and increase productivity

However, a broad statistical study of 800 Spanish companies[6] found that ISO 9000 registration in itself creates little improvement, because companies interested in it have usually already made some type of commitment to quality management and were performing just as well before registration.[2]

In today's service-sector driven economy, more and more companies are using ISO 9000 as a business tool. Through the use of properly stated quality objectives, customer satisfaction surveys and a well-defined continual


improvement program, companies are using ISO 9000 processes to increase their efficiency and profitability.

xii. Problems
A common criticism of ISO 9001 is the amount of money, time and paperwork required for registration.[7] According to Barnes, "Opponents claim that it is only for documentation. Proponents believe that if a company has documented its quality systems, then most of the paperwork has already been completed."[3]

According to Seddon, ISO 9001 promotes specification, control, and procedures rather than understanding and improvement.[8][9] Wade argues that ISO 9000 is effective as a guideline, but that promoting it as a standard "helps to mislead companies into thinking that certification means better quality, ... [undermining] the need for an organization to set its own quality standards."[4] Paraphrased, Wade's argument is that reliance on the specifications of ISO 9001 does not guarantee a successful quality system.

The standard is seen as especially prone to failure when a company is interested in certification before quality.[8] Certifications are in fact often based on customer contractual requirements rather than a desire to actually improve quality.[3][10] "If you just want the certificate on the wall, chances are, you will create a paper system that doesn't have much to do with the way you actually run your business," said ISO's Roger Frost.[10] Certification by an independent auditor is often seen as the problem area, and according to Barnes, "has become a vehicle to increase consulting services."[3] In fact, ISO itself advises that ISO 9001 can be implemented without certification, simply for the quality benefits that can be achieved.[11]

Another problem reported is the competition among the numerous certifying bodies, leading to a softer approach to the defects noticed in the operation of the Quality System of a firm.


Abrahamson[12] argued that fashionable management discourse such as Quality Circles tends to follow a lifecycle in the form of a bell curve, possibly indicating a management fad.

xiii. Summary
A good overview for effective use of ISO 9000 is provided by Barnes:[3] "Good business judgment is needed to determine its proper role for a company... Is certification itself important to the marketing plans of the company? If not, do not rush to certification... Even without certification, companies should utilize the ISO 9000 model as a benchmark to assess the adequacy of its quality programs."

4.2 GMP

Good Manufacturing Practice or GMP (also referred to as 'cGMP' or 'current Good Manufacturing Practice') is a term that is recognized worldwide for the control and management of manufacturing and quality control testing of foods, pharmaceutical products, and medical devices. Since sampling product will statistically only ensure that the samples themselves (and perhaps the areas adjacent to where the samples were taken) are suitable for use, and end-point testing relies on sampling, GMP takes the holistic approach of regulating the manufacturing and laboratory testing environment itself. An extremely important part of GMP is documentation of every aspect of the process, activities, and operations involved with drug and medical device manufacture. If the documentation showing how the product was made and tested (which enables traceability and, in the event of future problems, recall from the market) is not correct and in order, then the product does not meet the required specification and is considered contaminated (adulterated in the US).


Additionally, GMP requires that all manufacturing and testing equipment has been qualified as suitable for use, and that all operational methodologies and procedures (such as manufacturing, cleaning, and analytical testing) utilized in the drug manufacturing process have been validated (according to predetermined specifications), to demonstrate that they can perform their purported function(s). In the US, the phrase "current good manufacturing practice" appears in 501(B) of the 1938 Food, Drug, and Cosmetic Act (21 USC 351). US courts may theoretically hold that a drug product is adulterated even if there is no specific regulatory requirement that was violated, as long as the process was not performed according to industry standards. By June 2010, the same cGMP requirements will apply to all manufacture of dietary supplements.[1]

i. The World Health Organization Version
The World Health Organization (WHO) version of GMP is used by pharmaceutical regulators and the pharmaceutical industry in over one hundred countries worldwide, primarily in the developing world. The European Union's GMP (EU-GMP) enforces more compliance requirements than the WHO GMP, as does the Food and Drug Administration's version in the US. Similar GMPs are used in other countries, with Australia, Canada, Japan, Singapore and others having highly developed/sophisticated GMP requirements. In the United Kingdom, the Medicines Act (1968) covers most aspects of GMP; what is commonly referred to as "The Orange Guide", because of the colour of its cover, is officially known as the Rules and Guidance for Pharmaceutical Manufacturers and Distributors. Since the 1999 publication of GMPs for Active Pharmaceutical Ingredients by the International Conference on Harmonization (ICH), GMPs now apply in those countries and trade groupings that are signatories to ICH (the EU, Japan and the


US), and applies in other countries (e.g., Australia, Canada, Singapore) which adopt ICH guidelines for the manufacture and testing of active raw materials. GMP is designed to help assure the quality of drug products by ensuring several key attributes, including correctness and legibility of recorded manufacturing and control documentation. Data transfers must be performed in specific ways to avoid mistakes (e.g., writing down a reading on a balance and requiring a second person to also check the balance reading to assure accuracy). Methods have been developed to make this process easier (e.g., links between equipment and central data storage facilities for direct transfer of important data).

ii. Enforcement
GMPs are enforced in the United States by the FDA; within the European Union, GMP inspections are performed by National Regulatory Agencies (e.g., GMP inspections are performed in the United Kingdom by the Medicines and Healthcare products Regulatory Agency (MHRA)); in the Republic of Korea (South Korea) by the Korea Food and Drug Administration (KFDA); in Australia by the Therapeutic Goods Administration (TGA); in South Africa by the Medicines Control Council (MCC); in Brazil by the Agência Nacional de Vigilância Sanitária (National Health Surveillance Agency, Brazil) (ANVISA); in Iran, India and Pakistan by the Ministry of Health[1]; and by similar national organisations worldwide. Each of the inspectorates carries out routine GMP inspections to ensure that drug products are produced safely and correctly; additionally, many countries perform Pre-Approval Inspections (PAI) for GMP compliance prior to the approval of a new drug for marketing. Regulatory agencies (including the FDA in the US and regulatory agencies in many European nations) are authorized to conduct unannounced inspections, though some are scheduled. FDA routine domestic inspections are usually unannounced, but must be conducted according to 704(A) of the FD&C Act (21 USC 374), which requires that they are performed at a "reasonable time."


Courts have held that any time the firm is open for business is a reasonable time for an inspection.

iii. Other good practices
Other 'Good Practice' systems, along the same lines as GMP, exist:

Good Laboratory Practice (GLP), for laboratories conducting non-clinical studies (toxicology and pharmacology studies in animals);
Good Clinical Practice (GCP), for hospitals and clinicians conducting clinical studies on new drugs in humans;
Good Regulatory Practice (GRP), for the management of regulatory commitments, procedures and documentation.

Collectively, these 'Good Practice' requirements are referred to as 'GxP' requirements, all of which follow similar philosophies. This is far from a complete list; other examples include Good Agriculture Practices, Good Guidance Practices, and Good Tissue Practices. In the US, medical device manufacturers must follow what are called "Quality System Regulations", which are deliberately harmonized with ISO requirements, not cGMPs.

4.3 Halal Certificate

JAKIM Halal Certificate

i. Checklist
The applicant must complete the application forms by furnishing all the information as required, and the following certificates / documents must be enclosed together with the application form:


a. Company profile
b. Company / business registration
c. Name and product description/menu for verification
d. Contents of ingredients
e. Names and addresses of manufacturers / suppliers of the ingredients
f. Halal status for the ingredients, and the halal certificate or the product specification for critical ingredients (if applicable)
g. Type of packaging materials
h. Manufacturing processes and procedures
i. Other documents such as HACCP, ISO, GHP, GMP, TQM and so forth
j. Premise/factory location map

The applicant must provide a special Halal Certification folder for keeping all relevant documents. It will be useful when an inspection is carried out at the premise. For an incomplete application, the applicant will be notified by mail accordingly; for a complete application, a notice for the service charge will be mailed to the applicant.

Notes: Documents from the CCM (Companies Commission of Malaysia) are not required if the application is made through BLESS.


a. Manufacturer/producer
b. Distributor/trader
c. Sub-contract manufacturer
d. Repacking
e. Food premise
f. Abattoir
Applications for the Halal Certification Certificate for the national and international market must be forwarded directly to JAKIM. Applications for the Halal Certification Certificate for the local/domestic market may be forwarded directly to the relevant State Islamic Department/Council. Applications will be rejected for the following reasons:
a. The company/firm produces both halal and non-halal products.
b. The product is not halal.
c. The product is a natural material not involving any processing.
d. The product is a medication or is categorized by the Ministry of Health as a pharmaceutical product.
e. The product is a hair colour/hair dye.
f. The product is a finished processed product from abroad.
g. The product uses synonyms or confusing vocabulary, such as "bak kut teh", etc.
h. The product is a fertilizer or animal food.


CHAPTER 5 Quality Awards


5.1

Deming Prize

i. Deming Prize in Japan The late Dr. W. E. Deming (1900 - 1993), one of the foremost experts of quality control in the United States, was invited to Japan by the Union of Japanese Scientists and Engineers (JUSE) in July 1950. Upon his visit, Dr. Deming lectured day after day his "Eight-Day Course on Quality Control" at the Auditorium of the Japan Medical Association in Kanda-Surugadai, Tokyo. This was followed by Dr. Deming's "One-Day Course on Quality Control for Top Management," held in Hakone. Through these seminars, Dr. Deming taught the basics of statistical quality control plainly and thoroughly to executives, managers, engineers and researchers of Japanese industry. His teachings made a deep impression on the participants' minds and provided great impetus to quality control in Japan, which was in its infancy.

The transcript of the eight-day course, "Dr. Deming's Lectures on Statistical Control of Quality," was compiled from stenographic records and distributed for a charge. Dr. Deming donated his royalties to JUSE. In appreciation of Dr. Deming's generosity, the late Mr. Kenichi Koyanagi, managing director of JUSE, proposed using them to fund a prize to commemorate Dr. Deming's contribution and friendship in a lasting way and to promote the continued development of quality control in Japan. Upon receiving the proposal, JUSE's board of directors unanimously resolved to establish the Deming Prize. As shown in the table below, the categories of the Deming Prize are the Deming Prize for Individuals, the Deming Application Prize and the Quality Control Award for Operations Business Units.

The Deming Prize for Individuals - For individuals or groups. Given to those who have made outstanding contributions to the study of TQM or statistical methods used for TQM, or those who have made outstanding contributions in the dissemination of TQM.

The Deming Application Prize - For organizations or divisions of organizations that manage their business autonomously. Given to organizations or divisions of organizations that have achieved distinctive performance improvement through the application of TQM in a designated year.

The Quality Control Award for Operations Business Units - For operations business units of an organization. Given to operations business units of an organization that have achieved distinctive performance improvement through the application of quality control/management in the pursuit of TQM in a designated year.

The Deming Prize, especially the Deming Application Prize that is given to companies, has exerted an immeasurable influence directly or indirectly on the development of quality control/management in Japan.


Applicant companies and divisions of companies sought after new approaches to quality management that met the needs of their business environment and challenged for the Deming Prize. Those organizations developed effective quality management methods, established the structures for implementation and put the methods into practice. Commonly, those who have challenged for the Prize share the feeling that they have had a valuable experience and that the management principle of achieving a business success through quality improvement has really worked. Through witnessing the success of these organizations, many other companies have been inspired to begin their own quest for quality management. Learning from those who went before them, the new practitioners are convinced that quality management is an important key to their business success and that the challenge to attain the Prize can provide an excellent opportunity to learn useful quality methodologies. Thus, quality management has spread to many organizations, its methods have evolved over the years and the methods have contributed to the advancement of these organizations' improvement activities. This mechanism that encourages each organization's self-development comes from the examination process of the Deming Prize, though the very process has invited some criticism that the marking criteria for the Deming Application Prize are unclear. To make the examination process more transparent and to communicate the intentions of the Deming Prize more clearly, the evaluation criteria and the judgment criteria for passing are now presented. However, the Committee's basic stance on the examination criteria remains unchanged. Namely, the criteria should reflect each applicant organization's circumstance.

The Deming Prize examination does not require applicants to conform to a model provided by the Deming Prize Committee. Rather, the applicants are expected


to understand their current situation, establish their own themes and objectives and improve and transform themselves company-wide. Not only the results achieved and the processes used, but also the effectiveness expected in the future are subjects for the examination. To the best of their abilities, the examiners evaluate whether or not the themes established by the applicants were commensurate to their situation; whether or not their activities were suitable to their circumstance and whether or not their activities are likely to achieve their higher objectives in the future. The Deming Prize Committee views the examination process as an opportunity for "mutual-development," rather than "examination." While in reality the applicants still receive the examination by a third party, the examiners' approach to evaluation and judgment is comprehensive. Every factor such as the applicants' attitude toward executing Total Quality Management (TQM), their implementation status and the resulting effects are taken into overall consideration. In other words, the Deming Prize Committee does not specify what issues the applicants must address; rather, the applicants themselves are responsible for identifying and addressing such issues, and thus this process allows quality methodologies to be further developed. Total Quality Control (TQC), which had been developed in Japan as discussed above, was re-imported to the United States in the 1980s and contributed to the revitalization of its industries. While the term TQC had been used in Japan, it was translated as TQM in western nations. To follow an internationally-accepted practice, Japan changed the name from TQC to TQM. There is no easy success at this time of constant change. No organization can

expect to build excellent quality and management systems just by solving problems given by others. They need to think on their own, set lofty goals and drive themselves to challenge for achieving those goals. For these companies that introduce and implement TQM in this manner, the Deming Application Prize


aims to be used as a tool for improving and transforming their business management. The Deming Prize Committee conducts the examination and awards the Deming Prize. It is customary that the chairman of the Federation of Economic Organizations assumes office as the chairman of the Committee. The Committee members consist of TQM experts from industries and academia. The Deming Prize Committee utilizes five subcommittees to carry out the Deming Prize examination and discuss related matters.

The Total Adjustment Subcommittee - Coordinates Deming Prize-related activities, widely listens to input on how to improve the examination and award process and reports its recommendations to the Committee.
The System Amendment Subcommittee - Reviews the systems and regulations regarding the Deming Prize and proposes necessary revisions to the Committee.
The Deming Prize for Individuals Subcommittee - Examines and selects the candidates for the Deming Prize for Individuals.
The Deming Application Prize Subcommittee - Examines and selects the candidates for the Japan Quality Medal, the Deming Application Prize and the Quality Control Award for Operations Business Units. Also conducts the TQM Diagnosis by Deming Prize Committee Members.
The Nikkei QC Literature Prize Subcommittee - Examines and selects the candidates for the Nikkei QC Literature Prize.

ii. TQM Diagnosis by the Deming Prize Committee
Recommended for preparing for the Deming Prize challenge or for grasping the current level of TQM.


Carrying out the TQM Diagnosis by the Deming Prize Committee is a mandatory requirement upon application for the Deming Prize/Japan Quality Medal. In the event that the application is made to prepare for the Deming Prize/Japan Quality Medal challenge, the pre-application consultation will also be carried out by the examiners during the on-site TQM diagnosis. i. What is the TQM Diagnosis? It is useful to have a third party objectively diagnose the implementation status of TQM and provide recommendations so that the company can better understand where it stands and what it has to do to promote TQM more effectively. Established in 1971, the TQM Diagnosis, which is provided by the Deming Application Prize Subcommittee upon request of a company, aims to contribute to the further development of that company's TQM. The TQM Diagnosis is not a preliminary Deming Application Prize examination. A company that receives the TQM Diagnosis cannot apply for the Deming Application Prize examination that same year. Furthermore, whether or not a company has received the TQM Diagnosis has no influence or bearing whatsoever on the results of the Deming Prize examination. ii. TQM Diagnosis Procedures The purpose of the TQM Diagnosis is to further advance the promotion and practice of effective TQM in companies under diagnosis. The TQM diagnosis and resulting guidance is provided from an objective viewpoint to companies at varying stages of TQM advancement, as indicated below. Those companies that wish to receive the TQM Diagnosis must complete and submit the application form with the necessary documents at least three months prior to the desired diagnosis dates. However, no diagnosis will be conducted during the Deming Prize examination period (early July to mid-October).


For companies at the introductory and promotional stages: diagnose the status of TQM and provide recommendations.
For companies that wish to effectively use the Deming Application Prize criteria to promote TQM: diagnose the status of TQM and provide recommendations in view of the criteria.
For companies that wish to receive the Diagnosis in lieu of the on-site review three years after receiving the Japan Quality Medal or the Deming Application Prize: diagnose the status of TQM and provide recommendations.
The Deming Application Prize Subcommittee conducts the TQM Diagnosis. While the details of the diagnosis program will be determined in consultation with the company, the methods and documents used for the diagnosis follow those for the Deming Application Prize. As a rule, the diagnosis will be based on the company's presentations, the on-site examination, the document review and questions and answers. The results of the diagnosis will be communicated through a report on the diagnosis findings after the findings of all the examiners who conducted the diagnosis have been compiled. Those companies that wish to receive the TQM Diagnosis should contact the JUSE Secretariat for the Deming Prize Committee.

5.2

MBNQA

i. Introductions


The Malcolm Baldrige National Quality Award is given by the United States National Institute of Standards and Technology. Through the actions of the National Productivity Advisory Committee chaired by Jack Grayson, it was established by the Malcolm Baldrige National Quality Improvement Act of 1987 (Public Law 100-107) and named for Malcolm Baldrige, who served as United States Secretary of Commerce during the Reagan administration from 1981 until his 1987 death in a rodeo accident. APQC (the American Productivity & Quality Center) organized the first White House Conference on Productivity, spearheading the creation and design of the Malcolm Baldrige National Quality Award in 1987, and jointly administering the award for its first three years. The program recognizes quality service in the business, health care, education, and nonprofit sectors and was inspired by the ideas of Total Quality Management or TQM. This is the only quality award that is actually awarded by the President of the United States. This award and the Ron Brown Award are the two U.S. presidential awards given to corporations. The original stated purposes of the award were to:

promote quality awareness
recognize quality achievements of US companies
publicize successful quality strategies

The current award criteria are stated to have three important roles in strengthening US competitiveness:

To help improve organizational performance practices, capabilities and results
To facilitate communication and sharing of the best practice information among US organizations of all types
To serve as a working tool for understanding and managing performance and for guiding planning and opportunities for learning

The criteria are designed to help organizations use an aligned approach to organizational performance management that results in:

Delivery of ever-improving value to customers, contributing to market success
Improvement in overall organizational effectiveness and capabilities
Organizational and personal learning

The seven categories of the criteria are:
1. Leadership
2. Strategic Planning
3. Customer & Market Focus
4. Measurement, Analysis and Knowledge Management
5. Workforce Focus
6. Process Management
7. Results

ii. Results The basic criteria for the award are found at the Baldrige Program Web Site where they provide free downloads as follows:

Criteria for Performance Excellence
Education Criteria for Performance Excellence
Healthcare Criteria for Performance Excellence

iii. Winners
2008
Poudre Valley Health System, Fort Collins, CO (health care)
Cargill Corn Milling North America, Wayzata, Minn. (manufacturing)
Iredell-Statesville Schools, Statesville, N.C. (education)
2007
PRO-TEC Coating Co., Leipsic, Ohio (small business)
Mercy Health System, Janesville, Wisc. (health care)
Sharp Healthcare, San Diego, Calif. (health care)
City of Coral Springs, Coral Springs, Florida (nonprofit)
U.S. Army Armament Research, Development and Engineering Center (ARDEC), Picatinny Arsenal, N.J. (nonprofit)
2006
MESA Products, Inc., Tulsa, Okla. (small business)
Premier Inc., San Diego, Calif. (service)
North Mississippi Medical Center, Tupelo, Miss. (health care)
2005
Sunny Fresh Foods, Inc., Monticello, Minn. (manufacturing)
DynMcDermott Petroleum Operations, New Orleans, La. (service)
Park Place Lexus, Plano, Texas (small business)
Richland College, Dallas, Texas (education)
Jenks Public Schools, Jenks, Okla. (education)
Bronson Methodist Hospital, Kalamazoo, Mich. (health care)
2004
The Bama Companies, Tulsa, Okla. (manufacturing)
Texas Nameplate Company, Inc., Dallas, Texas (small business)
Kenneth W. Monfort College of Business, Greeley, Colo. (education)
Robert Wood Johnson University Hospital Hamilton, Hamilton, N.J. (health care)
2003
Medrad, Inc., Indianola, Pa. (manufacturing)
Boeing Aerospace Support, St. Louis, Mo. (service)
Caterpillar Financial Services Corp., Nashville, Tenn. (service)
Stoner Inc., Quarryville, Pa. (small business)
Community Consolidated School District 15, Palatine, Ill. (education)
Baptist Hospital, Inc., Pensacola, Fla. (health care)
Saint Luke's Hospital of Kansas City, Kansas City, Mo. (health care)
2002
Motorola Inc. Commercial, Government and Industrial Solutions Sector, Schaumburg, Ill. (manufacturing)
Branch-Smith Printing Division, Fort Worth, Texas (small business)
SSM Health Care, St. Louis, Mo. (health care)
2001
Clarke American Checks, Incorporated, San Antonio, Texas (manufacturing)
Pal's Sudden Service, Kingsport, Tenn. (small business)
Chugach School District, Anchorage, Alaska (education)
Pearl River School District, Pearl River, N.Y. (education)
University of Wisconsin-Stout, Menomonie, Wis. (education)
2000
Dana Corp.-Spicer Driveshaft Division, Toledo, Ohio (manufacturing)
KARLEE Company, Inc., Garland, Texas (manufacturing)
Operations Management International, Inc., Greenwood Village, Colo. (service)
Los Alamos National Bank, Los Alamos, N.M. (small business)
1999
STMicroelectronics, Inc.-Region Americas, Carrollton, Texas (manufacturing)
BI Performance Services, Minneapolis, Minn. (service)
The Ritz-Carlton Hotel Company, L.L.C., Atlanta, Ga. (service)
Sunny Fresh Foods, Monticello, Minn. (small business)
1998
Boeing Airlift and Tanker Programs, Long Beach, Calif. (manufacturing)
Solar Turbines Inc., San Diego, Calif. (manufacturing)
Texas Nameplate Company Inc., Dallas, Texas (small business)
1997
3M Dental Products Division, St. Paul, Minn. (manufacturing)
Solectron Corp., Milpitas, Calif. (manufacturing)
Merrill Lynch Credit Corp., Jacksonville, Fla. (service)
Xerox Business Services, Rochester, N.Y. (service)
1996
ADAC Laboratories, Milpitas, Calif. (manufacturing)
Dana Commercial Credit Corp., Toledo, Ohio (service)
Custom Research Inc., Minneapolis, Minn. (small business)
Trident Precision Manufacturing Inc., Webster, N.Y. (small business)
1995
Armstrong World Industries Building Products Operation, Lancaster, Pa. (manufacturing)
Corning Telecommunications Products Division, Corning, N.Y. (manufacturing)
1994
AT&T Consumer Communications Services, Basking Ridge, N.J. (service)
GTE Directories Corp., Dallas/Ft. Worth, Texas (service)
Wainwright Industries Inc., St. Peters, Mo. (small business)
1993
Eastman Chemical Co., Kingsport, Tenn. (manufacturing)
Ames Rubber Corp., Hamburg, N.J. (small business)
1992
AT&T Network Systems Group/Transmission Systems Business Unit, Morristown, N.J. (manufacturing)
Texas Instruments Inc. Defense Systems & Electronics Group, Dallas, Texas (manufacturing)
AT&T Universal Card Services, Jacksonville, Fla. (service)
The Ritz-Carlton Hotel Co., Atlanta, Ga. (service)
Granite Rock Co., Watsonville, Calif. (small business)
1991
Solectron Corp., Milpitas, Calif. (manufacturing)
Zytec Corp., Eden Prairie, Minn. (manufacturing)
Marlow Industries, Dallas, Texas (small business)
1990
Cadillac Motor Car Division, Detroit, Mich. (manufacturing)
IBM Rochester, Rochester, Minn. (manufacturing)
Federal Express Corp., Memphis, Tenn. (service)
Wallace Co. Inc., Houston, Texas (small business)
1989
Milliken & Co., Spartanburg, S.C. (manufacturing)
Xerox Corp. Business Products and Systems, Rochester, N.Y. (manufacturing)
1988
Motorola Inc., Schaumburg, Ill. (manufacturing)
Commercial Nuclear Fuel Division of Westinghouse Electric Corp., Pittsburgh, Pa. (manufacturing)
Globe Metallurgical Inc., Beverly, Ohio (small business)

5.3

PMQA

a. PMQA for Private Sector
i. Prime Minister's Quality Award (Private Sector)
The Prime Minister's Quality Award (Private Sector Category) was first introduced on 9 November 1990. This annual national quality award is given to organisations in the private sector in recognition of their excellent achievement in quality management. Winning the award is a prestigious accomplishment, as the Prime Minister's Quality Award is proof of organisational excellence. Organisations that receive the Award may publicise and advertise receipt of the award for a period of 3 years, as long as the year of receiving the award is mentioned.
ii. Objectives
Promote quality awareness among various organisations in the private sector category.
Promote the adoption of quality values in organisations.
Encourage healthy competition among organisations towards continuous improvement of quality.
Encourage information sharing on successful performance strategies and the benefits derived from using these strategies.


iii. Who Can Apply
Any organisation registered under the Malaysian Companies Act 1965 is eligible to apply for this Award.
iv. Criteria
Top Management Leadership and Management of Quality
Use of Quality Data and Information
Human Resource Management
Customer Focus
Quality Assurance of External Suppliers
Process Management
Quality and Operational/Business Results
Corporate Responsibilities
v. On-Site Inspection
There will be a pre-assessment visit prior to the assessment visit by the Panel of Judges. The Panel of Judges will then recommend the award recipient to the Government. The primary objectives of the on-site assessments are to:
Verify the information provided in the application form; and
Clarify issues and questions raised during the review of the submission paper by the Panel of Judges.
vi. Qualification Rules
The qualification rules for applying for this award are:
All companies must be registered under the Companies Act, Malaysia 1965.
A completed Participation Form with the company's official stamp must be submitted.
The Audited Financial Statement for the previous year has to be attached.
An organisation awarded the PMQA is not entitled to be nominated again for a duration of three years, starting from the year of receiving the award.


Subsidiaries of large companies may apply as separate entities if they are able to provide supporting documents to prove their own organisational corporate identity as reflected in their corporate literature.
vii. The Award Recipient's Responsibilities and Contributions
The Award recipient is required to share information on its successful performance and quality strategies with other Malaysian organisations. However, the recipient is not required to share proprietary information, even if such information is part of the award application. The Award winner will only be eligible to apply for the Award again after 3 years, starting from the year of receiving the Award.
viii. Review Period
For the Award, all data and information should relate mainly to the company's performance in the immediate previous year.
ix. Recognition to the Winner
The Prime Minister's Quality Award (PMQA) Trophy
Cash prize of RM30,000 (Ringgit Malaysia Thirty Thousand only)
Certificate of Appreciation
Eligibility to use the Symbol Q for three years following the year of the Award for publicity purposes

b. Prime Minister's Quality Award for the Government Sector - AKPM (Public Sector)


i. INTRODUCTION
The Prime Minister's Quality Award (AKPM) is the Government's highest recognition for public sector agencies that have demonstrated overall excellence in the management of their respective organisations and are able to deliver quality services to customers. The implementation of the AKPM is part of the main activities under the Excellent Work Culture Movement. Under this award, finalists not selected as the AKPM winner will be considered for the following awards:
a. The Chief Secretary to the Government Quality Award (AKKSN);
b. The Director-General of Public Service Quality Award (AKKPPA); and
c. The Director-General of MAMPU Quality Award (AKKP MAMPU).
These awards are intended to recognise other agencies that are also excellent but have not yet reached the level of winning the AKPM.
ii. OBJECTIVES OF THE AKPM
The AKPM was established to:
a. Encourage and raise awareness of quality in the public sector;
b. Give formal recognition to Government agencies that have shown a deep understanding of quality management and improvement and that have achieved an exemplary level of quality leadership;
c. Publicise successful quality strategies;


d. Encourage healthy competition among Government agencies towards further improving quality management practices; and
e. Benefit agencies by enabling them to assess their own level of excellence and to identify strengths, weaknesses and recommendations for improving their service delivery.
iii. FORM OF PRIZE
The AKPM winner will receive the following prizes and privileges:
a. A cash prize of RM50,000.00, a trophy and a certificate of appreciation.
b. Permission, for three (3) consecutive years, to use the Q symbol and to print the statement "Winner of the Prime Minister's Quality Award, Year .." in the header or footer of the pages used.
The AKKSN winner will receive a cash prize of RM30,000.00, a trophy and a certificate of appreciation. The AKKPPA winner will receive a cash prize of RM25,000.00, a trophy and a certificate of appreciation. The AKKP MAMPU winner will receive a cash prize of RM20,000.00, a trophy and a certificate of appreciation.
iv. ELIGIBILITY REQUIREMENTS FOR APPLICATION
The eligibility requirements to apply for the AKPM are as follows:


a. All Ministries, Departments and Agencies at the Federal, State and District levels, as well as Federal or State Statutory Bodies, are eligible to enter this award;
b. Divisions, sections or units under Ministries, Departments, Agencies and Federal or State Statutory Bodies are not eligible to enter this award separately;
c. An agency wishing to enter this award must have no fewer than 100 staff members;
d. The application to enter this award must be signed by the Head of Agency using the prescribed application form;
e. Applications from agencies that have already received the AKPM will not be considered for a period of three years starting from the following year; and
f. The AKPM application form and report must be submitted to the AKPM secretariat no later than the stipulated closing date.
v. EVALUATION PROCESS
AKPM applications will go through the following evaluation process:
a. Reference to the Anti-Corruption Agency for corruption cases, the Public Service Department for disciplinary cases, the Public Complaints Bureau for public complaint cases and the National Audit Department for financial management cases;
b. Initial screening by the secretariat to ensure that the agency meets the stipulated eligibility requirements;


c. Initial screening of AKPM applications as below:
- evaluation of documents and a pre-assessment visit by the Technical Committee;
- presentation of the Evaluation Report to the Panel of Assessors, including strengths, weaknesses and suggestions for improvement, as the basis for the proposed shortlist.
d. Evaluation by the AKPM Panel of Assessors appointed by the Director-General of MAMPU, who also serves as Chairman of the AKPM Evaluation Panel. The evaluation is carried out through on-site visits to the shortlisted agencies. During the evaluation visit, the agency is required to brief the Panel of Assessors on the organisation's excellence, guided by the four (4) prescribed evaluation criteria;
e. Recommendation of the winner by the Panel of Assessors to the Service Delivery Monitoring Panel (Panel 3P), chaired by the Chief Secretary to the Government; and
f. Final decision on the AKPM winner by Panel 3P.
vi. APPLICATION PROCEDURE
An agency wishing to enter the AKPM must complete the following actions:
a. Complete the application form to enter the Prime Minister's Quality Award, as attached, signed by the Head of Agency;
b. Prepare five (5) hard copies and one (1) soft copy of the AKPM Award Report containing:


i. an Executive Summary, i.e. a summary of the agency's features of excellence; and
ii. a full report describing and providing evidence of the features of excellence against all four criteria, as attached.
c. The Executive Summary and the full report must use Arial font, size 12, with 1.5 line spacing;
d. Submit the application form together with the AKPM Report through the Head of Agency to: the Prime Minister's Quality Award Secretariat, Malaysian Administrative Modernisation and Management Planning Unit (MAMPU);
e. Submit the application before the application closing date stated in the invitation letter; and
f. Submit a copy of the application to the Secretary-General of the Ministry/State Secretary for information.


CHAPTER 6 Inspection Process

6.1

Sampling Techniques

a. Accidental Sampling


Accidental sampling is a type of nonprobability sampling which involves the sample being drawn from that part of the population which is close to hand; that is, a sample population is selected because it is readily available and convenient. The researcher using such a sample cannot scientifically make generalizations about the total population from this sample because it would not be representative enough. For example, if an interviewer were to conduct such a survey at a shopping centre early in the morning on a given day, the people that he or she could interview would be limited to those present there at that given time, and they would not represent the views of other members of society in that area; coverage would improve only if the survey were conducted at different times of day and several times per week. This type of sampling is most useful for pilot testing.

In the theory of finite population sampling, Bernoulli sampling is a sampling process where each element of the population that is sampled is subjected to an independent Bernoulli trial which determines whether the element becomes part of the sample during the drawing of a single sample. An essential property of Bernoulli sampling is that all elements of the population have the same probability of being included in the sample during the drawing of a single sample. Bernoulli sampling is therefore a special case of Poisson sampling, in which each element of the population may have a different probability of being included in the sample.
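As an illustration of the Bernoulli sampling procedure just described, the following is a minimal Python sketch in which each element of a population is included in the sample with the same fixed probability via an independent Bernoulli trial; the population list, the inclusion probability p and the seed are arbitrary values chosen only for the example.

    import random

    def bernoulli_sample(population, p, seed=None):
        # Each element is included with the same probability p,
        # independently of every other element.
        rng = random.Random(seed)
        return [x for x in population if rng.random() < p]

    # Example: sample roughly 30% of 20 numbered items.
    items = list(range(1, 21))
    print(bernoulli_sample(items, p=0.3, seed=42))

Note that the realized sample size is itself random; only its expected value (here 0.3 x 20 = 6 elements) is fixed.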


b. Cluster Sampling
Cluster sampling is a sampling technique used when "natural" groupings are evident in a statistical population. It is often used in marketing research. In this technique, the total population is divided into these groups (or clusters) and a sample of the groups is selected. Then the required information is collected from the elements within each selected group. This may be done for every element in these groups, or a subsample of elements may be selected within each of these groups. The technique works best when most of the variation in the population is within the groups, not between them. i. Cluster Elements

Elements within a cluster should ideally be as heterogeneous as possible, but there should be homogeneity between cluster means. Each cluster should be a small scale representation of the total population. The clusters should be mutually exclusive and collectively exhaustive. A random sampling technique is then used on any relevant clusters to choose which clusters to include in the study. In single-stage cluster sampling, all the elements from each of the selected clusters are used. In two-stage cluster sampling, a random sampling technique is applied to the elements from each of the selected clusters. The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so analysis is done on a population of clusters (at least in the first stage). In stratified sampling, the analysis is done on elements within strata. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are studied. The main objective of cluster sampling is to reduce costs by increasing sampling efficiency. This contrasts with stratified sampling where the main objective is to increase precision. ii. Aspects of Cluster Sampling One version of cluster sampling is area sampling or geographical cluster sampling. Clusters consist of geographical areas. Because a geographically dispersed population can be expensive to survey, greater economy than simple random sampling can be achieved by treating several respondents within a local area as a cluster. It is usually necessary to increase the total sample size to achieve equivalent precision in the estimators, but cost savings may make that feasible.
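To make the single-stage and two-stage procedures described above concrete, here is a minimal Python sketch; the clusters, the number of clusters drawn and the within-cluster subsample size are made-up values for illustration only.

    import random

    def one_stage_cluster_sample(clusters, n_clusters, seed=None):
        # Randomly select whole clusters and keep every element in them.
        rng = random.Random(seed)
        chosen = rng.sample(list(clusters), n_clusters)
        return [elem for name in chosen for elem in clusters[name]]

    def two_stage_cluster_sample(clusters, n_clusters, n_per_cluster, seed=None):
        # First select clusters, then subsample elements within each one.
        rng = random.Random(seed)
        chosen = rng.sample(list(clusters), n_clusters)
        sample = []
        for name in chosen:
            k = min(n_per_cluster, len(clusters[name]))
            sample.extend(rng.sample(clusters[name], k))
        return sample

    # Hypothetical clusters: shops grouped by district.
    clusters = {
        "district_A": ["A1", "A2", "A3", "A4"],
        "district_B": ["B1", "B2", "B3"],
        "district_C": ["C1", "C2", "C3", "C4", "C5"],
    }
    print(one_stage_cluster_sample(clusters, n_clusters=2, seed=1))
    print(two_stage_cluster_sample(clusters, n_clusters=2, n_per_cluster=2, seed=1))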


In some situations, cluster sampling is only appropriate when the clusters are approximately the same size. This can be achieved by combining clusters. If this is not possible, probability proportionate to size sampling is used. In this method, the probability of selecting any cluster varies with the size of the cluster, giving larger clusters a greater probability of selection and smaller clusters a lower probability. However, if clusters are selected with probability proportionate to size, the same number of interviews should be carried out in each sampled cluster so that each unit sampled has the same probability of selection. Cluster sampling is used to estimate mortality in cases such as wars, famines and natural disasters. Cognitive interviewing is a field research method, developed collaboratively by psychologists and survey researchers, that is used primarily in pre-testing survey instruments. It allows survey researchers to collect verbal information regarding survey responses and is used in evaluating whether a question is measuring the construct the researcher intends. The data collected are then used to adjust problematic questions in the questionnaire before fielding the survey instrument to the full sample. Although survey researchers do not totally agree as to what a cognitive interview entails, it in general collects the following information from participants: evaluations of how the subject constructed his or her answers; explanations of what the subject interprets the questions to mean; reports of any difficulties the subject had in answering the questions; and anything else that reveals the circumstances surrounding the subject's answers. In general, there are two methods practiced when conducting a cognitive interview. The first method, called the think-aloud method, encourages participants to verbalize their thoughts while responding to the survey questions. This method is considered purer as it reduces the possibility of the interviewer introducing any bias into the


participants' answers. In contrast, the disadvantage of this method is that it requires training the participant in the think-aloud process, which can be burdensome to the interviewee. The second method has the interviewer ask detailed probes after the subject answers a survey question (called the probing method). An example of a probe question is: "In your own words, what is this question asking?" or "How did you arrive at your answer?" Advocates of this method suggest that follow-up probes do not interfere with the actual process of responding to survey questions while requiring very little training on the part of the respondent. In order to conduct a cognitive interview on a survey instrument, the researcher should recruit a minimum of 10 and a maximum of 25 participants. The participants recruited should reflect the diversity of the population being studied. Cognitive interviewing is regularly practiced by U.S. federal agencies, including the Census Bureau, the National Center for Health Statistics and the Bureau of Labor Statistics. c. The Demon Algorithm The demon algorithm is a Monte Carlo method for efficiently sampling members of a microcanonical ensemble with a given energy. An additional degree of freedom, called 'the demon', is added to the system and is able to store and provide energy. If a drawn microscopic state has lower energy than the original state, the excess energy is transferred to the demon. For a sampled state that has higher energy than desired, the demon provides the missing energy if it is available. The demon cannot have negative energy and it does not interact with the particles beyond exchanging energy. Note that the additional degree of freedom of the demon does not significantly alter a system with many particles on a macroscopic level. i. Full Procedure Steps


1. Perform a random change in the state of a randomly chosen particle (e.g. change its velocity or position).
2. Calculate the change in energy ΔE of the thermal system.
3. Negative ΔE, i.e. excess energy, is given to the demon by adding |ΔE| to the demon's energy. This case (ΔE < 0) is always accepted.
4. The demon provides positive ΔE to keep the total energy constant only if it has sufficient energy, i.e. Ed > ΔE. In this case the change is accepted; otherwise the randomly chosen change is rejected and the algorithm is restarted from the original microscopic state.
5. If the change is accepted, repeat the algorithm for the new configuration.
Since energy fluctuations per degree of freedom are only of order 1/N, the presence of the demon has little effect on the macroscopic properties of systems with high numbers of particles. After many iterations of the algorithm, the interplay of the demon and the random energy changes equilibrates the system. Assuming that a particular system approaches all possible states over very long times (quasi-ergodicity), the resulting Monte Carlo dynamics realistically sample microscopic states that correspond to the given energy value. This is only true if macroscopic quantities are stable over many Monte Carlo steps, i.e. if the system is at equilibrium.
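The following minimal Python sketch illustrates the procedure above for a toy system of independent velocities (kinetic energy only); the system size, the step size and the starting energies are arbitrary example values, not part of the original algorithm description.

    import random

    def demon_step(velocities, demon_energy, max_kick=0.5, rng=random):
        # One demon-algorithm update: try a random velocity change and
        # exchange energy with the demon so the total energy stays constant.
        i = rng.randrange(len(velocities))
        old_v = velocities[i]
        new_v = old_v + rng.uniform(-max_kick, max_kick)
        delta_e = 0.5 * new_v ** 2 - 0.5 * old_v ** 2   # change in kinetic energy
        if delta_e <= 0:
            # Excess energy goes to the demon; always accepted.
            velocities[i] = new_v
            demon_energy += -delta_e
        elif demon_energy >= delta_e:
            # The demon pays for the increase if it has enough energy.
            velocities[i] = new_v
            demon_energy -= delta_e
        # Otherwise the move is rejected and the state is unchanged.
        return demon_energy

    velocities = [1.0] * 100          # toy microcanonical system
    demon_energy = 0.0
    for _ in range(10000):
        demon_energy = demon_step(velocities, demon_energy)
    print("demon energy after equilibration:", demon_energy)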


In the theory of finite population sampling, the inclusion probability of an element is its probability of becoming part of the sample during the drawing of a single sample. Each element of the population may have a different probability of being included in the sample. The inclusion probability is also termed the first-order inclusion probability to distinguish it from the second-order inclusion probability, i.e. the probability of including a pair of elements. Generally, the first-order inclusion probability of the ith element of the population is denoted by the symbol πi, and the second-order inclusion probability that a pair consisting of the ith and jth elements of the population is included in a sample during the drawing of a single sample is denoted by πij. For example, under Bernoulli sampling with inclusion probability p, πi = p for every element and πij = p^2 for every pair, because the trials are independent. d. Latin Hypercube Sampling (LHS) The statistical method of Latin hypercube sampling (LHS) was developed to generate a distribution of plausible collections of parameter values from a multidimensional distribution. The sampling method is often applied in uncertainty analysis. The technique was first described by McKay in 1979. It was further elaborated by Ronald L. Iman and others in 1981, and detailed computer codes and manuals were later published. In the context of statistical sampling, a square grid containing sample positions is a Latin square if (and only if) there is only one sample in each row and each column. A Latin hypercube is the generalisation of this concept to an arbitrary number of dimensions, whereby each sample is the only one in each axis-aligned hyperplane containing it. When sampling a function of N variables, the range of each variable is divided into M equally probable intervals. M sample points are then placed to satisfy the Latin hypercube requirements; note that this forces the number of divisions, M, to be equal for each variable. Also note that this sampling scheme does not require more samples for more dimensions (variables); this independence is one of the main advantages of this sampling scheme. Another advantage is that random samples can be taken one at a time, remembering which samples were taken so far. The maximum number of combinations for a Latin hypercube of M divisions and N variables (i.e., dimensions) can be computed with the following formula: (M!)^(N-1).
For example, a Latin hypercube of M = 4 divisions with N = 2 variables (i.e., a square) will have 24 possible combinations. A Latin hypercube of M = 4 divisions with N = 3 variables (i.e., a cube) will have 576 possible combinations. Orthogonal sampling adds the requirement that the entire sample space must be sampled evenly. Although more efficient, the orthogonal sampling strategy is more difficult to implement since all random samples must be generated simultaneously.
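A minimal Python sketch of the Latin hypercube construction described above: for each of the N variables, the unit range is cut into M equally probable intervals, the intervals are permuted independently, and one point is drawn uniformly inside each selected interval. The function name and the example sizes (M = 4, N = 2) are illustrative choices only.

    import random

    def latin_hypercube(n_samples, n_vars, rng=random):
        # Return n_samples points in [0, 1) ^ n_vars forming a Latin hypercube:
        # each variable has exactly one point in each of its n_samples strata.
        points = [[0.0] * n_vars for _ in range(n_samples)]
        for var in range(n_vars):
            strata = list(range(n_samples))
            rng.shuffle(strata)                      # random pairing of strata to samples
            for sample, stratum in enumerate(strata):
                low = stratum / n_samples
                points[sample][var] = low + rng.random() / n_samples
        return points

    for p in latin_hypercube(4, 2):
        print([round(x, 3) for x in p])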

In two dimensions, the difference between random sampling, Latin hypercube sampling and orthogonal sampling can be explained as follows: in random sampling, new sample points are generated without taking into account the previously generated sample points. One thus does not necessarily need to know beforehand how many sample points are needed.


In Latin hypercube sampling, one must first decide how many sample points to use and, for each sample point, remember in which row and column the sample point was taken. e. Orthogonal Sampling In orthogonal sampling, the sample space is divided into equally probable subspaces (for example, four subspaces in two dimensions). All sample points are then chosen simultaneously, making sure that the total ensemble of sample points is a Latin hypercube sample and that each subspace is sampled with the same density. Thus, orthogonal sampling ensures that the ensemble of random numbers is a very good representative of the real variability, LHS ensures that the ensemble of random numbers is representative of the real variability, whereas traditional random sampling (sometimes called brute force) is just an ensemble of random numbers without any guarantees. f. Line-Intercept Sampling (LIS) In statistics, line-intercept sampling (LIS) is a method of sampling elements in a region whereby an element is sampled if a chosen line segment, called a transect, intersects the element [1]. g. Monte Carlo Method In thermodynamical systems, equal macroscopic properties (e.g. temperature) can result from different microscopic properties (e.g. the velocities of individual particles). Simulating microscopic properties by computing the full equations of motion for every individual particle is computationally very expensive. Monte Carlo methods can overcome this problem by sampling microscopic states according to stochastic rules instead of modeling the complete microphysics.


The microcanonical ensemble is a collection of microscopic states which have fixed energy, volume and number of particles. In an enclosed system with a certain number of particles, energy is the only macroscopic variable affected by the microphysics. The Monte Carlo simulation of a microcanonical ensemble thus requires sampling different microscopic states with the same energy. When the number of possible microscopic states of thermodynamical systems is very large, it is inefficient to randomly draw a state from all possible states and accept it for the simulation if it has the right energy, since many drawn states would be rejected. Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used when simulating physical and mathematical systems. Because of their reliance on repeated computation and random or pseudo-random numbers, Monte Carlo methods are most suited to calculation by a computer. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. Monte Carlo simulation methods are especially useful in studying systems with a large number of coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). More broadly, Monte Carlo methods are useful for modeling phenomena with significant uncertainty in inputs, such as the calculation of risk in business. These methods are also widely used in mathematics: a classic use is for the evaluation of definite integrals, particularly multidimensional integrals with complicated boundary conditions. The term Monte Carlo method was coined in the 1940s by physicists working on nuclear weapon projects in the Los Alamos National Laboratory.


i. Overview

The Monte Carlo method can be illustrated as a game of battleship. First a player makes some random shots. Next the player applies algorithms (i.e. a battleship is four dots in the vertical or horizontal direction). Finally, based on the outcome of the random sampling and the algorithms, the player can determine the likely locations of the other player's ships. There is no single Monte Carlo method; instead, the term describes a large and widely-used class of approaches. However, these approaches tend to follow a particular pattern:
i. Define a domain of possible inputs.
ii. Generate inputs randomly from the domain.
iii. Perform a deterministic computation using the inputs.
iv. Aggregate the results of the individual computations into the final result.
For example, the value of π can be approximated using this method:


a. Draw a square on the ground, then inscribe a circle within it.
b. Uniformly scatter some objects of uniform size throughout the square, for example grains of rice or sand.
c. Count the number of objects in the circle, multiply by four, and divide by the total number of objects in the square.
d. The proportion of objects within the circle versus objects within the square will approximate π/4, which is the ratio of the circle's area to the square's area, thus giving an approximation to π.
e. Notice how the approximation follows the general pattern of Monte Carlo algorithms. First, we define a domain of inputs: in this case, it is the square which circumscribes our circle. Next, we generate inputs randomly (scatter individual grains within the square), then perform a computation on each input (test whether it falls within the circle). At the end, we aggregate the results into our final result, the approximation of π.
Note also two other common properties of Monte Carlo methods: the computation's reliance on good random numbers, and its slow convergence to a better approximation as more data points are sampled. If grains are purposefully dropped into only, for example, the centre of the circle, they will not be uniformly distributed, and so our approximation will be poor. An approximation will also be poor if only a few grains are randomly dropped into the whole square. Thus, the approximation of π will become more accurate both as the grains are dropped more uniformly and as more are dropped.
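A minimal Python version of this dart-throwing estimate, using random points in the unit square in place of physical grains; the sample count is an arbitrary choice for the example.

    import random

    def estimate_pi(n_points, seed=None):
        # Estimate pi from the fraction of random points in the unit square
        # that fall inside the inscribed quarter circle.
        rng = random.Random(seed)
        inside = 0
        for _ in range(n_points):
            x, y = rng.random(), rng.random()
            if x * x + y * y <= 1.0:      # inside the quarter circle of radius 1
                inside += 1
        return 4.0 * inside / n_points

    print(estimate_pi(100000, seed=0))   # typically prints a value near 3.14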


ii. History
The name "Monte Carlo" was popularized by physics researchers Stanislaw Ulam, Enrico Fermi, John von Neumann, and Nicholas Metropolis, among others; the name is a reference to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money to gamble.[3] The use of randomness and the repetitive nature of the process are analogous to the activities conducted at a casino. Random methods of computation and experimentation (generally considered forms of stochastic simulation) can arguably be traced back to the earliest pioneers of probability theory (see, e.g., Buffon's needle, and the work on small samples by William Gosset), but are more specifically traced to the pre-electronic computing era. The general difference usually described about a Monte Carlo form of simulation is that it systematically "inverts" the typical mode of simulation, treating deterministic problems by first finding a probabilistic analog (see simulated annealing). Previous methods of simulation and statistical sampling generally did the opposite: using simulation to test a previously understood deterministic problem. Though examples of an "inverted" approach do exist historically, they were not considered a general method until the popularity of the Monte Carlo method spread. Perhaps the most famous early use was by Enrico Fermi in the 1930s, when he used a random method to calculate the properties of the newly-discovered neutron. Monte Carlo methods were central to the simulations required for the Manhattan Project, though they were severely limited by the computational tools of the time. Therefore, it was only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth. In the 1950s they were used at Los Alamos for early work relating to the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields. Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number


generators, which were far quicker to use than the tables of random numbers which had previously been used for statistical sampling. iii. Applications As mentioned, Monte Carlo simulation methods are especially useful for modeling phenomena with significant uncertainty in inputs and in studying systems with a large number of coupled degrees of freedom. Specific areas of application include: iv. Physical Sciences Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms. The Monte Carlo method is widely used in statistical physics, in particular Monte Carlo molecular modeling as an alternative to computational molecular dynamics; see Monte Carlo method in statistical physics. In experimental particle physics, these methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. v. Design and Visuals Monte Carlo methods have also proven efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations which produce photorealistic images of virtual 3D models, with applications in video games, architecture, design, computer-generated films and special effects in cinema. vi. Finance and Business Monte Carlo methods in finance are often used to calculate the value of companies, to evaluate investments in projects at a corporate level or to evaluate

financial derivatives. The Monte Carlo method is intended for financial analysts who want to construct stochastic or probabilistic financial models as opposed to the traditional static and deterministic models. For its use in the insurance industry, see stochastic modelling. vii. Telecommunications When planning a wireless network, the design must be proved to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if the results are not satisfactory, the network design goes through an optimization process. viii. Games Monte Carlo methods have recently been applied in game-playing-related artificial intelligence theory. Most notably, the game of Go has seen remarkably successful Monte Carlo algorithm-based computer players. One of the main problems that this approach has in game playing is that it sometimes misses an isolated, very good move. These approaches are often strong strategically but weak tactically, as tactical decisions tend to rely on a small number of crucial moves which are easily missed by the randomly searching Monte Carlo algorithm. ix. Monte Carlo Simulation versus What If Scenarios The opposite of Monte Carlo simulation might be considered deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Various combinations of each input variable are manually chosen (such as best case, worst case, and most likely case), and the results recorded for each so-called "what if" scenario.


By contrast, Monte Carlo simulation considers random sampling of probability distribution functions as model inputs to produce hundreds or thousands of possible outcomes instead of a few discrete scenarios. The results provide probabilities of different outcomes occurring. For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then run again with Monte Carlo simulation and triangular probability distributions, shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios. i. Use in Mathematics In general, Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers and observing the fraction of the numbers that obey some property or properties. The method is useful for obtaining numerical solutions to problems which are too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration. ii. Integration Deterministic methods of numerical integration operate by taking a number of evenly spaced samples from a function. In general, this works very well for functions of one variable. However, for functions of vectors, deterministic quadrature methods can be very inefficient. To numerically integrate a function of a two-dimensional vector, equally spaced grid points over a two-dimensional surface are required. For instance, a 10x10 grid requires 100 points. If the vector has 100 dimensions, the same spacing on the grid would require 10^100 points, far too many to be computed. 100 dimensions is by no means unreasonable, since in many physical problems, a "dimension" is equivalent to a degree of freedom. (See Curse of dimensionality.)

Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space and taking some kind of average of the function values at these points. By the law of large numbers, this method will display 1/√N convergence, i.e. quadrupling the number of

sampled points will halve the error, regardless of the number of dimensions. A refinement of this method is to make the points random, but more likely to come from regions of high contribution to the integral than from regions of low contribution. In other words, the points should be drawn from a distribution similar in form to the integrand. Understandably, doing this precisely is just as difficult as solving the integral in the first place, but there are approximate methods available, from simply making up an integrable function thought to be similar, to one of the adaptive routines developed for this purpose. A similar approach involves using low-discrepancy sequences instead: the quasi-Monte Carlo method. Quasi-Monte Carlo methods can often be more efficient at numerical integration because the sequence "fills" the area better in a sense and samples more of the most important points, which can make the simulation converge to the desired solution more quickly.
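A minimal Python sketch of the plain Monte Carlo estimate of an integral over the unit hypercube, illustrating the dimension-independent averaging described above; the integrand, the dimension and the sample size are arbitrary examples.

    import random

    def mc_integrate(f, dim, n_samples, seed=None):
        # Estimate the integral of f over the unit hypercube [0, 1] ^ dim
        # as the average of f at uniformly random points.
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_samples):
            point = [rng.random() for _ in range(dim)]
            total += f(point)
        return total / n_samples

    # Example integrand: sum of squared coordinates; the exact integral is dim / 3.
    f = lambda x: sum(v * v for v in x)
    print(mc_integrate(f, dim=10, n_samples=100000, seed=0))   # close to 10/3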


iii. Optimization Another powerful and very popular application for random numbers in numerical simulation is in numerical optimization. These problems use functions of some often large-dimensional vector that are to be minimized (or maximized). Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the optimal set of, say, 10 moves which produces the best evaluation function at the end. The traveling salesman problem is another optimization problem. There are also applications to engineering design, such as multidisciplinary design optimization. Most Monte Carlo optimization methods are based on random walks. Essentially, the program will move around a marker in multi-dimensional space, tending to move in directions which lead to a lower function value, but sometimes moving against the gradient. iv. Probabilistic Formulation The probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines a priori information with new information obtained by measuring some observable parameters (data). As, in the general case, the theory linking data with model parameters is nonlinear, the a posteriori probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.). When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have information on the resolution power of the data. In the general case we may have a large number of model parameters, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available. The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution. For details, see Mosegaard and Tarantola (1995), or Tarantola (2005).
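A minimal Python sketch of the random-walk idea behind the optimization methods described above: the marker usually moves downhill, but an occasional uphill move is accepted with a small probability so the search can escape local minima. This is a simplified, fixed-temperature variant of the Metropolis acceptance rule mentioned in the preceding paragraph; the objective function, step size and temperature are arbitrary example choices.

    import math
    import random

    def random_walk_minimize(f, x0, n_steps=20000, step=0.1, temperature=0.5, seed=None):
        # Move a marker through the search space, preferring moves that lower f
        # but sometimes accepting uphill moves.
        rng = random.Random(seed)
        x, fx = list(x0), f(x0)
        best_x, best_f = list(x), fx
        for _ in range(n_steps):
            candidate = [xi + rng.uniform(-step, step) for xi in x]
            fc = f(candidate)
            if fc <= fx or rng.random() < math.exp((fx - fc) / temperature):
                x, fx = candidate, fc             # accept the move
                if fx < best_f:
                    best_x, best_f = list(x), fx
        return best_x, best_f

    # Example objective: a simple bowl with its minimum at (1, -2).
    f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
    print(random_walk_minimize(f, [5.0, 5.0], seed=0))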


v. Computational Mathematics

Monte Carlo methods are useful in many areas of computational mathematics, where a lucky choice can find the correct result. A classic example is Rabin's algorithm for primality testing: for any n which is not prime, a random x has at least a 75% chance of proving that n is not prime. Hence, if n is not prime but x says that it might be, we have observed at most a 1-in-4 event. If 10 different random x say that "n is probably prime" when it is not, we have observed a one-in-a-million event. In general, a Monte Carlo algorithm of this kind produces one answer with a guarantee (n is composite, and x proves it so) and another answer without a guarantee, but with a bound on how often that answer is wrong (in this case, at most 25% of the time per trial). See also Las Vegas algorithm for a related, but different, idea.

vi. Monte Carlo and Random Numbers

Interestingly, Monte Carlo simulation methods do not always require truly random numbers to be useful, although for some applications, such as primality testing, unpredictability is vital (see Davenport (1995)).[9] Many of the most useful techniques use deterministic, pseudo-random sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense. What this means depends on the application, but typically the numbers should pass a series of statistical tests. Testing that the numbers are uniformly distributed, or follow another desired distribution, when a large enough number of elements of the sequence are considered is one of the simplest and most common such tests.

Multistage sampling is a complex form of cluster sampling. Using all the sample elements in all the selected clusters may be prohibitively expensive or not necessary. Under these circumstances, multistage cluster sampling becomes useful. Instead of using all the elements contained in the selected clusters, the

researcher randomly selects elements from each cluster. Constructing the clusters is the first stage. Deciding what elements within the cluster to use is the second stage. The technique is used frequently when a complete list of all members of the population does not exist and is inappropriate. In some cases, several levels of cluster selection may be applied before the final sample elements are reached. For example, household surveys conducted by the Australian Bureau of Statistics begin by dividing metropolitan regions into 'collection districts', and selecting some of these collection districts (first stage). The selected collection districts are then divided into blocks, and blocks are chosen from within each selected collection district (second stage). Next, dwellings are listed within each selected block, and some of these dwellings are selected (third stage). This method means that it is not necessary to create a list of every dwelling in the region, only for selected blocks. In remote areas, an additional stage of clustering is used, in order to reduce travel requirements.[1] Although cluster sampling and stratified sampling bear some superficial similarities, they are substantially different. In stratified sampling, a random sample is drawn from all the strata, where in cluster sampling only the selected clusters are studied, either in single stage or multi stage. h. Non-Probability Sampling i. Introductions Sampling is the use of a subset of the population to represent the whole population. Probability sampling, or random sampling, is a sampling technique in which the probability of getting any particular sample may be calculated. Nonprobability sampling does not meet this criterion and should be used with caution. Nonprobability sampling techniques cannot be used to infer from the sample to the general population. Any generalizations obtained from a nonprobability sample must be filtered through one's knowledge of the topic


being studied. Performing nonprobability sampling is considerably less expensive than doing probability sampling, but the results are of limited value.

ii. Examples

Convenience, haphazard or accidental sampling - members of the population are chosen based on their relative ease of access. Sampling friends, co-workers, or shoppers at a single mall are all examples of convenience sampling.
Snowball sampling - the first respondent refers a friend, the friend also refers a friend, and so on.
Judgmental sampling or purposive sampling - the researcher chooses the sample based on who they think would be appropriate for the study. This is used primarily when there is a limited number of people with expertise in the area being researched.
Deviant case - get cases that substantially differ from the dominant pattern (a special type of purposive sample).
Case study - the research is limited to one group, often with a similar characteristic or of small size.
Ad hoc quotas - a quota is established (say, 65% women) and researchers are free to choose any respondent they wish as long as the quota is met.

Even studies intended to be probability studies sometimes end up being nonprobability studies due to unintentional or unavoidable characteristics of the sampling method. In public opinion polling by private companies (or other organizations unable to require response), the sample can be self-selected rather than random. This often introduces an important type of error: self-selection error. This error sometimes makes it unlikely that the sample will accurately represent the broader population. Volunteering for the sample may be determined by characteristics such as submissiveness or availability. The

samples in such surveys should be treated as non-probability samples of the population, and the validity of the estimates of parameters based on them is unknown.

i. Finite Population Sampling

In the theory of finite population sampling, Poisson sampling is a sampling process where each element of the population that is sampled is subjected to an independent Bernoulli trial which determines whether the element becomes part of the sample during the drawing of a single sample. Each element of the population may have a different probability of being included in the sample. The probability of being included in a sample during the drawing of a single sample is denoted as the first-order inclusion probability of that element. If all first-order inclusion probabilities are equal, Poisson sampling becomes equivalent to Bernoulli sampling, which can therefore be considered to be a special case of Poisson sampling.

A mathematical consequence of Poisson sampling: the first-order inclusion probability of the ith element of the population is denoted by the symbol πi, and the second-order inclusion probability (the probability that a pair consisting of the ith and jth elements of the population is included in a sample during the drawing of a single sample) is denoted by πij. Because the inclusion of each element is decided by an independent trial, the following relation holds during Poisson sampling for i ≠ j:

πij = πi × πj
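A minimal sketch of Poisson sampling under these definitions may make the mechanics clearer; the unit labels and inclusion probabilities below are made-up illustrations, not values from the module.

import random

def poisson_sample(population, inclusion_probs, seed=None):
    """Each element is included via its own independent Bernoulli trial,
    so the realised sample size is itself random."""
    rng = random.Random(seed)
    return [unit for unit, p in zip(population, inclusion_probs)
            if rng.random() < p]

units = ["A", "B", "C", "D", "E"]
pi = [0.2, 0.4, 0.6, 0.8, 1.0]   # first-order inclusion probabilities
print(poisson_sample(units, pi, seed=1))
# If all the probabilities were equal, this would reduce to Bernoulli sampling.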

In quota sampling, the population is first segmented into mutually exclusive subgroups, just as in stratified sampling. Then judgment is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the ages of 45 and 60.

It is this second step which makes the technique one of non-probability sampling. In quota sampling, the selection of the sample is non-random, unlike in random sampling, and can often be unreliable. For example, interviewers might be tempted to interview those people in the street who look most helpful, or may choose to use accidental sampling to question those who are closest to them, for the sake of time-keeping. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is its greatest weakness, and quota versus probability sampling has been a matter of controversy for many years.

j. Quota Sampling

Quota sampling is useful when time is limited, a sampling frame is not available, the research budget is very tight, or when detailed accuracy is not important. You can also choose how many of each category are selected. A quota sample is a convenience sample with an effort made to ensure a certain distribution of demographic variables. Subjects are recruited as they arrive, and the researcher will assign them to demographic groups based on variables like age and gender. When the quota for a given demographic group is filled, the researcher will stop recruiting subjects from that particular group. This is the non-probability version of stratified sampling. Subsets are chosen, and then either convenience or judgment sampling is used to choose people from each subset.

k. Random Sampling

Stratified sampling is probably the most commonly used probability method. Subsets of the population are created so that each subset has a common characteristic, such as gender. Random sampling chooses a number of subjects from each subset.
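A minimal sketch of that last idea, with a made-up population (the identifiers, group sizes, and sample sizes are illustrative assumptions, not data from the module):

import random
from collections import defaultdict

# Hypothetical population of (person_id, gender) pairs.
population = [(i, "F" if i % 2 else "M") for i in range(1, 201)]

# Group the population into subsets sharing a common characteristic.
subsets = defaultdict(list)
for person, gender in population:
    subsets[gender].append(person)

# Randomly choose a fixed number of subjects from each subset.
rng = random.Random(0)
sample = {gender: rng.sample(members, 10) for gender, members in subsets.items()}
print({g: len(s) for g, s in sample.items()})   # {'F': 10, 'M': 10}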


In statistics, a simple random sample is a subset of individuals (a sample) chosen from a larger set (a population). Each individual is chosen randomly and entirely by chance, such that each individual has the same probability of being chosen at any stage during the sampling process, and each subset of k individuals has the same probability of being chosen for the sample as any other subset of k individuals (Yates, Daniel S.; David S. Moore, Daren S. Starnes (2008). The Practice of Statistics, 3rd Ed.. Freeman. ISBN 978-0-7167-7309-2.). This process and technique is known as Simple Random Sampling, and should not be confused with Random Sampling. In small populations and often in large ones, such sampling is typically done "without replacement" ('SRSWOR'), i.e., one deliberately avoids choosing any member of the population more than once. Although simple random sampling can be conducted with replacement instead, this is less common and would normally be described more fully as simple random sampling with replacement ('SRSWR'). An unbiased random selection of individuals is important so that in the long run, the sample represents the population. However, this does not guarantee that a particular sample is a perfect representation of the population. Simple random sampling merely allows one to draw externally valid conclusions about the entire population based on the sample. Conceptually, simple random sampling is the simplest of the probability sampling techniques. It requires a complete sampling frame, which may not be available or feasible to construct for large populations. Even if a complete frame is available, more efficient approaches may be possible if other useful information is available about the units in the population. Advantages are that it is free of classification error, and it requires minimum advance knowledge of the population other than the frame. Its simplicity also makes it relatively easy to interpret data collected via SRS. For these reasons,


simple random sampling best suits situations where not much information is available about the population and data collection can be efficiently conducted on randomly distributed items, or where the cost of sampling is small enough to make efficiency less important than simplicity. If these conditions are not true, stratified sampling or cluster sampling may be a better choice. Distinction between a random sample and a simple random sample In a simple random sample, one person must take a random sample from a population, and not have any order in which one chooses the specific individual. Let us assume you had a school with 1000 students, divided equally into boys and girls, and you wanted to select 100 of them for further study. You might put all their names in a bucket and then pull 100 names out. Not only does each person have an equal chance of being selected, we can also easily calculate the probability of a given person being chosen, since we know the sample size (n) and the population (N) and it becomes a simple matter of division: n/N or 100/1000 = 0.10 (10%) This means that every student in the school has a 10% or 1 in 10 chance of being selected using this method. Further, all combinations of 100 students have the same probability of selection. If a systematic pattern is introduced into random sampling, it is referred to as "systematic (random) sampling". For instance, if the students in our school had numbers attached to their names ranging from 0001 to 1000, and we chose a random starting point, e.g. 0533, and then pick every 10th name thereafter to give us our sample of 100 (starting over with 0003 after reaching 0993). In this sense, this technique is similar to cluster sampling, since the choice of the first unit will determine the remainder. This is no longer simple random sampling, because some combinations of 100 students have a larger selection probability


than others - for instance, {3, 13, 23, ..., 993} has a 1/10 chance of selection, while {1, 2, 3, ..., 100} cannot be selected under this method. There are a number of potential problems with simple and systematic random sampling. If the population is widely dispersed, it may be extremely costly to reach them. On the other hand, a current list of the whole population we are interested in (sampling frame) may not be readily available. Or perhaps, the population itself is not homogeneous and the sub-groups are very different in size. In such a case, precision can be increased through stratified sampling. Some problems that arise from random sampling can be overcome by weighting the sample to reflect the population or universe. For instance, if in our sample of 100 students we ended up with 60% boys and 40% girls, we could decrease the importance of the characteristics for boys and increase those of the girls to reflect our universe, which is 50/50. l. Sampling a Dichotomous Population If the members of the population come in two kinds, say "red" and "black", one can consider the distribution of the number of red elements in a sample of a given size. That distribution depends on the numbers of red and black elements in the full population. For a simple random sample with replacement, the distribution is a binomial distribution. For a simple random sample without replacement, one obtains a hypergeometric distribution. m. Snowball Sampling In social science research, snowball sampling is a technique for developing a research sample where existing study subjects recruit future subjects from among their acquaintances. Thus the sample group appears to grow like a rolling snowball. As the sample builds up, enough data is gathered to be useful for research. This sampling technique is often used in hidden populations which are


difficult for researchers to access; example populations would be drug users or commercial prostitutes. Because sample members are not selected from a sampling frame, snowball samples are subject to numerous biases. For example, people who have many friends are more likely to be recruited into the sample. It was widely believed that it was impossible to make unbiased estimates from snowball samples, but a variation of snowball sampling called respondent-driven sampling has been shown to allow researchers to make asymptotically unbiased estimates from snowball samples under certain conditions. Respondent-driven sampling also allows researchers to make estimates about the social network connecting the hidden population. When sub-populations vary considerably, it is advantageous to sample each subpopulation (stratum) independently. Stratification is the process of grouping members of the population into relatively homogeneous subgroups before sampling. The strata should be mutually exclusive: every element in the population must be assigned to only one stratum. The strata should also be collectively exhaustive: no population element can be excluded. Then random or systematic sampling is applied within each stratum. This often improves the representativeness of the sample by reducing sampling error. It can produce a weighted mean that has less variability than the arithmetic mean of a simple random sample of the population. n. Stratified Sampling Strategies i. Introductions Proportionate allocation uses a sampling fraction in each of the strata that is proportional to that of the total population. If the population consists of 60% in the male stratum and 40% in the female stratum, then the relative size of the two samples (three males, two females) should reflect this proportion.


Optimum allocation (or disproportionate allocation) - each stratum is proportionate to the standard deviation of the distribution of the variable. Larger samples are taken in the strata with the greatest variability to generate the least possible sampling variance.

A real-world example of using stratified sampling would be for a US political survey. If the respondents needed to reflect the diversity of the population of the United States, the researcher would specifically seek to include participants of various minority groups such as race or religion, based on their proportionality to the total population as mentioned above. A stratified survey could thus claim to be more representative of the US population than a survey of simple random sampling or systematic sampling. Similarly, if population density varies greatly within a region, stratified sampling will ensure that estimates can be made with equal accuracy in different parts of the region, and that comparisons of sub-regions can be made with equal statistical power. For example, in Ontario a survey taken throughout the province might use a larger sampling fraction in the less populated north, since the disparity in population between north and south is so great that a sampling fraction based on the provincial sample as a whole might result in the collection of only a handful of data from the north. Randomized stratification can also be used to improve population representativeness in a study.

ii. Advantages
a. Focuses on important subpopulations and ignores irrelevant ones.
b. Allows use of different sampling techniques for different subpopulations.
c. Improves the accuracy/efficiency of estimation.
d. Permits greater balancing of statistical power of tests of differences between strata by sampling equal numbers from strata varying widely in size.


iii. Disadvantages
a. Requires selection of relevant stratification variables, which can be difficult.
b. Is not useful when there are no homogeneous subgroups.
c. Can be expensive to implement.
d. Requires accurate information about the population, or introduces bias as a result of either measurement error (effects of which can be modeled by the errors-in-variables model) or selection bias.

iv. Practical Example

In general the size of the sample in each stratum is taken in proportion to the size of the stratum. This is called proportional allocation. Suppose that in a company there are the following staff:

male, full time: 90
male, part time: 18
female, full time: 9
female, part time: 63
Total: 180


and we are asked to take a sample of 40 staff, stratified according to the above categories. The first step is to find the total number of staff (180) and calculate the percentage in each group:

% male, full time = (90 / 180) x 100 = 50
% male, part time = (18 / 180) x 100 = 10
% female, full time = (9 / 180) x 100 = 5
% female, part time = (63 / 180) x 100 = 35

This tells us that of our sample of 40, 50% should be male, full time; 10% should be male, part time; 5% should be female, full time; and 35% should be female, part time. 50% of 40 is 20; 10% of 40 is 4; 5% of 40 is 2; 35% of 40 is 14.

In statistics, survey sampling involves selecting a sample from a finite population. It is an important part of planning statistical research and design of experiments. Sophisticated sampling techniques that are both economical and scientifically reliable have been developed.
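The same proportional allocation can be computed in a few lines; this is an illustrative sketch only (the stratum names and counts simply restate the example above).

strata = {
    "male, full time": 90,
    "male, part time": 18,
    "female, full time": 9,
    "female, part time": 63,
}
total = sum(strata.values())          # 180
sample_size = 40

# Proportional allocation: each stratum's share of the sample mirrors
# its share of the population.
allocation = {name: round(sample_size * count / total)
              for name, count in strata.items()}
print(allocation)
# {'male, full time': 20, 'male, part time': 4,
#  'female, full time': 2, 'female, part time': 14}

In this example the rounded allocations sum exactly to 40; in general, rounding can leave the total off by one or two, in which case the largest strata are usually adjusted to compensate.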


An entire industry of public opinion polling, as well as the technical activities of the U.S. Bureau of the Census, depends on these techniques. The most elementary methodology is called simple random sampling. Most of the theory of statistics assumes this kind of sampling unless otherwise noted. In theory it ensures that all subsets of the population are given a balanced probability of selection. The possibility of very expensive or very atypical samples has led to a variety of modifications such as stratified sampling, cluster sampling, and multistage sampling. In public opinion polling by private companies or organizations unable to require response, the resulting sample is self-selected rather than random. Volunteering for the sample may be determined by characteristics such as submissiveness or availability. The samples in such surveys are therefore non-probability samples of the population, and the validity of estimates of parameters based on them is unknown. Generally, the survey is designed to minimise such bias, such that it can reasonably be assumed that the sample is close enough to random to be treated as such.

o. Systematic Sampling

Systematic sampling is a statistical method involving the selection of elements from an ordered sampling frame. The most common form of systematic sampling is an equal-probability method, in which every kth element in the frame is selected, where k, the sampling interval (sometimes known as the 'skip'), is calculated as:

k = population size (N) / sample size (n)

Using this procedure, each element in the population has a known and equal probability of selection. This makes systematic sampling functionally similar to


simple random sampling. It is, however, more efficient if the variance within the systematic sample is greater than the variance of the population. The researcher must ensure that the chosen sampling interval does not hide a pattern. Any pattern would threaten randomness. A random starting point must also be selected. Systematic sampling is to be applied only if the given population is logically homogeneous, because systematic sample units are uniformly distributed over the population. Example: Suppose a supermarket wants to study the buying habits of its customers; using systematic sampling, it can choose every 10th or 15th customer entering the supermarket and conduct the study on this sample. This is random sampling with a system. From the sampling frame, a starting point is chosen at random, and choices thereafter are at regular intervals. For example, suppose you want to sample 8 houses from a street of 120 houses. 120/8=15, so every 15th house is chosen after a random starting point between 1 and 15. If the random starting point is 11, then the houses selected are 11, 26, 41, 56, 71, 86, 101, and 116. If, as is more frequently the case, the population is not evenly divisible (suppose you want to sample 8 houses out of 125, where 125/8=15.625), should you take every 15th house or every 16th house? If you take every 16th house, 8*16=128, so there is a risk that the last house chosen does not exist. On the other hand, if you take every 15th house, 8*15=120, so the last five houses will never be selected. The random starting point should instead be selected as a noninteger between 0 and 15.625 (inclusive on one endpoint only) to ensure that every house has an equal chance of being selected; the interval should now be nonintegral (15.625); and each noninteger selected should be rounded up to the next integer. If the random starting point is 3.6, then the houses selected are 4, 20, 35, 51, 67, 82, 98, and 113, where there are 3 cyclic intervals of 15 and 5 intervals of 16.
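The 8-from-125 procedure can be sketched directly; the code below is a hypothetical illustration (the function name, seed, and print statements are assumptions, not part of the module).

import math
import random

def systematic_sample(population_size, sample_size, seed=None):
    """Systematic sampling with a possibly non-integer interval: choose a random
    real start in (0, interval] and round each selection point up to the next integer."""
    rng = random.Random(seed)
    interval = population_size / sample_size     # e.g. 125 / 8 = 15.625
    start = interval * (1 - rng.random())        # uniform on (0, interval]
    return [math.ceil(start + i * interval) for i in range(sample_size)]

print(systematic_sample(125, 8, seed=3))
# A start of 3.6 gives houses 4, 20, 35, 51, 67, 82, 98 and 113, as in the text.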

To illustrate the danger of a systematic skip concealing a pattern, suppose we were to sample a planned neighbourhood where each street has ten houses on each block. This places houses #1, 10, 11, 20, 21, 30... on block corners; corner blocks may be less valuable, since more of their area is taken up by streetfront etc. that is unavailable for building purposes. If we then sample every 10th household, our sample will either be made up only of corner houses (if we start at 1 or 10) or have no corner houses (any other start); either way, it will not be representative.

Systematic sampling may also be used with non-equal selection probabilities. In this case, rather than simply counting through elements of the population and selecting every kth unit, we allocate each element a space along a number line according to its selection probability. We then generate a random start from a uniform distribution between 0 and 1, and move along the number line in steps of 1.

Example: We have a population of 5 units (A to E). We want to give unit A a 20% probability of selection, unit B a 40% probability, and so on up to unit E (100%). Assuming we maintain alphabetical order, we allocate each unit to the following interval:

A: 0 to 0.2
B: 0.2 to 0.6 (= 0.2 + 0.4)
C: 0.6 to 1.2 (= 0.6 + 0.6)
D: 1.2 to 2.0 (= 1.2 + 0.8)
E: 2.0 to 3.0 (= 2.0 + 1.0)

If our random start was 0.156, we would first select the unit whose interval contains this number (i.e. A). Next, we would select the interval containing 1.156 (element C), then 2.156 (element E). If instead our random start was 0.350, we would select from points 0.350 (B), 1.350 (D), and 2.350 (E).
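A short sketch of this number-line procedure (illustrative only; the helper name and seed are assumptions, while the units and probabilities restate the example above):

import itertools
import random

def pps_systematic(units, probabilities, seed=None):
    """Systematic sampling with unequal selection probabilities: lay the units
    along a number line, pick a random start in [0, 1), then step in units of 1."""
    rng = random.Random(seed)
    boundaries = list(itertools.accumulate(probabilities))   # 0.2, 0.6, 1.2, 2.0, 3.0
    point = rng.random()
    picks = []
    while point < boundaries[-1]:
        for unit, upper in zip(units, boundaries):
            if point < upper:
                picks.append(unit)
                break
        point += 1.0
    return picks

units = ["A", "B", "C", "D", "E"]
probs = [0.2, 0.4, 0.6, 0.8, 1.0]
print(pps_systematic(units, probs, seed=7))
# A start of 0.156 selects A, C and E; a start of 0.350 selects B, D and E.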


6.2 Inspection Procedure

a. Complete Inspection Process

Inspections are a formal process used to identify and correct errors in a completed deliverable, before the deliverable is used as input to a subsequent deliverable. For example, after inspection, the Requirements Definition is released for reference by the Functional Design Specification. The focus of the inspection process is on finding defects, rather than solutions, which can divert the inspection meeting time. The inspection process is conducted by dividing an entire deliverable, such as the Requirements Definition, into manageable pieces that can be optimally inspected in a meeting time of two hours.


b. Benefits of Inspections

Inspections provide a number of benefits, and are one of the least expensive and most effective methods of detecting errors. The inspection process:
i. Improves productivity by correcting defects early and preventing costly rework,
ii. Provides designers/programmers with immediate corrective feedback,
iii. Prevents perpetuation of errors in subsequent iterations of the development process,
iv. Makes participants more knowledgeable of the system at an earlier time frame,
v. Provides findings that can be used to improve the software development process early in the project.

c. Effort for Inspections

The inspection process adds approximately 15 percent of the development cost to the beginning of the development effort, but prevents the perpetuation of errors that cause costly rework in the latter stages of the project. Without inspections, approximately 45 percent of the development effort is spent on the correction of defects. With experience, and the use of optimum inspection rates, inspections reduce the amount of rework time to approximately 15 percent of the development effort. The earlier an error is found, the more effort saved. A rule of thumb illustrates the savings based on when the error is found and corrected through inspection, as opposed to being found in system test.
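Taking those percentages at face value, a rough back-of-the-envelope comparison shows the scale of the saving. The 1,000-hour project size below is an assumption chosen purely for illustration.

base_effort = 1000                    # total development hours (assumed)

rework_without = 0.45 * base_effort   # roughly 45% of effort lost to correcting defects
cost_with = 0.15 * base_effort + 0.15 * base_effort   # inspection effort plus residual rework

print(rework_without)   # 450.0 hours
print(cost_with)        # 300.0 hours, i.e. roughly 150 hours saved on this assumption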


d. Inspections Versus Testing

Inspections do not eliminate the need for testing, but can decrease the testing workload. Inspections can be used to evaluate requirements and design before development to eliminate defects early, while testing cannot. However, unit testing, integration testing, and system testing involve the actual running of the programs. Integration and system test, in particular, can find program interface and environmental problems, which inspections do not find.

e. Inspection Process Steps

The inspection process is a formal process consisting of a number of defined steps. These steps are:
i. Complete Inspection Logistics
ii. Conduct Inspection Orientation
iii. Review Materials Prior to the Inspection
iv. Conduct Inspection Meetings
v. Complete Inspection Action Items
vi. Conduct Reinspection

If scheduling constraints limit the amount of time available for inspections, apply various strategies to decrease the required inspection effort. For example, focus on certain types of defects that are known to be prevalent, inspect only complex portions of the deliverable, concentrate on types of inspections that are anticipated to have a high return, or focus only on major defects. In addition,


time can be diverted by replacing some of the existing review methods which test the same areas but are less effective (e.g., informal reviews or portions of unit testing which focus on the same defects as inspections).

During the inspection logistics step, the inspection chairperson completes the tasks necessary to initiate the inspection process. The logistics tasks include:
i. Determine if the deliverable is ready for inspection,
ii. Identify inspection participants,
iii. Divide the deliverable into manageable pieces,
iv. Format the deliverable so that it is easy to inspect,
v. Prepare the inspection meeting schedule,
vi. Distribute review materials, and other necessary information, to the participants.

f. Determine if Deliverable is Ready

To be ready for inspection, the deliverable must meet pre-inspection criteria established by the organization. For example:
i. All automated checks, such as compilation, spell check, and grammar check, are completed,
ii. A walk-through has been completed,
iii. A team leader review of the deliverable has been completed.


iv. Identify Inspection Participants
v. Select the applicable staff to participate in the inspection roles.
vi. Assign Inspection Roles
vii. Divide the Deliverable into Manageable Pieces

Divide the deliverable into pieces that can be inspected within a two-hour time frame. The pieces need to be complete (e.g., all code applicable to a particular business function). The rules of thumb for preparing and conducting inspection meetings identify optimum preparation and inspection rates.

g. Prepare For and Conduct Detailed Design Inspection Meetings; Prepare For and Conduct Code Inspection Meetings

Industry experience indicates that the rate of finding defects decreases when the rate of inspection increases. An inspection that is too rapid will not save costly rework effort later in the development process.

h. Format for Inspection

Format the deliverable so that it is easy to inspect. For example, include page numbers or line numbers, and extra space for making notations.

i. Prepare Inspection Meeting Schedule

Schedule inspection meetings in two-hour increments. Arrange the schedule to allow participants enough time to review the material prior to the meeting.


Determine which participants attend each meeting. Reserve the meeting place needed for the inspection of each portion of the deliverable. Publish an inspection meeting schedule and distribute the schedule to each participant. j. Distribute Review Materials Distribute the applicable materials for review, segmented into the workable pieces, (e.g., a section of the Requirements Definition, or the Detailed Design Specification for one module). Prepare and provide written inspection procedures, such as an Inspection Handbook, to standardize the process. For example, include methods for specifying clear pre-inspection and post-inspection criteria and checklists to focus the review.

k. Sample Checklists
a. Document Inspection Checklist
i. Record Metrics

For future inspections, track the number of hours spent on the logistics process. Ideally, enter this information into a database that is used to track the inspection process.

When the inspection process is new, or new inspection procedures are introduced, an orientation session helps to ensure participants have a common understanding of the process and the procedures. A number of types of orientation can be provided, including:
i. Inspection Overview Orientation,
ii. Project-Specific Orientation,
iii. Management Orientation,
iv. Chairperson Orientation.

l. Conduct Inspection Overview Orientation

Inspection overview orientation provides general information to staff who are new to the inspection process. Topics include:
i. Overview,
ii. Roles,
iii. Procedures.
This orientation can be combined with the Project-Specific Orientation if most participants are new to the inspection process.

m. Conduct Project-Specific Orientation

Orientation may be needed to introduce staff to information that is specific to the project. Topics include communication of information compiled as part of the inspection logistics, including:
i. Inspection strategy (e.g., division of focus so that each inspector has particular types of defects to focus on),
ii. How the deliverable has been divided into workable pieces for inspection,
iii. Role assignments,
iv. Schedule.


n. Conduct Management Orientation Separate orientation may be needed to educate management on their role in the inspection process, as well as the benefits of conducting inspections. o. Conduct Chairperson Orientation Staff who will act as chairpersons need extensive training as they are key to the success of the inspection process. The chairpersons need to be trained on the responsibilities specified in the chairperson role. p. Identify Inspection Chairperson Inspection participants must review applicable materials prior to attending the inspection meeting, to ensure familiarity with the deliverable and to optimize the use of time during the inspection meeting. q. Identify Materials to Review The chairperson identifies and provides the materials to review, which includes items, such as the source document, (e.g., the Requirements Definition), the deliverable, (e.g., the Functional Design Specification), the documented inspection procedures, and any checklists. r. Review Materials The reviewer reads the material to gain a general understanding of the workings of the deliverable. The reviewer compares the deliverable to the source document to ensure accuracy, and notes any discrepancies. In addition, the reviewer uses the checklists and any applicable strategies, (e.g., focus on a


particular type of defect), to identify obvious defects. The reviewer also notes any questions for the author. Rules of thumb describe the optimum preparation rates for various types of deliverables.
i. Prepare For and Conduct Specification Inspection Meetings
ii. Prepare For and Conduct Detailed Design Inspection Meetings
iii. Prepare For and Conduct Code Inspection Meetings
iv. Document the Review
v. A template and sample illustrate one way to document the results of the review process.
vi. Inspection Materials Review Form Template
vii. Inspection Materials Review Form Sample
viii. Verify Review of Materials
ix. Prior to the inspection meeting, the chairperson ensures each participant is prepared by reviewing the completed Inspection Materials Review Form.


s. Collect Materials Review Metrics The chairperson collects metrics that are applicable to the materials review process, primarily the number of hours spent on the review process. This information is collected for each deliverable subset that is inspected. For example, the information can be collected easily from the Inspection Materials Review Form template using the number of work hours for each inspector. The purpose of the inspection meeting is to identify defects. Inspection meeting time is not spent on discussing correction. In general, causes are not discussed, though obvious causes can be noted. The chairperson is responsible for directing the inspection meeting, and keeps track of the time spent on actual logging activity and discussion activity. The optimum length of the inspection meeting is two hours. The ability to detect errors diminishes when meetings are longer. t. Assign Participant Roles The chairperson assigns specific roles and responsibilities, as described in Assign Inspection Roles. u. Assign Inspection Roles Managers should not participate in inspections, because staff often become defensive and view the process as a personal evaluation.


v. Inspection Meeting Process The inspection meeting process includes a number of steps, including: i. Summarize deliverable under inspection, ii. Review deliverable under inspection in detail, iii. Log defects, iv. Record inspection results. w. Summarize Deliverable Under Inspection The presenter provides a brief summary of the deliverable under inspection. x. Review Deliverable in Detail Each participant active in the actual inspection process refers to his or her Inspection Materials Review Form completed during the materials review process. As each section of the deliverable under inspection is examined, the participants: i. Ask their questions to the point of identifying the concern as a defect or resolved issue, ii. Report defects and discrepancies between the source and deliverable found during the materials review process, and confirm if there is a defect, iii. As a group identify any additional defects.


y. Log Defects All defects are recorded by the record keeper. A form helps with this process. The record keeper records the defect number, (a sequential number), location of the defect, (e.g., line number or page number), a description of the defect, the severity of the defect, (major or minor), and a classification for the defect, (e.g., missing information, incorrect information, or extra information). i. Inspection Action Log Template ii. Inspection Meeting Action Log Sample The record keeper quickly reads each item as it is logged, so everyone agrees on the clarity and accuracy of each entry. z. Record Inspection Metrics Following the inspection meeting, the chairperson records the metrics applicable to the inspection meeting. For example, the number of defects identified and the duration of the meeting are recorded. Ideally, this information is entered into a database that is used to track the inspection process. A template assists with the metric gathering process. i. Inspection Meeting Metrics Template ii.Inspection Meeting Metrics Sample aa. Tips and Hints The chairperson keeps the group focused on finding defects. If there is

controversy regarding whether an item is a defect, or on the level of severity, the


chairperson makes the decision and moves the discussion back to finding defects. To assist with this process, instruct inspectors on the way to summarize issues for logging purposes, such as use no more than "x" number of words. The person responsible for correction should sit next to the record keeper to see what the record keeper is writing. The corrector can help ensure the words will be understandable during the correction process. The person responsible for producing the deliverable subset, (e.g., the author of the document or program unit), corrects all of the defects identified during the inspection meeting, or documents the resolution if the deliverable is correct and other action is required. ab. Document the Action The corrector documents the date the action is taken. The corrector also documents the type of defect from a list of types identified by the organization. Defect types are identified based on the deliverable, (e.g., different types are identified for a document such as the Requirements Definition and a coded program unit.) A checklist illustrates types of code defects. ac. Code Defect Type Sample The corrector documents the action on the Inspection Action Log. i. Inspection Action Log Template ii. Inspection Corrective Action Log Sample
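The action log described in sections y and ab maps naturally onto a small record structure. The Python sketch below is purely illustrative; the class and field names are assumptions, not the module's Inspection Action Log template.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DefectEntry:
    number: int            # sequential defect number
    location: str          # e.g. page or line number
    description: str
    severity: str          # "major" or "minor"
    classification: str    # e.g. "missing", "incorrect", "extra"

@dataclass
class InspectionActionLog:
    deliverable: str
    defects: List[DefectEntry] = field(default_factory=list)

    def log(self, location, description, severity, classification):
        entry = DefectEntry(len(self.defects) + 1, location,
                            description, severity, classification)
        self.defects.append(entry)
        return entry

log = InspectionActionLog("Requirements Definition, section 3")
log.log("page 12", "acceptance criterion missing for report export", "major", "missing")
print(len(log.defects), log.defects[0].severity)   # 1 major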


ad. Inform Chairperson Once all of the defects are resolved, the corrector returns the complete Inspection Action Log to the chairperson. The corrector also reports the number of hours spent completing the corrections. The chairperson either arranges a reinspection, or reviews the deliverable to verify that the corrections are complete. The organization may establish a rule of thumb to trigger a reinspection. For example, deliverables with x number of defects are subject to reinspection. After the deliverable correction process is complete, either the chairperson, or the inspection group, reviews the deliverable again to verify the corrections are complete and accurate. During this process, only the changed areas are reviewed, though in context with any related unchanged areas. ae. Return for Correction If there are any new errors or incorrect actions, record the defects on the Inspection Action Log and identify the inspection as a reinspection. Return the deliverable to the corrector. Give the corrector a deadline for correcting the errors and recording correction of the defects on the Inspection Action Log. The chairperson ensures any required action is taken. i. Inspection Action Log Template ii. Reinspection Action Log Sample


af. Record Metrics Reinspection metrics are gathered using the Inspection Meeting Metrics form that was used to gather metrics from the inspection meeting. i. Inspection Meeting Metrics Template ii. Reinspection Metrics Sample ag. Release Deliverable Before releasing the deliverable subset as inspected, the chairperson ensures: i. All corrections are complete, ii. Any other actions required are complete, (e.g., a change request is submitted), iii. Inspections metrics are recorded.

ah. Supplier Performance

Supplier performance has never been more crucial to business success than it is today. Suppliers performing well can help the organization to be more efficient, produce higher quality products, reduce costs, and, most importantly, increase profits. On the other hand, suppliers performing badly can interrupt the organization's operations and cause the organization to fail in its commitment to provide high-quality products to customers. Also, effective supplier management is increasingly being seen as a competitive differentiator for businesses across many industries.


Organizations that adopt best-in-class supplier performance management practices are two to three times as likely to achieve supplier on-time delivery and first-time fill rates that are above the market average. But managing supplier performance can be challenging at the best of times. How can you ensure that your supplier delivers the right goods, at the right time and to the right quality? The first line of defense could be incoming product inspection. Pre-existing quality defects in delivered primary products will influence every stage of the manufacturing process and affect the overall quality of the finished product. The inspection of quality-relevant characteristics during the delivery process considerably reduces this risk. If the delivered product does not meet the required specifications, this immediately triggers a supplier complaint. Also, in order to ensure that your customers receive the required quality, the same methods and techniques used in the incoming goods inspection can be applied to the outgoing goods inspection process. By performing these inspections, only products that match the required customer specifications will be shipped. Simply put, inspection is the process of measuring, examining, testing, or otherwise comparing the unit of product with the established requirements.

ai. The inspection and measurement process helps your organization to: i. Improve incoming material quality; ii. Reduce lead times; iii. Help achieve uninterrupted supply; iv. Enhance pricing competitiveness;


v. Improve customer service; vi. Strengthen and protect your brand image and reputation; vii. Improve the performance of your suppliers; viii. Drive continued improvement of your quality systems; ix. Protect sales revenue by helping to prevent late shipments, poor quality, wasted materials, rework, etc. While measuring supplier goods and performance is very costly and time consuming, it certainly can be justified by the benefits of identifying and resolving inefficiencies and performance problems. With decreasing product life cycle and time-to-market, the challenge to deliver quality products on-time increases. If a product is found not to meet the appropriate quality specifications for the marketplace - either after or late in the production stage - the results can be loss of product and revenues, delayed shipment or wasted materials, and the potential risk of a product recall. The dynamic business environment today requires organizations to effectively use all available resources to remain competitive. The quality and cost of a product or service offered in the market is a function, not only of the capabilities of the organization, but also the supplier network providing inputs to the organization. Ensuring that suppliers maintain high quality programs requires tracking and management of supplier qualifications, audits, nonconformance, corrective actions, and other processes. As organizations continue to rely on outsourcing as a crucial element of their supply chain, they are also realizing that incorporating supplier performance and material inspection tools into their overall quality management systems is a requirement for reducing risk and decreasing costs.


Certainly, supplier performance and material inspection should be monitored on a continuous basis to ensure the success of your suppliers, your organization, and, most importantly, your customers. SE is a web-based tool designed for measuring supplier quality, delivery, and service performance, as well as incoming/outgoing goods, across your business enterprise. If your organization wants efficient, flexible, and state-of-the-art software for inspection planning, inspection data gathering, evaluation, and analysis, then SE is the tool of your choice. Whether for process-accompanying inspections or incoming and outgoing goods inspections, use SE as the foundation for supplier performance evaluation. From initial qualification to supplier corrective actions, SE provides a centralized tool for managing all supplier-related quality data, issues, and actions. SE enables your organization to reduce the cost of poor supplier quality and decrease the effort required to ensure your suppliers are meeting the standards that your organization has established. Most importantly, SE enables organizations to improve quality and reduce the costs and risks associated with a growing number of regulatory compliance and corporate governance processes, such as those related to the ISO 9000 Quality Guidelines, Sarbanes-Oxley, FDA 21 CFR Part 11 Electronic Record Keeping, Good Manufacturing Practices, OSHA Regulations, and others.



THE END... MSIS

