
Operations management

--- the planning, organizing, controlling, and directing of systems or processes that create goods and/or services. Operations management is an area of management concerned with overseeing, designing, and redesigning business operations in the production of goods and/or services. It involves the responsibility of ensuring that business operations are efficient in terms of using as few resources as needed, and effective in terms of meeting customer requirements. It is concerned with managing the process that converts inputs (in the forms of materials, labor, and energy) into outputs (in the form of goods and/or services).

The relationship of operations management to senior management in commercial contexts can be compared to the relationship of line officers to the highest-level senior officers in military science. The highest-level officers shape the strategy and revise it over time, while the line officers make tactical decisions in support of carrying out the strategy. In business as in military affairs, the boundaries between levels are not always distinct; tactical information dynamically informs strategy, and individual people often move between roles over time.

According to the U.S. Department of Education, operations management is the field concerned with managing and directing the physical and/or technical functions of a firm or organization, particularly those relating to development, production, and manufacturing. Operations management programs typically include instruction in principles of general management, manufacturing and production systems, plant management, equipment maintenance management, production control, industrial labor relations and skilled trades supervision, strategic manufacturing policy, systems analysis, productivity analysis and cost control, and materials planning. Management, including operations management, is like engineering in that it blends art with applied science. People skills, creativity, rational analysis, and knowledge of technology are all required for success.

Operations management focuses on carefully managing the processes that produce and distribute products and services. Major, overall activities often include product creation, development, production, and distribution. (These activities are also associated with product and service management.) Related activities include managing purchases, inventory control, quality control, storage, logistics, and evaluation of processes. A great deal of focus is on the efficiency and effectiveness of processes. Therefore, operations management often includes substantial measurement and analysis of internal processes. Ultimately, how operations management is carried out in an organization depends very much on the nature of the products or services the organization provides, for example, retail, manufacturing, or wholesale.

Operations management deals with the design and management of products, processes, services, and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want. The purview of OM ranges from strategic to tactical and operational levels. Representative strategic issues include determining the size and location of manufacturing plants, deciding the structure of service or telecommunications networks, and designing technology supply chains. Tactical issues include plant layout and structure, project management methods, and equipment selection and replacement.
Operational issues include production scheduling and control, inventory management, quality control and inspection, traffic and materials handling, and equipment maintenance policies. Operations management is the area concerned with the efficiency and effectiveness of the operation in support and development of the firm's strategic goals. Other areas of concern to operations management include the design and operation of systems to provide goods and services. To put it succinctly, operations management is the planning, scheduling, and control of the activities that transform inputs (raw materials and labor) into outputs (finished goods and services). A set of recognized and well-developed concepts, tools, and techniques belongs within the framework of operations management. While the term operations management conjures up views of manufacturing environments, many of these concepts have been applied in service settings, with some of them actually developed specifically for service organizations.

Operations management is also an academic field of study that focuses on the effective planning, scheduling, use, and control of a manufacturing or service firm and its operations. The field is a synthesis of concepts derived from design engineering, industrial engineering, management information systems, quality management, production management, inventory management, accounting, and other functions. The field of operations management has been gaining increased recognition over the last two decades. One major reason for this is public awareness of the success of Japanese manufacturers and the perception that the quality of many Japanese products is superior to that of American manufacturers. As a result, many businesses have come to realize that the operations function is just as

important to their firm as finance and marketing. In concert with this, firms now realize that in order to compete effectively in a global market they must have an operations strategy to support the mission of the firm and its overall corporate strategy. Another reason for greater awareness of operations management is the increased application of operations management concepts and techniques to service operations. Finally, operations management concepts are being applied to other functional areas such as marketing and human resources. The term marketing/operations interface is often used.

Origins

The origins of operations management can be traced back through cultural changes of the 18th, 19th, and 20th centuries, including the Industrial Revolution, the development of interchangeable manufacture, the Waltham-Lowell system, the American system of manufacturing, Fayolism, scientific management, the development of assembly line practice and mass production, industrial engineering, systems engineering, manufacturing engineering, operations research, the Toyota Production System, lean manufacturing, and Six Sigma. Combined, these ideas allow for the standardization of best practices balanced with room for further innovation through continuous improvement of production processes. Key features of these production systems are the departure from craft production toward a more thorough division of labor and the transfer of knowledge from the minds of skilled, experienced workers into the systems of equipment, documentation, and semiskilled workers, who on average have less tenure and less experience.

The disciplines of organizational studies, industrial and organizational psychology, program management, project management, and management information systems all ideally inform optimal operations management, although most people who work in the corporate world can observe empirically that the reality often falls far short of the ideal in ways that the market nevertheless rewards, based mostly on the fact that in markets, "good-enough-to-scrape-by" methods tend to defeat "proper" ones on cost. There is a strong tradition of recruiting operations managers simply by promoting the most effective workers, which does work, although its main systemic flaw is the Peter Principle. One of the reasons why competition doesn't kill businesses that operate this way is that few operate in any more ideal way. Typically the Peter Principle is so pervasive throughout an industry that similarly afflicted businesses face a field of competitors who are more or less equally hobbled by it (the "same circus, different clowns" problem).

History of Operations Management


Operations management is the act of controlling and directing the design, production and delivery of products. Although people have been producing and selling products since the very beginning of civilization, the implementation of operations management is a relatively new phenomenon. Operations management came to prominence in the 20th century, but its roots can be traced back to the 18th and 19th centuries. Operations management is the systemization of the process for developing goods. Operations management theories are commonly applied to modern manufacturing businesses, but this has not always been the case. Modern operations management has developed over the course of many years to become what it is today. Understanding this development can help managers to understand the purpose of operations management today.

Pre-Industrial Revolution

Adam Smith, a philosopher and the father of modern economic theory, was the first person to note the effects of operations management. He noted that factories were able to produce goods more efficiently through what he called the division of labor; that is, by dividing processes between workers rather than having individual workers build a product from start to finish.

Industrial Revolution

During the Industrial Revolution, factories became able to produce at an even greater rate through the use of steam power and technological innovations such as the spinning jenny. Operations management systems were implemented to further increase production. Most notable in American history was the Waltham-Lowell system. Prior to the introduction of this system, textiles were produced at home. Under this system, young women left their homes to live and work in factories according to a rigid system laid out to produce efficient work.

Post-Industrial Revolution

Following the Industrial Revolution, operations management continued to grow in importance. Frederick Winslow Taylor, an American industrialist, began treating production like a science in order to analyze processes and increase efficiency. During this period, Henry Ford made a remarkable advancement in operations management by introducing the assembly line as a method for building the Model T car.

Post-World War II

Technological developments during the Second World War created new possibilities for managers looking to improve their operations. Specifically, the development of computational technology allowed far greater amounts of data to be analyzed by firms. The abilities of computers have continued to increase exponentially, allowing for a high degree of data analysis and communication. Modern producers are now able to track their inventory from raw materials, through production, to delivery.

Modern Day

Quality management systems are popular in today's operations management. Quality management is a system for mapping, improving, and monitoring operations processes. A variety of quality management systems are in use among top firms, the most notable being the ISO systems and Six Sigma. These systems aim to increase the efficiency of business processes. Although operations management has typically dealt with the manufacturing process, the growth of the service industry has created a field of service operations management.

Modern Operations Management

Modern operations management has continued to focus on increasing efficiency. This has been done largely through systems of standardization, such as ISO 9000 or Six Sigma, which lay out specific ways of manufacturing efficiently. The concepts of operations management have spilled over beyond production and have been incorporated into other aspects of a business through strategic planning, which aims to systematize the way businesses make strategic decisions.

History of Operations Management

Pre-Industrial Revolution
Public works or projects for the government: Pyramids of Egypt, Great Wall of China, Aqueducts of Rome, etc.
Craft production

Scientific Management
Management pioneers: Frederick Taylor, Henry Gantt, Harrington Emerson, Henry Ford

Industrial Revolution
1770s in England
Replaced manpower with machine power
Invention of machines: the steam engine, standardization of gauges

Human Relations Movement
Emphasized the human factor in production
Emergence of motivational theories by Frederick Herzberg, Douglas McGregor, Abraham Maslow

Scientific Management
Focused on observation, measurement, analysis & improvement of work design
Replaced craft production with mass production
Low-skilled workers replaced highly skilled workers

Japanese Influences
Developed and refined existing management practices
Introduced the concepts of quality, continual improvement, and time-based management

Recent Trends
Internet & electronic business
Supply chain management: a supply chain is a sequence of activities and organizations involved in producing a good or a service

Importance of Operations Management
1. Operations activity is the core of all business organizations.
2. A large percentage of jobs are in the field of operations.
3. All activities in the other areas of business are interrelated with operations management.
4. Operations is responsible for a large portion of the company's assets.
5. It has a major impact on quality and is the face of the company to its customers.

A short history of Linear Programming:
1. In 1762, Lagrange solved tractable optimization problems with simple equality constraints.
2. In 1820, Gauss solved linear systems of equations by what is now called Gaussian elimination. In 1866, Wilhelm Jordan refined the method to finding least squared errors as a measure of goodness-of-fit; it is now referred to as the Gauss-Jordan method.
3. In 1945, the digital computer emerged.
4. In 1947, Dantzig invented the Simplex Method.
5. In 1968, Fiacco and McCormick introduced the Interior Point Method.
6. In 1984, Karmarkar applied the Interior Point Method to solve linear programs, adding his innovative analysis.

WHAT DO OPERATIONS MANAGERS DO?

At the strategic level (long term), operations managers are responsible for or associated with making decisions about product development (what shall we make?), process and layout decisions (how shall we make it?), site location (where will we make it?), and capacity (how much do we need?).

At the tactical level (intermediate term), operations management addresses the issues relevant to efficiently scheduling material and labor within the constraints of the firm's strategy and making aggregate planning decisions. Operations managers have a hand in deciding employee levels (how many workers do we need and when do we need them?), inventory levels (when should we have materials delivered and should we use a chase strategy or a level strategy?), and capacity (how many shifts do we need? Do we need to work overtime or subcontract some work?).

At the operational level, operations management is concerned with lower-level (daily/weekly/monthly) planning and control. Operations managers and their subordinates must make decisions regarding scheduling (what should we process and when should we process it?), sequencing (in what order should we process the orders?), loading (what order do we put on what machine?), and work assignments (to whom do we assign individual machines or processes?).

Today's operations manager must have knowledge of advanced operations technology and technical knowledge relevant to his/her industry, as well as interpersonal skills and knowledge of other functional areas within the firm. Operations managers must also have the ability to communicate effectively, to motivate other people, manage projects, and work on multidisciplinary teams. Sunil Chopra, William Lovejoy, and Candace Yano describe the scope of operations management as encompassing these multi-disciplinary areas:

Supply Chains - management of all aspects of providing goods to a consumer, from extraction of raw materials to end-of-life disposal.
Operations Management/Marketing Interface - determining what customers value prior to product development.
Operations Management/Finance Interface - capital equipment and inventories comprise a sizable portion of many firms' assets.
Service Operations - coping with inherent service characteristics such as simultaneous delivery/consumption, performance measurements, etc.
Operations Strategy - consistent and aligned with the firm's other functional strategies.
Process Design and Improvements - managing the innovation process.

Mark Davis, Nicolas Aquilano and Richard Chase (1999) have suggested that the major issues for operations management today are:
reducing the development and manufacturing time for new goods and services
achieving and sustaining high quality while controlling cost
integrating new technologies and control systems into existing processes
obtaining, training, and keeping qualified workers and managers
working effectively with other functions of the business to accomplish the goals of the firm
integrating production and service activities at multiple sites in decentralized organizations
working effectively with suppliers and being user-friendly for customers
working effectively with new partners formed by strategic alliances

As one can see, all these are critical issues to any firm. No longer is operations management considered subservient to marketing and finance; rather, it is a legitimate functional area within most organizations. Also, operations management can no longer focus on isolated tasks and processes but must be one of the architects of the firm's overall business model.

The Roles of Operations Management Operations managers have many roles and responsibilities. Some of these responsibilities may vary depending on the industry and size of the company the operations manager works for, but her general roles and responsibilities are the same -- financial management, risk management and short- and long-term goals.

Roles of the Operations Manager

The operations manager has financial management responsibilities. For example, he may manage the quarterly or annual budget, compare the company's actual spending with the budget and make any necessary changes, prepare yearly financial reports, take part in monitoring the company's cash flow, and ensure that requests from the accounting department are handled in a timely fashion, Supporting Advancement reports.

Forecasting is a large part of the operations manager's job. Forecasting involves making an educated guess without having all of the necessary information. The operations manager makes forecasts about matters such as how many products to order, scheduling, and production planning. Forecasts can be qualitative or quantitative. Qualitative forecasts involve making judgment calls where little information is given and the forecast decision is not necessarily a quantifiable answer. Quantitative forecasts involve not only making a decision -- "yes, order the inventory" -- but a quantifiable decision, such as how much inventory to order -- "order 1,000 units," according to "Manager's Guide to Operations Management" by John Kamauff.

Other Roles of the Operations Manager

The operations manager keeps in mind the short- and long-term goals of the company when she makes any decision. She holds regular meetings with company executives to discuss the company's short- and long-term goals. The operations manager also has organizational and risk management responsibilities. She assists departments, such as human resources and finance, and develops a system of communication within and between those departments. She also acts as a consultant to other members of the company, makes informed production decisions, and makes informed decisions regarding the quality and quantity of goods or services produced or rendered, according to Doc Stoc. The operations manager addresses the entry of new, competing companies into the market. She makes any necessary changes to the company's current methods to maintain the competitive edge, says John Kamauff. Her other risk management responsibilities include handling the company's legal issues and corresponding with legal counsel on legal issues; she also handles the company's insurance policies, Supporting Advancement reports.

Skills

The operations manager must be able to communicate effectively both orally and in writing. He must have excellent communication skills and be an attentive listener; he must be able to listen to others speak without interrupting, take time to understand the information he is receiving, and ask questions when something is unclear. The operations manager must have the ability to fit into his environment; he must be able to adjust, as the environment is not going to adjust for him. When solving problems, he must use logic and reason and assess the strengths, weaknesses, costs, and benefits of all potential solutions. The operations manager must be able to critique himself and others; he must be able to make impartial performance reviews of both himself and others. He must also be an excellent leader and be able to delegate responsibility to the appropriate people, according to the Career Guide for General and Operations Managers.

Objectives

Operations management focuses on the efficient and effective transformation of resource inputs, such as labor and materials, into useful outputs, such as quality products and services. As a result of increased global competitiveness, the practice of operations management, especially in the manufacturing sector, is evolving from a traditional approach of meeting production goals within budget into a more strategic function in which the design and control of operations are an integral part of the strategic mission of the organization. The purpose of this course is to introduce problems and analysis related to the design, planning, implementation, control, and improvement of manufacturing and service operations. In particular, this course looks at operations management from an integrated viewpoint. The course materials integrate marketing strategy, technology, information systems, and organizational issues.

Engineering Management or Management Engineering


- is a specialized form of management and engineering that is concerned with the application of engineering principles to business practice. Engineering management is a career that brings together the technological problem-solving savvy of engineering and the organizational, administrative, and planning abilities of management in order to oversee complex enterprises from conception to completion. Example areas of engineering are product development, manufacturing, construction, design engineering, industrial engineering, technology, production, or any other field that employs personnel who perform an engineering function.

Successful engineering managers typically require training and experience in business and engineering. Technically inept managers tend to be deprived of support by their technical team, and non-commercial managers tend to lack the commercial acumen to deliver in a market economy. Largely, engineering managers manage engineers who are driven by non-entrepreneurial thinking, and thus require the necessary people skills to coach, mentor, and motivate technical professionals. Engineering professionals joining manufacturing companies sometimes become engineering managers by default after a period of time. They are required to learn how to manage once they are on the job, though this is usually an ineffective way to develop managerial abilities.

Engineering management is a specialized form of management that is required to successfully lead engineering or technical personnel and projects. The term can be used to describe either functional management or project management. Engineering managers typically require training and experience in both general management and the specific engineering disciplines that will be used by the engineering team to be managed. The successful engineering manager must have the skills necessary to coach, mentor, and motivate technical professionals, which are often very different from those required for individuals in other fields.

The Management Engineering Program builds on the liberal arts tradition at the Ateneo and combines courses in management with extensive training in the scientific approach to problem solving and decision-making. In addition to the core curriculum and core management subjects (marketing, operations, accounting and finance, economics, organizational behavior, business strategy), the ME curriculum includes courses on systems methodologies, management science (mathematics, statistics, and operations research), and information technology applications. The curriculum seeks to produce graduates who have a holistic world view, are capable of integrating both qualitative and quantitative information in making decisions, and are prepared to assume leadership roles in their place of work.

History

Departments

The first university department titled "Engineering Management" was founded at the Missouri University of Science and Technology (Missouri S&T, formerly the University of Missouri-Rolla) in 1967. Stevens Institute of Technology is believed to have the oldest EM department, established as the School of Business Engineering in 1908. This was later called the Bachelor of Engineering in Engineering Management (BEEM) program and moved into the School of Systems and Enterprises. Outside the USA, Istanbul Technical University has a Management Engineering Department established in 1982, offering a number of graduate and undergraduate programs in Management Engineering.

Education

Engineering management programs typically include instruction in accounting, economics, finance, project management, systems engineering, mathematical modeling and optimization, management information systems, quality control & six sigma, operations research, human resources management, industrial psychology, and safety and health. There are many options for entering into engineering management, although the foundation requirement is an engineering degree (or another computer science, mathematics, or science degree) and a business degree.

Linear programming
Linear programming (LP, or linear optimization) is a mathematical method for determining a way to achieve the best outcome (such as maximum profit or lowest cost) in a given mathematical model for some list of requirements represented as linear relationships. Linear programming is a specific case of mathematical programming (mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polyhedron, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the smallest (or largest) value, if such a point exists. Linear programs are problems that can be expressed in canonical form:

maximize c^T x
subject to Ax <= b
and x >= 0

where x represents the vector of variables (to be determined), c and b are vectors of (known) coefficients and A is a (known) matrix of coefficients. The expression to be maximized or minimized is called the objective function (c^T x in this case). The inequalities Ax <= b are the constraints which specify a convex polytope over which the objective function is to be optimized. (In this context, two vectors are comparable when every entry in one is less-than or equal-to the corresponding entry in the other. Otherwise, they are incomparable.)

Linear programming can be applied to various fields of study. It is used most extensively in business and economics, but can also be utilized for some engineering problems. Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proved useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design.
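To make the canonical form concrete, here is a minimal sketch in Python using SciPy's linprog routine (the solver choice and the numbers are illustrative assumptions, not part of the text). Because linprog minimizes by convention, the maximization objective c^T x is passed as -c and the sign of the optimum is flipped back at the end.

import numpy as np
from scipy.optimize import linprog

# Maximize c^T x subject to A x <= b, x >= 0 (canonical form).
c = np.array([5.0, 4.0])           # objective coefficients
A = np.array([[6.0, 4.0],          # constraint matrix
              [1.0, 2.0]])
b = np.array([24.0, 6.0])          # right-hand-side limits

# linprog minimizes, so negate the objective; bounds keep x >= 0.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])

print("optimal x:", res.x)            # approximately [3.0, 1.5]
print("maximum value:", -res.fun)     # approximately 21.0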

History

The problem of solving a system of linear inequalities dates back at least as far as Fourier, after whom the method of Fourier-Motzkin elimination is named. Linear programming itself was first developed by Leonid Kantorovich, a Russian mathematician, in 1939. It was used during World War II to plan expenditures and returns in order to reduce costs to the army and increase losses to the enemy. The method was kept secret until 1947, when George B. Dantzig published the simplex method and John von Neumann developed the theory of duality. Postwar, many industries found its use in their daily planning.

The linear-programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984 when Narendra Karmarkar introduced a new interior-point method for solving linear-programming problems.

Dantzig's original example of finding the best assignment of 70 people to 70 jobs exemplifies the usefulness of linear programming. The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the Simplex algorithm. The theory behind linear programming drastically reduces the number of possible optimal solutions that must be checked.

Uses

Linear programming is a considerable field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have generated much research on specialized algorithms for their solution. A number of algorithms for other types of optimization problems work by solving LP problems as sub-problems. Historically, ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality, decomposition, and the importance of convexity and its generalizations. Likewise, linear programming is heavily used in microeconomics and company management, for issues such as planning, production, transportation, and technology. Although modern management issues are ever-changing, most companies would like to maximize profits or minimize costs with limited resources. Therefore, many issues can be characterized as linear programming problems.

Standard form

Standard form is the usual and most intuitive form of describing a linear programming problem. It consists of the following four parts:

A linear function to be maximized, e.g. f(x1, x2) = c1x1 + c2x2

Problem constraints of the following form, e.g. a11x1 + a12x2 <= b1, a21x1 + a22x2 <= b2, a31x1 + a32x2 <= b3

Non-negative variables, e.g. x1 >= 0, x2 >= 0

Non-negative right-hand side constants, e.g. b1, b2, b3 >= 0

The problem is usually expressed in matrix form, and then becomes:

maximize { c^T x | Ax <= b, x >= 0 }

Other forms, such as minimization problems, problems with constraints on alternative forms, as well as problems involving negative variables, can always be rewritten into an equivalent problem in standard form.

Augmented form (slack form)

Linear programming problems must be converted into augmented form before being solved by the simplex algorithm. This form introduces non-negative slack variables to replace inequalities with equalities in the constraints. The problem can then be written in the following block matrix form:

Maximize Z:

  [ 1  -c^T  0 ] [ Z  ]   [ 0 ]
  [ 0   A    I ] [ x  ] = [ b ]
                 [ xs ]

  x >= 0, xs >= 0

where xs are the newly introduced slack variables, and Z is the variable to be maximized.
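To see the slack-form construction in numbers, the short sketch below (my own illustration with made-up data) appends an identity block to A so that the inequality constraints Ax <= b become the equalities [A | I][x; xs] = b with x, xs >= 0.

import numpy as np

A = np.array([[6.0, 4.0],
              [1.0, 2.0]])
b = np.array([24.0, 6.0])
m, n = A.shape                       # m constraints, n original variables

# Augmented (slack) form: [A | I] [x; xs] = b, with x >= 0 and xs >= 0.
A_aug = np.hstack([A, np.eye(m)])
print(A_aug)

# Any feasible x determines its slack values xs = b - A x >= 0.
x = np.array([2.0, 1.0])
xs = b - A @ x
print("slack variables:", xs)                              # [8. 2.]
print(np.allclose(A_aug @ np.concatenate([x, xs]), b))     # True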

Duality

Every linear programming problem, referred to as a primal problem, can be converted into a dual problem, which provides an upper bound to the optimal value of the primal problem. In matrix form, we can express the primal problem as:

Maximize c^T x subject to Ax <= b, x >= 0;
with the corresponding symmetric dual problem,
Minimize b^T y subject to A^T y >= c, y >= 0.

An alternative primal formulation is:

Maximize c^T x subject to Ax <= b;
with the corresponding asymmetric dual problem,
Minimize b^T y subject to A^T y = c, y >= 0.
There are two ideas fundamental to duality theory. One is the fact that (for the symmetric dual) the dual of a dual linear program is the original primal linear program. Additionally, every feasible solution for a linear program gives a bound on the optimal value of the objective function of its dual. The weak duality theorem states that the objective function value of the dual at any feasible solution is always greater than or equal to the objective function value of the primal at any feasible solution. The strong duality theorem states that if the primal has an optimal solution, x*, then the dual also has an optimal solution, y*, such that c^T x* = b^T y*. A linear program can also be unbounded or infeasible. Duality theory tells us that if the primal is unbounded then the dual is infeasible by the weak duality theorem. Likewise, if the dual is unbounded, then the primal must be infeasible. However, it is possible for both the dual and the primal to be infeasible (see also Farkas' lemma).

Covering-packing dualities
Covering problems: minimum set cover, minimum vertex cover, minimum edge cover.
Packing problems: maximum set packing, maximum matching, maximum independent set.

A covering LP is a linear program of the form:
Minimize: b^T y,
Subject to: A^T y >= c, y >= 0,
such that the matrix A and the vectors b and c are nonnegative.

The dual of a covering LP is a packing LP, a linear program of the form:
Maximize: c^T x,
Subject to: Ax <= b, x >= 0,
such that the matrix A and the vectors b and c are nonnegative.
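The weak and strong duality statements above can be checked numerically. The sketch below (illustrative data, with SciPy assumed as the solver) solves a small primal problem and its symmetric dual and confirms that their optimal objective values coincide; it also evaluates the complementary slackness products discussed in the next passage, which should all be (numerically) zero at optimality.

import numpy as np
from scipy.optimize import linprog

c = np.array([5.0, 4.0])
A = np.array([[6.0, 4.0],
              [1.0, 2.0]])
b = np.array([24.0, 6.0])

# Primal: maximize c^T x s.t. A x <= b, x >= 0 (negate c because linprog minimizes).
primal = linprog(-c, A_ub=A, b_ub=b)

# Symmetric dual: minimize b^T y s.t. A^T y >= c, y >= 0,
# rewritten as -A^T y <= -c for linprog's A_ub form.
dual = linprog(b, A_ub=-A.T, b_ub=-c)

x, y = primal.x, dual.x
print("primal optimum c^T x* =", c @ x)   # 21.0 for this data
print("dual optimum   b^T y* =", b @ y)   # equal, by strong duality

# Complementary slackness: primal slacks pair with dual variables, and vice versa.
w = b - A @ x          # primal slack variables
z = A.T @ y - c        # dual slack variables
print("w_i * y_i:", w * y)   # ~0 componentwise
print("x_j * z_j:", x * z)   # ~0 componentwise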

Complementary slackness

It is possible to obtain an optimal solution to the dual when only an optimal solution to the primal is known, using the complementary slackness theorem. The theorem states: Suppose that x = (x1, x2, ... , xn) is primal feasible and that y = (y1, y2, ... , ym) is dual feasible. Let (w1, w2, ..., wm) denote the corresponding primal slack variables, and let (z1, z2, ... , zn) denote the corresponding dual slack variables. Then x and y are optimal for their respective problems if and only if xjzj = 0, for j = 1, 2, ... , n, and wiyi = 0, for i = 1, 2, ... , m.

So if the i-th slack variable of the primal is not zero, then the i-th variable of the dual is equal to zero. Likewise, if the j-th slack variable of the dual is not zero, then the j-th variable of the primal is equal to zero. This necessary condition for optimality conveys a fairly simple economic principle. In standard form (when maximizing), if there is slack in a constrained primal resource (i.e., there are "leftovers"), then additional quantities of that resource must have no value. Likewise, if there is slack in the dual (shadow) price non-negativity constraint requirement, i.e., the price is not zero, then there must be scarce supplies (no "leftovers").

Theory

Existence of optimal solutions

Geometrically, the linear constraints define the feasible region, which is a convex polyhedron. A linear function is a convex function, which implies that every local minimum is a global minimum; similarly, a linear function is a concave function, which implies that every local maximum is a global maximum. An optimal solution need not exist, for two reasons. First, if two constraints are inconsistent, then no feasible solution exists: For instance, the constraints x >= 2 and x <= 1 cannot be satisfied jointly; in this case, we say that the LP is infeasible. Second, when the polytope is unbounded in the direction of the gradient of the objective function (where the gradient of the objective function is the vector of the coefficients of the objective function), then no optimal value is attained.

Optimal vertices (and rays) of polyhedra

Otherwise, if a feasible solution exists and if the (linear) objective function is bounded, then the optimum value is always attained on the boundary of the optimal level-set, by the maximum principle for convex functions (alternatively, by the minimum principle for concave functions): Recall that linear functions are both convex and concave. However, some problems have distinct optimal solutions: For example, the problem of finding a feasible solution to a system of linear inequalities is a linear programming problem in which the objective function is the zero function (that is, the constant function taking the value zero everywhere): For this feasibility problem with the zero-function for its objective-function, if there are two distinct solutions, then every convex combination of the solutions is a solution.

The vertices of the polytope are also called basic feasible solutions. The reason for this choice of name is as follows. Let d denote the number of variables. Then the fundamental theorem of linear inequalities implies (for feasible problems) that for every vertex x* of the LP feasible region, there exists a set of d (or fewer) inequality constraints from the LP such that, when we treat those d constraints as equalities, the unique solution is x*. Thereby we can study these vertices by means of looking at certain subsets of the set of all constraints (a discrete set), rather than the continuum of LP solutions. This principle underlies the simplex algorithm for solving linear programs.

Algorithms

(Figure: A series of linear constraints on two variables produces a region of possible values for those variables. Solvable problems will have a feasible region in the shape of a simple polygon.)

Basis exchange algorithms

Simplex algorithm of Dantzig

The simplex algorithm, developed by George Dantzig in 1947, solves LP problems by constructing a feasible solution at a vertex of the polytope and then walking along a path on the edges of the polytope to vertices with non-decreasing values of the objective function until an optimum is reached. In many practical problems, "stalling" occurs: many pivots are made with no increase in the objective function. In rare practical problems, the usual versions of the simplex algorithm may actually "cycle". To avoid cycles, researchers developed new pivoting rules.

In practice, the simplex algorithm is quite efficient and can be guaranteed to find the global optimum if certain precautions against cycling are taken. The simplex algorithm has been proved to solve "random" problems efficiently, i.e. in a cubic number of steps, which is similar to its behavior on practical problems. However, the simplex algorithm has poor worst-case behavior: Klee and Minty constructed a family of linear programming problems for which the simplex method takes a number of steps exponential in the problem size. In fact, for some time it was not known whether the linear programming problem was solvable in polynomial time (complexity class P).

Criss-cross algorithm

Like the simplex algorithm of Dantzig, the criss-cross algorithm is a basis-exchange algorithm that pivots between bases. However, the criss-cross algorithm need not maintain feasibility, but can pivot rather from a feasible basis to an infeasible basis. The criss-cross algorithm does not have polynomial time-complexity for linear programming. Both algorithms visit all 2^D corners of a (perturbed) cube in dimension D, the Klee-Minty cube (after Victor Klee and George J. Minty), in the worst case.

Interior point

Ellipsoid algorithm, following Khachiyan

This is the first worst-case polynomial-time algorithm for linear programming. To solve a problem which has n variables and can be encoded in L input bits, this algorithm uses O(n^4 L) pseudo-arithmetic operations on numbers with O(L) digits. The long-standing issue noted above was resolved by Leonid Khachiyan in 1979 with the introduction of the ellipsoid method. Khachiyan's algorithm and its convergence analysis have (real-number) predecessors, notably the iterative methods developed by Naum Z. Shor and the approximation algorithms by Arkadi Nemirovski and D. Yudin.

Projective algorithm of Karmarkar

Khachiyan's algorithm was of landmark importance for establishing the polynomial-time solvability of linear programs. The algorithm was not a computational break-through, as the simplex method is more efficient for all but specially constructed families of linear programs. However, Khachiyan's algorithm inspired new lines of research in linear programming. In 1984, N. Karmarkar proposed a projective method for linear programming. Karmarkar's algorithm improved on Khachiyan's worst-case polynomial bound (giving O(n^3.5 L)). Karmarkar claimed that his algorithm was much faster in practical LP than the simplex method, a claim that created great interest in interior-point methods. Its projective geometry is interesting.

Path-following algorithms

In contrast to the simplex algorithm, which finds an optimal solution by traversing the edges between vertices on a polyhedral set, interior-point methods move through the interior of the feasible region. Since then, many interior-point methods have been proposed and analyzed. Early successful implementations were based on affine scaling variants of the method. For both theoretical and practical purposes, barrier function or path-following methods have been the most popular since the 1990s.

Comparison of interior-point methods versus simplex algorithms

The current opinion is that the efficiency of good implementations of simplex-based methods and interior point methods is similar for routine applications of linear programming. However, for specific types of LP problems, it may be that one type of solver is better than another (sometimes much better).
LP solvers are in widespread use for optimization of various problems in industry, such as optimization of flow in transportation networks.
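As a small, hands-on version of the comparison above, SciPy's HiGHS backend exposes both a dual-simplex solver and an interior-point solver; the sketch below (random data, with solver names and problem sizes chosen by me for illustration) runs the same LP through each and should report the same optimal value from both.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# A random bounded, feasible LP: minimize c^T x s.t. A x <= b, 0 <= x <= 1.
n, m = 50, 80
c = rng.normal(size=n)
A = rng.normal(size=(m, n))
b = A @ rng.uniform(size=n) + 1.0    # built from a feasible point, so b is attainable

for method in ("highs-ds", "highs-ipm"):        # dual simplex vs interior point
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * n, method=method)
    print(f"{method:9s} status={res.status} optimum={res.fun:.6f}")
# Both runs should succeed (status 0) and agree on the optimum to solver tolerance.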

Open problems and recent work

There are several open problems in the theory of linear programming, the solution of which would represent fundamental breakthroughs in mathematics and potentially major advances in our ability to solve large-scale linear programs.

Does LP admit a strongly polynomial-time algorithm?
Does LP admit a strongly polynomial algorithm to find a strictly complementary solution?
Does LP admit a polynomial algorithm in the real number (unit cost) model of computation?

This closely related set of problems has been cited by Stephen Smale as among the 18 greatest unsolved problems of the 21st century. In Smale's words, the third version of the problem "is the main unsolved problem of linear programming theory." While algorithms exist to solve linear programming in weakly polynomial time, such as the ellipsoid methods and interior-point techniques, no algorithms have yet been found that allow strongly polynomial-time performance in the number of constraints and the number of variables. The development of such algorithms would be of great theoretical interest, and perhaps allow practical gains in solving large LPs as well. Although the Hirsch conjecture was recently disproved for higher dimensions, it still leaves the following questions open.

Are there pivot rules which lead to polynomial-time Simplex variants?
Do all polytopal graphs have polynomially-bounded diameter?

These questions relate to the performance analysis and development of Simplex-like methods. The immense efficiency of the Simplex algorithm in practice despite its exponential-time theoretical performance hints that there may be variations of Simplex that run in polynomial or even strongly polynomial time. It would be of great practical and theoretical significance to know whether any such variants exist, particularly as an approach to deciding if LP can be solved in strongly polynomial time.

The Simplex algorithm and its variants fall in the family of edge-following algorithms, so named because they solve linear programming problems by moving from vertex to vertex along edges of a polytope. This means that their theoretical performance is limited by the maximum number of edges between any two vertices on the LP polytope. As a result, we are interested in knowing the maximum graph-theoretical diameter of polytopal graphs. It has been proved that all polytopes have subexponential diameter. The recent disproof of the Hirsch conjecture is the first step toward determining whether any polytope has superpolynomial diameter. If any such polytopes exist, then no edge-following variant can run in polynomial time. Questions about polytope diameter are of independent mathematical interest.

Simplex pivot methods preserve primal (or dual) feasibility. On the other hand, criss-cross pivot methods do not preserve (primal or dual) feasibility - they may visit primal feasible, dual feasible or primal-and-dual infeasible bases in any order. Pivot methods of this type have been studied since the 1970s. Essentially, these methods attempt to find the shortest pivot path on the arrangement polytope under the linear programming problem. In contrast to polytopal graphs, graphs of arrangement polytopes are known to have small diameter, allowing the possibility of a strongly polynomial-time criss-cross pivot algorithm without resolving questions about the diameter of general polytopes.

Integer unknowns

If the unknown variables are all required to be integers, then the problem is called an integer programming (IP) or integer linear programming (ILP) problem. In contrast to linear programming, which can be solved efficiently in the worst case, integer programming problems are in many practical situations (those with bounded variables) NP-hard. 0-1 integer programming or binary integer programming (BIP) is the special case of integer programming where variables are required to be 0 or 1 (rather than

arbitrary integers). This problem is also classified as NP-hard, and in fact the decision version was one of Karp's 21 NP-complete problems. If only some of the unknown variables are required to be integers, then the problem is called a mixed integer programming (MIP) problem. These are generally also NP-hard. There are however some important subclasses of IP and MIP problems that are efficiently solvable, most notably problems where the constraint matrix is totally unimodular and the right-hand sides of the constraints are integers.

Advanced algorithms for solving integer linear programs include:
the cutting-plane method
branch and bound
branch and cut
branch and price
if the problem has some extra structure, it may be possible to apply delayed column generation.
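For a concrete, tiny instance of 0-1 integer programming, the sketch below uses SciPy's milp interface (available in SciPy 1.9 and later; the knapsack-style data are invented for illustration) to choose items maximizing value under a weight limit.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# 0-1 knapsack as a binary integer program:
# maximize values^T x subject to weights^T x <= capacity, x_j in {0, 1}.
values = np.array([10.0, 13.0, 7.0, 8.0])
weights = np.array([[5.0, 7.0, 4.0, 3.0]])
capacity = 10.0

res = milp(
    c=-values,                              # milp minimizes, so negate the values
    constraints=LinearConstraint(weights, ub=capacity),
    integrality=np.ones(len(values)),       # 1 marks each variable as integer
    bounds=Bounds(0, 1),                    # with integrality, this makes them binary
)

print("chosen items:", res.x)      # [0. 1. 0. 1.] for this data
print("total value :", -res.fun)   # 21.0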

Such integer-programming algorithms are discussed by Padberg and in Beasley.

Integral linear programs

A linear program in real variables is said to be integral if it has at least one optimal solution which is integral. Likewise, a polyhedron is said to be integral if for all bounded feasible objective functions c, the linear program has an optimum x* with integer coordinates. As observed by Edmonds and Giles in 1977, one can equivalently say that a polyhedron is integral if for every bounded feasible integral objective function c, the optimal value of the linear program is an integer.

Integral linear programs are of central importance in the polyhedral aspect of combinatorial optimization since they provide an alternate characterization of a problem. Specifically, for any problem, the convex hull of the solutions is an integral polyhedron; if this polyhedron has a nice/compact description, then we can efficiently find the optimal feasible solution under any linear objective. Conversely, if we can prove that a linear programming relaxation is integral, then it is the desired description of the convex hull of feasible (integral) solutions.

Note that terminology is not consistent throughout the literature, so one should be careful to distinguish the following two concepts:
in an integer linear program, described in the previous section, variables are forcibly constrained to be integers, and this problem is NP-hard in general;
in an integral linear program, described in this section, variables are not constrained to be integers but rather one has proven somehow that the continuous problem always has an integral optimal value (assuming c is integral), and this optimal value may be found efficiently since all polynomial-size linear programs can be solved in polynomial time.

One common way of proving that a polyhedron is integral is to show that it is totally unimodular. There are other general methods including the integer decomposition property and total dual integrality. Other specific well-known integral LPs include the matching polytope, lattice polyhedra, submodular flow polyhedra, and the intersection of 2 generalized polymatroids/g-polymatroids --- e.g. see Schrijver 2003. A bounded integral polyhedron is sometimes called a convex lattice polytope, particularly in two dimensions.

Application of linear programming

Linear programming deals with the optimization (maximization or minimization) of linear functions subject to linear constraints. This technique has found applications in important areas such as product mix, blending problems, and diet problems. Oil refineries, chemical industries, steel industries, and the food processing industry are also using linear programming with considerable success. Linear programming problems involving only two variables can be effectively solved by a graphical technique which provides a pictorial representation of the solution.

Step 1: Formulate the given problem as a linear programming problem.
Step 2: Plot the given constraints as equalities on the x1-x2 coordinate plane and determine the convex region formed by them.
Step 3: Determine the vertices of the convex region and find the value of the objective function at each vertex. The vertex which gives the optimal value of the objective function gives the desired optimal solution to the problem.
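The corner-point procedure in Steps 2 and 3 can be mechanized for two variables: intersect each pair of constraint boundary lines, keep the intersections that satisfy every constraint, and evaluate the objective at each surviving vertex. The following is a hand-rolled sketch of that idea (illustrative data, not an industrial solver).

import itertools
import numpy as np

# All constraints written as a_i . x <= b_i, including the non-negativity
# conditions x1 >= 0 and x2 >= 0 rewritten as -x1 <= 0 and -x2 <= 0.
A = np.array([[1.0, 1.0],     # x1 + x2 <= 4
              [1.0, 3.0],     # x1 + 3x2 <= 6
              [-1.0, 0.0],    # x1 >= 0
              [0.0, -1.0]])   # x2 >= 0
b = np.array([4.0, 6.0, 0.0, 0.0])
c = np.array([3.0, 2.0])      # maximize 3x1 + 2x2

vertices = []
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:       # parallel boundary lines: no vertex
        continue
    x = np.linalg.solve(M, b[[i, j]])      # intersection of the two boundary lines
    if np.all(A @ x <= b + 1e-9):          # keep only feasible corner points
        vertices.append(x)

best = max(vertices, key=lambda v: c @ v)
print("optimal corner:", best, "objective value:", c @ best)   # [4. 0.], 12.0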

General Linear Programming Problem

Any linear programming problem involving more than two variables may be expressed as follows: find the values of the variables x1, x2, ..., xn which maximize (or minimize) the objective function
Z = c1x1 + c2x2 + ... + cnxn
subject to the constraints
a11x1 + a12x2 + ... + a1nxn <= b1
a21x1 + a22x2 + ... + a2nxn <= b2
.............................................
am1x1 + am2x2 + ... + amnxn <= bm
and meet the non-negativity restrictions x1, x2, ..., xn >= 0.

1. A set of values x1, x2, ..., xn which satisfies the constraints of the linear programming problem is called its solution.
2. Any solution to a linear programming problem which satisfies the non-negativity restrictions of the problem is called its feasible solution.
3. Any feasible solution which maximizes (or minimizes) the objective function of the linear programming problem is called its optimal solution.

Forms of linear programming problem

There are two forms of linear programming problem. They are:

Canonical form: The general linear programming problem can be expressed as
Maximize z = c1x1 + c2x2 + ... + cnxn
subject to the constraints
ai1x1 + ai2x2 + ... + ainxn <= bi (i = 1, 2, ..., m); x1, x2, ..., xn >= 0.
This form is called the canonical form and has the following characteristics:
1. the objective function is of maximization type
2. all constraints are of (<=) type
3. all variables xi are non-negative

Standard form: The standard form has the following characteristics:
1. the objective function is of maximization type
2. all constraints are expressed as equations
3. the right hand side of each constraint is non-negative
4. all variables are non-negative


Working rules for solving a linear programming problem (LPP)

Step 1. Identify the unknowns in the given LPP. Denote them by x and y.
Step 2. Formulate the objective function in terms of x and y. Be sure whether it is to be maximized or minimized.
Step 3. Translate all the constraints into the form of linear inequations.
Step 4. Solve these inequations simultaneously. Mark the common area by a shaded region. This is the feasible region.
Step 5. Find the coordinates of all the vertices of the feasible region.
Step 6. Find the value of the objective function at each vertex of the feasible region.
Step 7. Find the values of x and y for which the objective function z = ax + by has the maximum or minimum value (as the case may be).
EXAMPLES

PROBLEM:

Solve the following linear program:

maximise 5x1 + 6x2
subject to
x1 + x2 <= 10
x1 - x2 >= 3
5x1 + 4x2 <= 35
x1 >= 0
x2 >= 0
SOLUTION:

It is plain that the maximum occurs at the intersection of
5x1 + 4x2 = 35 and x1 - x2 = 3
Solving simultaneously, rather than by reading values off the graph, we have that
5(3 + x2) + 4x2 = 35
i.e. 15 + 9x2 = 35
i.e. x2 = (20/9) = 2.222 and x1 = 3 + x2 = (47/9) = 5.222
The maximum value is 5(47/9) + 6(20/9) = (355/9) = 39.444
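The hand calculation above can be cross-checked with a solver. A quick sketch (SciPy assumed; note that x1 - x2 >= 3 is rewritten as -x1 + x2 <= -3 to fit linprog's A_ub x <= b_ub form, and the objective is negated because linprog minimizes):

from scipy.optimize import linprog

res = linprog(c=[-5, -6],
              A_ub=[[1, 1], [-1, 1], [5, 4]],
              b_ub=[10, -3, 35],
              bounds=[(0, None), (0, None)])

print(res.x)       # approximately [5.222, 2.222]
print(-res.fun)    # approximately 39.444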

PROBLEM2: A carpenter makes tables and chairs. Each table can be sold for a profit of 30 and each chair for a profit of 10. The carpenter can afford to spend up to 40 hours per week working and takes six hours to make a table and three hours to make a chair. Customer demand requires that he makes at least three times as many chairs as tables. Tables take up four times as much storage space as chairs and there is room for at most four tables each week.

Formulate this problem as a linear programming problem.


SOLUTION:

Variables
Let
xT = number of tables made per week
xC = number of chairs made per week

Constraints
total work time: 6xT + 3xC <= 40
customer demand: xC >= 3xT
storage space: (xC/4) + xT <= 4

all variables >= 0

Objective
maximise 30xT + 10xC

From the graphical representation of the problem we have that the solution lies at the intersection of
(xC/4) + xT = 4 and 6xT + 3xC = 40
Solving these two equations simultaneously we get xC = 10.667, xT = 1.333 and the profit = 146.667
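This solution, too, can be verified with a solver; in the sketch below (SciPy assumed) the demand constraint xC >= 3xT is rewritten as 3xT - xC <= 0 and the profit is negated because linprog minimizes.

from scipy.optimize import linprog

# Variables ordered as (xT, xC); maximise 30xT + 10xC.
res = linprog(c=[-30, -10],
              A_ub=[[6, 3],       # 6xT + 3xC <= 40  (work time)
                    [3, -1],      # xC >= 3xT  ->  3xT - xC <= 0
                    [1, 0.25]],   # xT + xC/4 <= 4   (storage)
              b_ub=[40, 0, 4],
              bounds=[(0, None), (0, None)])

print(res.x)      # approximately [1.333, 10.667]
print(-res.fun)   # approximately 146.667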

Graphical method of solving linear programming

Draw the graph of the constraints. Determine the region which satisfies all the constraints and the non-negativity constraints (x >= 0, y >= 0). This region is called the feasible region. Determine the co-ordinates of the corners of the feasible region.

Calculate the values of the objective function at each corner. Select the corner point which gives the optimum (maximum or minimum) value of the objective function. The coordinates of that point determine the optimal solution.

Problem Formulation

With computers able to solve linear programming problems with ease, the challenge is in problem formulation - translating the problem statement into a system of linear equations to be solved by computer. The information required to write the objective function is derived from the problem statement. The problem is formulated from the problem statement as follows:
1. Identify the objective of the problem; that is, which quantity is to be optimized. For example, one may seek to maximize profit.
2. Identify the decision variables and the constraints on them. For example, production quantities and production limits may serve as decision variables and constraints.
3. Write the objective function and constraints in terms of the decision variables, using information from the problem statement to determine the proper coefficient for each term. Discard any unnecessary information.
4. Add any implicit constraints, such as non-negative restrictions.
5. Arrange the system of equations in a consistent form suitable for solving by computer. For example, place all variables on the left side of their equations and list them in the order of their subscripts.

The following guidelines help to reduce the risk of errors in problem formulation:
Be sure to consider any initial conditions.
Make sure that each variable in the objective function appears at least once in the constraints.
Consider constraints that might not be specified explicitly. For example, if there are physical quantities that must be non-negative, then these constraints must be included in the formulation.

Applications of Linear Programming

Linear programming is used to solve problems in many aspects of business administration including:
product mix planning
distribution networks
truck routing
staff scheduling
financial portfolios
corporate restructuring

Linear Programming

Operations management often presents complex problems that can be modeled by linear functions. The mathematical technique of linear programming is instrumental in solving a wide range of operations management problems.

Linear Program Structure

Linear programming models consist of an objective function and the constraints on that function. A linear programming model takes the following form:

Objective function:
Z = a1X1 + a2X2 + a3X3 + . . . + anXn

Constraints:
b11X1 + b12X2 + b13X3 + . . . + b1nXn <= c1
b21X1 + b22X2 + b23X3 + . . . + b2nXn <= c2
.
.
.
bm1X1 + bm2X2 + bm3X3 + . . . + bmnXn <= cm

In this system of linear equations, Z is the objective function value that is being optimized, Xi are the decision variables whose optimal values are to be found, and ai, bij, and ci are constants derived from the specifics of the problem.

Linear Programming Assumptions

Linear programming requires linearity in the equations, as shown in the structure above. In a linear equation, each decision variable is multiplied by a constant coefficient, with no multiplying between decision variables and no nonlinear functions such as logarithms. Linearity requires the following assumptions:

Proportionality - a change in a variable results in a proportionate change in that variable's contribution to the value of the function.

Additivity - the function value is the sum of the contributions of each term.

Divisibility - the decision variables can be divided into non-integer values, taking on fractional values. Integer programming techniques can be used if the divisibility assumption does not hold (see the sketch below).

In addition to these linearity assumptions, linear programming assumes certainty; that is, that the coefficients are known and constant.
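If the divisibility assumption is dropped in the table-and-chair example, the fractional optimum (1.333 tables, 10.667 chairs) is no longer acceptable and an integer program must be solved instead. A minimal sketch, assuming a recent version of SciPy whose linprog accepts an integrality argument:

from scipy.optimize import linprog

c = [-30, -10]                          # maximize 30xT + 10xC (negated for minimization)
A_ub = [[6, 3], [3, -1], [1, 0.25]]
b_ub = [40, 0, 4]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              integrality=[1, 1], method="highs")
print(res.x, -res.fun)                  # an integer optimum such as xT = 1, xC = 11, profit 140

The integer optimum (profit 140) is lower than the fractional optimum (146.667), which is why the divisibility assumption matters.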

The simplex method. The simplex method has been the standard technique for solving a linear program since the 1940s. In brief, the simplex method passes from vertex to vertex on the boundary of the feasible polyhedron, repeatedly increasing the objective function until either an optimal solution is found or it is established that no solution exists. In principle, the time required might be an exponential function of the number of variables, and this can happen in some contrived cases. In practice, however, the method is highly efficient, typically requiring a number of steps which is just a small multiple of the number of variables. Linear programs in thousands or even millions of variables are routinely solved using the simplex method on modern computers. Efficient, highly sophisticated implementations are available in the form of computer software packages.

Interior-point methods. In 1979, Leonid Khachiyan presented the ellipsoid method, guaranteed to solve any linear program in a number of steps which is a polynomial function of the amount of data defining the linear program. Consequently, the ellipsoid method is faster than the simplex method in contrived cases where the simplex method performs poorly. In practice, however, the simplex method is far superior to the ellipsoid method. In 1984, Narendra Karmarkar introduced an interior-point method for linear programming, combining the desirable theoretical properties of the ellipsoid method and the practical advantages of the simplex method. Its success initiated an explosion in the development of interior-point methods. These do not pass from vertex to vertex, but pass only through the interior of the feasible region. Though this property is easy to state, the analysis of interior-point methods is a subtle subject which is much less easily understood than the behavior of the simplex method. Interior-point methods are now generally considered competitive with the simplex method in most, though not all, applications, and sophisticated software packages implementing them are now available. Whether they will ultimately replace the simplex method in industrial applications is not clear.
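Most modern solvers let the user choose between the two families of algorithms. A minimal sketch, assuming SciPy's linprog with its HiGHS backends, where "highs-ds" selects a dual simplex solver and "highs-ipm" an interior-point solver; for a small problem such as the table-and-chair example both return the same optimum:

from scipy.optimize import linprog

c = [-30, -10]
A_ub = [[6, 3], [3, -1], [1, 0.25]]
b_ub = [40, 0, 4]

for method in ("highs-ds", "highs-ipm"):          # dual simplex vs. interior point
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method=method)
    print(method, res.x, -res.fun)                # both report ~[1.333, 10.667], 146.667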

Introduction

Linear optimization is a procedure to optimally plan complex operations. It is based on a set of variables, so-called decision variables, that define the objective of a particular operation. Quantitative constraints are defined as a set of linear inequalities (or equalities). For example, consider a manufacturing operation of some products, using different resources in the process, with the objective to maximize profits under known quantifiable constraints. Such an operation can be formulated mathematically as follows:

Decision Variables: m different products, xi >= 0; i <= m
Objective Coefficients: pi = profit per product i; i <= m
Constraints: n resources and other constraints, rj >= 0; j <= n
Operation Coefficients: aji = constraint j per product i; j <= n
Objective: Maximize (Minimize) P = p1x1 + p2x2 + ... + pmxm
Operation Matrix: Solve:

a11x1 + a12x2 + ... + a1mxm <= r1
a21x1 + a22x2 + ... + a2mxm <= r2
..................................
aj1x1 + aj2x2 + ... + ajmxm >= rj
..................................
an1x1 + an2x2 + ... + anmxm <= rn

Notice the ">=" relation in row j of the linear system above. This indicates a non-standard optimization problem. In standard linear optimization problems the production matrix contains only the "<=" relations. Minimization problems are solved by finding the maximum of the negative objective function.
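To convert such a non-standard row into standard form, the j-th inequality can be multiplied by -1, which turns ">=" into "<="; for example, the demand constraint xC >= 3xT of the earlier table-and-chair example becomes 3xT - xC <= 0. Likewise, a hypothetical minimization such as minimize 2x1 + 3x2 is handled by maximizing the negated objective -2x1 - 3x2 and negating the optimum found.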

The Input Data File

Given all required quantitative information about an operation, mathematical algorithms can be applied to solve related linear optimization problems, the most popular of which is the Simplex Method. Using a computer program, all data relevant to an operation must be accessed either through human interface or electronic storage. There are a few popular (ASCII) file formats for data acquisition and storage, e.g. MPS (Mathematical Programming System), that are comprehensive in scope and have the ability to describe a variety of linear problems. However, for a computer program, the most efficient method to save, transport and access data is through binary files. The following file structure, in x86/87 notation, contains all required data elements to solve a linear maximization problem.

VarType     DB  ?                       ; 0 = Integer, 1 = Real, 2 = BCD
VarSz       DB  ?                       ; 1=Byte, 2=Word, 4=Dword, 8=Qword, 10=Tbyte
NumD        DD  m                       ; number of decision variables (products)
NumC        DD  n                       ; number of constraints (resources)
ConRel      DB  n DUP (?)               ; constraints relation flags [j] {-1,0,+1}, j <= n
Objective   [VarSz] m DUP (?)           ; objective (profit) coef. array [pi] with m elements
Constraint  [VarSz] n DUP (?)           ; constraints array [rj] with n elements
OpMtrx      [VarSz] n DUP (m DUP (?))   ; operation (production) coef. matrix [aji] with m columns and n rows

The elements of the constraints relations flags array (ConRel) represent the type of constraint for each row in the Operations Matrix: "<=" -1, "=" 0, ">=" +1. Therefore, in a standard linear optimization problem all flags are set to -1. The minimal size of an input data file will thus be 10 + VarSz*(m + n + m*n) + n bytes. This structure enables the solver (_Simplex) to decode any standard (x86/87) number. Note that all array elements must be of the same size (VarSz) and type (VarType). In particular, Integers may be 1, 2, 4 or 8 bytes long; Real numbers, 4, 8 or 10 bytes; and BCD, 10 bytes.

The Results File

A problem solving procedure can save the results to any memory device. The associated binary file may have the following structure:

NumD   DD  m            ; number of decision variables (products)
NumC   DD  n            ; number of constraints (resources)
Max    DT  P            ; resulting maximum (profit) P
D_Var  DT  m DUP (?)    ; array of resulting decision variables [xi], i <= m
S_Var  DT  n DUP (?)    ; array of resulting slack or surplus variables [sj], j <= n

The slack or surplus variables sj are the quantities of constraints that have not been utilized. Note that all results are returned as 80 bit floating point numbers (DT = Tbyte), and the file size is 8 + 10*(1 + m + n) bytes.
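A minimal sketch of writing such an input data file from Python, assuming real (double precision, VarSz = 8) coefficients and the little-endian byte order of x86; the function name and example values are illustrative, not part of the specification:

import struct

def write_lp_input(path, objective, constraints, op_matrix, flags):
    """objective: m profit coefficients; constraints: n right-hand sides;
    op_matrix: n rows of m coefficients; flags: n values from {-1, 0, +1}."""
    m, n = len(objective), len(constraints)
    with open(path, "wb") as f:
        # Header: VarType (DB), VarSz (DB), NumD (DD), NumC (DD) = 10 bytes.
        f.write(struct.pack("<BBII", 1, 8, m, n))
        # ConRel: n constraint-relation flags, one signed byte each.
        f.write(struct.pack(f"<{n}b", *flags))
        # Objective, Constraint and OpMtrx arrays stored as 8-byte reals (Qword).
        f.write(struct.pack(f"<{m}d", *objective))
        f.write(struct.pack(f"<{n}d", *constraints))
        for row in op_matrix:
            f.write(struct.pack(f"<{m}d", *row))

# Table-and-chair example: file size = 10 + 8*(2 + 3 + 2*3) + 3 = 101 bytes.
write_lp_input("tables.lp", [30, 10], [40, 0, 4],
               [[6, 3], [3, -1], [1, 0.25]], [-1, -1, -1])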

System Requirements & Limitations

Optimizing large and complex operations involves large numbers of parameters that have to be processed fast. Hence, the higher the CPU's clock speed and the more RAM available, the better. However, today's personal computers are capable of handling relatively large and complex optimization problems. Furthermore, when constrained by a real 16-bit operating system - e.g. MS-DOS (in short, DOS) - a (hard) disk can compensate for RAM limitations, noting that its environment is limited to 640KB RAM, 8GB disk space [1] and 2GB file size. A Simplex algorithm running under DOS can therefore efficiently handle no more than 3276 constraints and requires a minimum of 256KB of RAM and 1GB of free disk space. Nonetheless, even larger problems can be handled, but with (much) less efficiency. Furthermore, by utilizing unused SVGA video memory, some RAM limitations can be overcome.

Large optimization problems require many computations and algorithmic iterations, often resulting in large errors and unreliable solutions. To overcome this difficulty (up to a certain degree), extended precision arithmetic can be used internally. Even under DOS limitations, large optimization problems can be successfully handled by using internally defined high-precision numbers [2]. The allowed number of constraints is then reduced accordingly. For example, using internal 256-bit-mantissa FP numbers (a 256-bit mantissa, a 31-bit exponent and 1 sign bit), the number of constraints may not exceed 720.
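As a small illustration of the benefit of higher-precision arithmetic, the binding constraints of the earlier table-and-chair example can be solved without any rounding error at all; the sketch below uses Python's exact rational arithmetic as a stand-in for internally defined high-precision numbers:

from fractions import Fraction as F

# Binding constraints: xT + xC/4 = 4 and 6xT + 3xC = 40.
# Substituting xT = 4 - xC/4 into the second equation gives (3 - 6/4)*xC = 40 - 24.
xC = (F(40) - F(24)) / (F(3) - F(6, 4))    # = 32/3
xT = F(4) - xC / 4                         # = 4/3
print(xT, xC, 30 * xT + 10 * xC)           # 4/3 32/3 440/3 (exactly 146.666...)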

Research Paper In

Specialization: Building Administration 01

Submitted by: Rolly A. Balberan


BS Architecture - 3

Submitted to: Arch. Dexter Labasan


Instructor
