Acknowledgments
Preface
1 Introduction
1.1 Cellular Manufacturing
1.2 Benefits and Drawbacks of CM
1.2.1 Advantages
1.2.2 Disadvantages
1.3 Problem Statement
1.4 Research Objective
1.5 Structure of the Thesis
1.6 Reading Suggestion
3 Literature Review
3.1 Classification of Methods
3.1.1 Cluster Procedures
4 Problem Formulation
4.1 Assumptions
4.1.1 General Assumptions
4.1.2 Particular Assumptions
4.2 Notations
4.2.1 Indices
4.2.2 Mandatory Input Parameters
4.2.3 Complementary Input Parameters
4.3 Mathematical Formulation
4.3.1 Decision Variables
4.3.2 Constraints
4.3.3 Evaluation Parameters
4.3.4 Mathematical Model
4.4 Size of Search Space
4.4.1 Size of Search Space for the Resource Planning Problem
4.4.2 Size of Search Space for the Cell Formation Problem
4.5 Conclusion
5 Methodology
5.1 Genetic Algorithm
5.1.1 Definition of Genetic Algorithm
5.2 Grouping Genetic Algorithm (Gga)
5.3 Multiple Objective Grouping Genetic Algorithm (Mogga)
5.4 Proposed Solutions: Methodology
5.4.1 Mogga with an Integrated Module
5.4.2 Simogga: a Hybrid Mogga for GCFP
5.5 Conclusion
6 Implementation
6.1 Introduction
6.2 Encoding
6.2.1 Coding of Chromosomes
6.2.2 Operator Rates
6.3 Initialisation
6.4 Heuristics
6.4.1 Random Heuristic
6.4.2 Flow Heuristic CF
6.4.3 Process Heuristic RP
6.4.4 Process Selection
6.4.5 Harhalakis Heuristic CF
6.4.6 CF-Mogga
6.4.7 RP-Mogga
6.5 Operators
6.5.1 Selection
6.5.2 Crossover
6.5.3 Mutation
6.5.4 Reconstruction
6.6 Improvements
6.6.1 Local Optimisation
6.6.2 Mutation
6.7 Stopping Condition
6.8 Conclusion
7 Validation
7.1 Introduction
7.2 Separated Resolution
7.2.1 Heuristic CF
7.2.2 Heuristic RP
7.2.3 Process Selection
7.2.4 Conclusion
7.3 Simultaneous Resolution
7.4 Adaptation
7.5 Efficiency of the Algorithm
7.5.1 Huge Number of Generations
7.5.2 Efficiency of the Crossover
7.5.3 Large Population
7.5.4 Comparison with the Successive Resolution
7.6 Adaptation of Parameters
7.6.1 Number of Generations
7.6.2 Population Size
7.6.3 Initialisation Rate
7.6.4 Operator Rates
7.6.5 Mutation
7.6.6 Final Solutions
7.7 Conclusion
8 Applications
8.1 Introduction
8.2 CS without Alternativity
8.3 CS with Alternative Routings and Processes
8.4 Conclusion for Literature Case Studies
9 Conclusions
9.1 Summary of the Results
9.2 Further Research
9.2.1 Transformation of the Sigga into a Simogga
9.2.2 Sensitivity of the Algorithm when Adding Constraints
9.2.3 Extensions in Cellular Manufacturing
9.2.4 Another Application?
G Acronyms
List of Figures
6.1 Flow matrix for the RP-solution found by the Random Heuristic RP.
List of Tables
7.10 Best values after 100 generations using 4 RP heuristics for the cases RA = 100 and PA = 100.
7.11 Values of the intra-cellular flow for ideal case studies (average for different values of RA and PA). Complete values in Tables E.1, E.2 and E.3.
7.12 Comparative table with best values for a population of 32 chromosomes after 100 generations (average of four sets of alternativity). See complete Table E.4.
7.13 Comparative table with best values for a population of 32 chromosomes after 100 generations (average of cases). See complete Table E.4.
7.14 Best evolution values without and with the local optimisation.
7.15 Comparative table with best values for the algorithm without and with the local optimisation after 100 generations and 32 chromosomes (average of four sets of alternativity). See complete Table E.5.
7.16 Average best values for the algorithm with the local optimisation after 1000 generations (32 and 64 chromosomes) for four sets of alternativity. See complete Tables E.6 and E.7.
7.17 Average best values for the algorithm with the local optimisation after 1000 generations (32 and 64 chromosomes) for each of the 20 cases. See complete Tables E.6 and E.7.
7.18 Best flows when the Simogga is run with and without crossover.
7.19 Best flows when the Simogga is run with a large population without generation.
7.20 Average values (on 4 cases) found with the Simogga and the Mogga with an integrated module (RP and CF). See complete Table E.8.
7.21 Average values (on RA and PA) found with the Simogga and the Mogga with an integrated module (RP and CF). See complete Table E.8.
7.22 Average values found with the Mogga with an integrated heuristic (RP and CF).
7.23 Values (average, minimum and maximum) for the percentage of cases reaching 100% and the best generation for different generations and two population sizes (32 and 64).
7.24 Percentage of the number of cases reaching 100% and the average best flow for different generations and a population size of 32 and 64 chromosomes.
7.25 Values of the percentage of cases and the generation for the cases reaching 100% of intra-cellular flow and the best flow found after 100 generations (average of 10 runs for 80 cases).
7.26 Values of the percentage of cases and the generation for the cases reaching 100% of intra-cellular flow and the best flow found after 100 generations for different values of the initialisation rate.
7.27 Best values found after 100 generations for different values of operator rates (continued).
7.28 Best values found after 100 generations for different values of operator rates.
7.29 Comparative table with average values based on the four sets of alternativity for the algorithm after 100 generations for two operator rates (80 and 100).
7.30 Average values by set of alternativity for the algorithm after 100 generations for two operator rates (80 and 100).
7.31 Evolution parameters (average values based on the four sets of alternativity) for the algorithm after 100 generations for two operator rates (80 and 100).
7.32 Probability to apply a mutation depending on the mutation frequency and mutation rate.
7.33 Best average values by set of alternativity for different mutation parameters (RA = 100).
7.34 Best average values for large mutation frequencies with Mut. Rate = 1 and Mut. Int. = 0.2 (RA = 100).
7.35 Comparative table with best values for the final algorithm after all adaptations after 100 generations for four sets of alternativity.
7.36 Comparative table with best values for the final algorithm after all adaptations after 100 generations for four sets of alternativity.
7.37 Comparative table with average values (RA = 100) before and after local optimisation and after adaptation of parameters after 100 generations.
7.38 Average (on RA and PA) best values found with the final Simogga and the Mogga with an integrated module (RP and CF). See complete Table E.10.
7.39 Best values found with the final Simogga and the Mogga with an integrated module (RP and CF). See complete Table E.10.
E.1 Average values of the intra-cellular flow for ideal case studies (RA = 0 and PA = 100).
E.2 Average values of the intra-cellular flow for ideal case studies (RA = 100 and PA = 0).
E.3 Average values of the intra-cellular flow for ideal case studies (RA = 100 and PA = 100).
E.4 Comparative table with best values for a population of 32 chromosomes after 100 generations for four sets of alternativity.
E.5 Comparative table with best values for the algorithm with the local optimisation after 100 generations and 32 chromosomes for four sets of alternativity.
E.6 Comparative table with best values for the algorithm with the local optimisation after 1000 generations and 32 chromosomes for four sets of alternativity.
E.7 Values of the generation and the percentage of cases reaching 100% of intra-cellular flow and the best intra-cellular flows after 1000 generations of the Simogga and a population of 64 chromosomes.
E.8 Best values found with the Simogga and the Mogga with an integrated module (RP and CF).
E.9 Comparative table with best values for the final algorithm after all adaptations after 100 generations for four sets of alternativity.
E.10 Best values found with the final Simogga and the Mogga with an integrated module (RP and CF).
Acknowledgments
Preface
• the grouping of the machines into cells, producing traffic inside and outside the cells.

Depending on the solution to the first problem, different clusters will be created to minimise the inter-cellular traffic.
In this thesis, an original method based on the grouping genetic algorithm (Gga) is proposed to solve these two interdependent problems simultaneously. The efficiency of the method is highlighted in comparison with methods based on two integrated algorithms or heuristics. Indeed, to form these cells of machines together with the allocation of operations to the machines, the methods able to solve large-scale problems are generally composed of two nested algorithms: the main one calls the secondary one to complete the first part of the solution. The application domain goes beyond the manufacturing industry; the method can, for example, be applied to the design of electronic systems, as explained in the future research.
All the algorithms and methods described in this thesis are implemented in a ready-to-use system. The development platform chosen is a Kubuntu/Linux operating system running the KDE graphical environment, using the GNU development tools and Eclipse. The developed software currently consists of about 10,440 lines of C++ source code written by the author, based on 29,360 lines of open-source code developed by Pascal Francq.
Chapter 1
Introduction
• job shop layout is functional and used for high part variety and low volume;
• flow shop layout is based on a line layout and dedicated to parts with low
variety and high volume.
In general, functional job shops are designed to achieve maximum flexibility such that a wide variety of parts with small lot sizes can be manufactured. Parts manufactured in job shops usually require different operations and have different operation sequences. Operating times for each operation can vary significantly. Parts are released to the shops in batches (jobs). The objective is to locate similar machines, and labour with similar skills, together. So the machines are grouped functionally according to the type of manufacturing process: drills in one department, lathes in another, and so forth. In job shops, jobs spend 95% of their time in non-productive activity, waiting in a machine queue; the remaining 5% is split between lot setup and processing [12]. Generally, the distance between the machines belonging to the process of a part is large, which produces a lot of traffic between the machines and throughout the company, as can be seen in Figure 1.1. When the part mix demand changes or when a new part is ordered, the part is allocated to free machines without respecting part families1 or minimising moves. As a result, production planning, scheduling and control are very difficult.
1 In the group technology concept, a part family is a set of related parts that can be produced by the same sequence of machining operations because of similarity in shape and geometry or similarity in production operation processes.
[Figure 1.1: job shop (functional) layout, with machine departments (including painting, drilling and assembly) arranged around a receiving and shipping department, and a second layout with machines arranged between receiving and shipping.]
part types more economically than other types of manufacturing systems [188]. If
volumes are very large, pure item flow lines are preferred; if volumes are small and
part types are varied to the point of only slight similarities between jobs, it is preferred
to use a job shop layout.
Cellular manufacturing has been one of the most successful ways that companies
have coped with the challenge of today’s global competitive environment. The ap-
proach has been applied in a variety of industries over the past three decades, such
as machinery and tools, aerospace and defence, automotive, and electrical [303].
• Reduction of lot sizes: The reduction of setup time permits working with
smaller lot sizes that are more economical. Small lot sizes also smooth the
production flow.
[Figure: cellular layout with three cells (Cell 1, Cell 2, Cell 3), a left-over process area, and a receiving and shipping department.]
• Reduction of material handling cost and time: In CM, each part is processed as completely as possible within an independent cell. Part travel time and distance between machines are minimal. Thus, all flows are concentrated into the cells, reducing flow time.
• Reduction in control effort: It is easier to follow the parts in small entities like cells than in a job shop layout, where parts travel through the entire shop. Localised and specialised cells dedicated to similar parts concentrate the expertise.
• Ease of scheduling and production planning: Scheduling and planning are complicated when the numbers of machines and parts are high. In CM, the manufacturing facility is broken down into manufacturing cells and each part travels within a single cell. The system to analyse is smaller and easier to plan.
• Improvement in product quality: Parts are produced in a small area and the
feedback is immediate. Thus, the process can be stopped quickly when things
go wrong.
1.2.2 Disadvantages
• High implementation cost: Reorganisation of an existing layout can be an expensive task. Costs comprise the determination of manufacturing cells and part families, the physical reorganisation, and the training and specialisation of employees.
• Lack of flexibility: Ideally, the cells should be designed with maximum flexibility to handle the largest possible number of different part types. However, to realistically obtain the benefits of cellular manufacturing, the actual flexibility of the cells is limited.
• Change in part range and mix: With changing part ranges, it is not possible to continually change the layout or the manufacturing cells; as a result, some work-centres may become bottlenecks while others remain under-used. There is a lack of flexibility compared to the job shop layout.
The order in which these steps are accomplished determines the cell formation strategy. The resolution of the cell formation problem stops after these three design steps. The physical arrangement of machines into cells, the disposition of each cell, and production planning and scheduling do not belong to the CFP but to the layout design problem.

In summary, to solve a cell formation problem, it is first necessary to identify the strategic issues in specifying the expected solution for the production. The desired solution can be oriented towards machine or cell flexibility, cell layout, machine type, etc. Much research has been published addressing either technical issues (e.g.
The first three objectives must be treated simultaneously. The last objective is a consequence of the solutions to the first three problems.
To solve this problem, a new integrated approach (Simogga) based on an adapted Multiple Objective Grouping Genetic Algorithm (Mogga) is proposed to solve simultaneously the preferential routing selection for each part, including the process selection (by solving an associated resource planning problem), and the machine grouping (cell formation problem).
The Multiple Objective Grouping Genetic Algorithm was developed in previous works and allows a multi-criteria decision-aid tool to be included in the evaluation process of a Grouping Genetic Algorithm. This algorithm has proved itself in different areas (structured search, assembly line design). The multi-criteria methodology presented in this thesis is similar to that of the Mogga. The different criteria needed to optimise the GCFP are explained and developed in the coding of the Simogga. However, the validation phase of the Simogga is presented with only one criterion: the minimisation of the traffic between the cells of machines. For lack of time and of multi-criteria case studies, the validation of the proposed hybrid algorithm with all criteria is left as a perspective.
The originality of this thesis does not lie in the multi-criteria methodology but in a new use of the Mogga to solve two interdependent problems simultaneously. The classical approaches to solving two interdependent problems work sequentially, i.e. an algorithm is used to solve the first problem and another method, fast and non-optimal, is applied to solve the second, embedded problem. Iterations can be made to optimise the solutions. The hybrid approach proposed in this thesis makes it possible to treat both problems at the same level and simultaneously. To the best of our knowledge, this resolution method has never been applied to solve two interdependent problems.
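By way of contrast with the simultaneous approach, the classical nested scheme described above (a main algorithm that calls a fast, non-optimal secondary heuristic to complete each candidate solution before evaluation) can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the structure names and the span-minimising rule used by the secondary heuristic are assumptions made for the sketch.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Candidate solution of the nested scheme: the main algorithm fixes the
// machine-to-cell assignment; the secondary heuristic fills in the routings.
// (Hypothetical names, for illustration only.)
struct Candidate {
    std::vector<int> cellOfMachine;  // main problem: machine index -> cell
    std::vector<int> routingOfPart;  // secondary problem: part index -> routing
};

// Secondary heuristic: for each part, greedily pick the routing whose machines
// span the fewest distinct cells (a cheap proxy for low inter-cellular traffic).
// routings[p][r] is the machine sequence of routing r of part p.
void completeRoutings(Candidate& c,
                      const std::vector<std::vector<std::vector<int>>>& routings) {
    c.routingOfPart.assign(routings.size(), 0);
    for (std::size_t p = 0; p < routings.size(); ++p) {
        int best = 0, bestSpan = 1 << 30;
        for (std::size_t r = 0; r < routings[p].size(); ++r) {
            std::vector<int> cells;  // distinct cells visited by this routing
            for (int m : routings[p][r]) {
                int cell = c.cellOfMachine[m];
                if (std::find(cells.begin(), cells.end(), cell) == cells.end())
                    cells.push_back(cell);
            }
            if (static_cast<int>(cells.size()) < bestSpan) {
                bestSpan = static_cast<int>(cells.size());
                best = static_cast<int>(r);
            }
        }
        c.routingOfPart[p] = best;
    }
}
```

The main algorithm would call `completeRoutings` on every candidate before evaluating it, which is exactly the asymmetry between the two problems that the simultaneous approach removes.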
Chapter 5 begins with a brief explanation of the tools used in the Simogga, including the Genetic Algorithm (Ga), the Grouping Genetic Algorithm (Gga) and the Multiple Objective Grouping Genetic Algorithm (Mogga). Next, the different reasonings and evolutions of the Mogga that allowed the creation of the Simogga are described.
The complete description of the Simogga is given in Chapter 6. The particular encoding is explained first. Next, the different heuristics used are presented with some illustrations. The chapter finishes with the description of the local optimisation applied to improve the results of the Simogga.
Chapter 7 is devoted to the validation of the algorithm. A large set of ideal case studies was created for this purpose. To begin, the choice of the best heuristics is validated by treating each of the three problems (process selection, operation allocation and machine grouping) individually. Next, the different parameters of the Simogga, such as the population size, operator rates, mutation rate, etc., are fixed. The efficiency of the algorithm is finally demonstrated and compared to the successive resolution.
The case studies found in the literature are treated and compared in Chapter 8. The final chapter, Chapter 9, concludes this thesis by summarising the previous chapters and proposing perspectives.
The appendices cover: an overview of the multi-criteria decision-aid method PROMETHEE (Appendix A); a general introduction to grouping and multi-criteria problems (Appendix B), containing useful information to fully understand Chapters 5 and 6; a coding description of the proposed algorithm, the Simogga (Appendix C); a detailed application of the formula defined in Section 4.4.2.5 (Appendix D); the complete tables used for the validation phase (Appendix E); a description of the case studies found in the literature (Appendix F); and a list of acronyms (Appendix G).
Chapter 2

Cell Formation Problem

Many researchers have recognised the need to take into account different parameters, like the process sequence, the production volume and alternative processes, to solve a cell formation problem. In this chapter, a classification of the data found in the cell formation literature is described. The parameters used to solve a cell formation problem are separated into data (including part, operational, machine and cost data), constraints and the classical evaluation parameters used to compare grouping results. The chapter closes with the difficulties encountered in a case study found in the literature. This chapter should be seen as the definition of all the parameters used in the next chapter, the literature review.
2.1 Data
2.1.1 Part and Production Data
The simplest way to characterise a part is to define a set of operations and a set of machines; the machines make it possible to perform the operations that produce the final part. This is the minimum data needed for a cell formation problem. In addition to these data, several other parameters can be used to solve a cell formation problem.
2.1.1.2 Batches
• Batch size: A distinction must be made between the process batch size and the transfer batch size. The first concerns the number of parts processed on a resource without intervening setup. The transfer batch size defines the number of parts that can be transported from one resource to the next during production; it is used to compute the movements between cells and to quantify the cost of transportation. For instance, if the manufactured parts are small, the parts will not be transported one by one between two machines, and the cost of transportation will be distributed over each part. In the cell formation problem, it is most often the transfer batch size that is meant by the term batch size.
• Pallet capacity: The pallet capacity can be used to define the transfer batch size and to automatically split a batch that is too large.
• Required tools: Defining the tools required to achieve an operation identifies the set of machines that have the tools to perform it.
• Setup times: The setup time defines the period required to prepare the machine or system so that it is ready to function or to accept a job.
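The role of the transfer batch size described above can be sketched with a small helper (a hypothetical function, for illustration only) that converts a production volume into the number of transport trips between two successive resources, over which the transportation cost is then distributed:

```cpp
// Hypothetical helper: number of transport trips needed to move `volume`
// parts between two successive resources when at most `transferBatch` parts
// travel together. Uses ceiling division so a partial last batch still counts.
long transferTrips(long volume, long transferBatch) {
    return (volume + transferBatch - 1) / transferBatch;  // ceil(volume / batch)
}
```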
As explained above, the sequence of operations can be used to compute the movements between cells. But it can also be used as a similarity measure between parts, to identify which parts can be grouped together [251, 22].
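One simple way to turn operation sequences into a similarity measure is a Jaccard-style coefficient on the sets of machines the two sequences visit. This is an illustrative choice for the sketch below; the measures used in the works cited above may differ.

```cpp
#include <algorithm>
#include <iterator>
#include <set>
#include <vector>

// Illustrative similarity between two parts: shared machines divided by all
// machines used by either part (Jaccard coefficient on the machine sets
// extracted from the two operation sequences).
double partSimilarity(const std::vector<int>& seqA, const std::vector<int>& seqB) {
    std::set<int> a(seqA.begin(), seqA.end());
    std::set<int> b(seqB.begin(), seqB.end());
    std::vector<int> common;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::back_inserter(common));
    std::size_t unionSize = a.size() + b.size() - common.size();
    return unionSize == 0 ? 0.0 : static_cast<double>(common.size()) / unionSize;
}
```

A similarity of 1 means the two parts visit exactly the same machines and are natural candidates for the same family; note that this particular coefficient ignores the order of the operations.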
A vast majority of the solution procedures use the Product Flow Analysis (PFA, see Section 3.1) approach and assume that each part has only one process route. In this case, the cell formation problem corresponds to a simple grouping problem of machines. Ignoring alternative processes may reduce the possibility of forming independent manufacturing cells. Taking alternative process plans into account offers several benefits, such as allowing for a smaller number of machines, higher machine use, reduced interdependence between cells, and an improved system throughput rate [146, 126]. The advantages and possibilities of multiple process routes are discussed by Nagi et al. [192].
The cell formation problem incorporating alternative process plans is called the
generalised group technology problem. Kusiak (1987) was one of the first to
consider these alternative processes [146].
The presence of alternative routings is typical in many discrete, multi-batch, small
lot size production environments [55]. This routing flexibility increases the number
of ways to form manufacturing cells.
Different approaches make it possible to take alternative processes or routings into account, but not all authors use the same definition of an alternative process. Alternatives may exist at any level of a process plan. The ways of taking processes or routings into account can be represented by the following six cases:
{m6 − m5 − m3 − m4}. The part can have two different routings with the same number of operations, like Pr1(pk) and Pr2(pk), but this is not mandatory, as shown by Pr3(pk). In this example, the problem can be seen as the choice among the three routings of the proposed part to optimise the complete problem.
All the cases are summarised in Table 2.1 below. In case 1, we do not speak of an alternative routing or process: there is only one available sequence of machines (routing). In cases 2 to 5, we have alternative routings to different degrees; the cases are classified in increasing order of alternative character. It is only in the last case (6) that we can consider an alternative process in terms of sequences of machine types. The use of these alternative processes is limited. Suresh and Slomp point out that in most firms no more than one process plan and one set of tooling exist for each operation [263]. The inclusion of alternative process plans for each part type requires a major re-examination of process plans and shop floor practices, and major investments in tooling.
Table 2.1: Summary of the different notions of alternative routings and processes for
a part k.
optimise the cell configuration by choosing the best route [250, 188, 38, 6, 22, 117]. Several authors preferred to use the flexibility of the routings and processes by allowing alternative routings to coexist. This coexistence is possible when batches of parts are split, or when the problem is solved over different periods, as in dynamic cell reconfiguration [224, 88, 95, 55].
• Part Cost Data including unit operation time, operation cost, set-up cost,
direct production cost, tool consumption cost, subcontract cost and inter-
cellular/intra-cellular transportation cost.
• Machine Cost Data including tool usage cost, operator wage, subcontract cost,
space cost and machine investment cost.
2.2 Constraints
2.2.1 Constraints about Machines
• Machine capacity. It is obvious that, in the design of CMSs, one of the basic requirements is that there should be adequate capacity to process all the parts. If operating times are defined for all operations, it is necessary to limit the capacity of the machines to solve a capacitated cell formation problem.
• Use levels. Two levels of machine use are normally used. A maximum use is specified to ensure that machines are not overloaded. A minimum use for a new machine ensures that it is economically justifiable to include the new machine in a cell. These constraints make it possible to deal with the obligation to use expensive machines without treating the costs explicitly [156, 263].
• Cell size. The size of a cell is mostly measured by the number of machines in
the cell, but it can also be expressed in terms of number of operators,
available space or number of outputs. A cell size limit is necessary for several
reasons: available space might limit the number of machines in a cell, and if a
cell is run by operators, it should not be too large to be controlled visually by
an operator. Instead of a single value, the cell size can be bounded by a maximum
and/or a minimum value, which allows more flexibility in the design process.
2.3 Evaluation
In the literature, there are many measures of the goodness of a machine-part grouping
in cellular manufacturing. The two following ones are the most frequently used because
they are easy to implement: they are quantitative criteria of the goodness of block
diagonal forms of binary matrices containing only binary data (1 and 0).
2.3.1 Definitions
• A block is a sub-matrix of a machine-part matrix where the rows represent a
machine cell and the columns represent the part family associated with the cell.
• A void is a zero element that appears in the diagonal blocks.
The notations for the evaluation of incidence matrix are presented in Table 2.2.
                             Number of 1s   Number of 0s
In the diagonal blocks            e1             ev
In the off-diagonal blocks        e0             eh
Total in the matrix               e              o
where
e1 is the number of ones in the diagonal block
e0 is the total number of exceptional elements outside the diagonal blocks
ev is the total number of voids in the diagonal blocks
eh is the total number of zeros outside the diagonal blocks
e is the total number of ones in the machine-part matrix
o is the total number of zeros in the machine-part matrix
where
• η1 is the ratio of the number of 1s in the diagonal blocks to the total number
of elements in the diagonal blocks of the final matrix:

    η1 = e1 / (e1 + ev)

• q is a weight factor (0 ≤ q ≤ 1)
This expression can also be written as follows:

    η = q · e1 / (e1 + ev) + (1 − q) · e0 / ((o − ev) + (e − e1))

where e is the total number of ones in the matrix.
This efficiency function is non-negative (0 ≤ η ≤ 1). The weight parameter q
lets the designer put more importance either on the void elements or on the
inter-cellular movements (exceptional elements). If q = 0.5, both have the same
impact. It is difficult to assign a good value to this parameter. Kumar and
Chandrasekharan analysed 100 randomly chosen data sets and concluded that, even
though an algorithm produces the best possible block diagonal form in all the cases,
the range of grouping efficiency varies in practice from about 75 to 100% [142]. It is
therefore difficult to assign a value of q that correctly weighs voids and exceptional
elements in a block diagonal matrix.
The main problem is encountered with large matrices, where a lower value of q
should be assigned instead of one giving equal impact to the number of voids and
exceptional elements. Otherwise the second term (1 − q) diminishes and becomes
less effective, and thus the overall measure is less effective [42].
Therefore, Kumar and Chandrasekharan suggested choosing a very low value of q
to balance the weights of the voids and the exceptional elements [142]. It is also
desirable to have a common denominator for both quantities in order
to choose a good weight factor q.
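As a numerical illustration of this sensitivity, here is a minimal sketch of the efficiency formula exactly as written above (the counts e1 = 9, ev = 1, e0 = 1 and e = o = 10 are those of the worked example of section 2.3.6; the function name is mine):

```python
def grouping_efficiency(e1, ev, e0, e, o, q=0.5):
    """Weighted grouping efficiency, as written in the text:
    eta = q * e1/(e1+ev) + (1-q) * e0/((o-ev) + (e-e1))."""
    eta1 = e1 / (e1 + ev)                 # quality of the diagonal blocks
    eta2 = e0 / ((o - ev) + (e - e1))     # second term of the expression above
    return q * eta1 + (1 - q) * eta2

# Counts of the worked example (section 2.3.6): e1 = 9, ev = 1, e0 = 1, e = o = 10.
for q in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(q, grouping_efficiency(9, 1, 1, 10, 10, q))
```

Varying q shows how strongly the chosen weight shifts the measure between the two terms, which is exactly the difficulty discussed above.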
• It is able to incorporate both the within-cell machine use and the inter-cell
movement.
• The built-in mechanism assigns weights to voids and exceptional elements which
are unknown to the user.
• Grouping efficacy is more sensitive to changes in the number of voids than to
changes in the number of exceptional elements.
• In the case of grouping efficacy, the user has no freedom to assign the weights
for voids and exceptional elements.
To these limitations, I can add an important one for this work. The objective of
the cell formation problem is to define independent cells, so the number of exceptional
elements is primary. But the number of voids characterises the use of the cell by
the parts: a part is penalised if it does not use all the machines belonging to its
cell. As we can see in Table 2.4, part 4 is assigned to cell 2 but uses only
machine 1. The solution is penalised because this part requires only one machine.
These voids are important when the group formation is based on a matrix
diagonalisation, as we will see in section 3.1.1.
These measures are available for every case study, whatever the data set used.
Indeed, the parameters used depend only on the cell to which each machine and each
part are assigned, and on nothing else. When the data set is more extended, these
basic values remain available. This is why these basic measures are always used in
the literature to compare the performance of several methods.
Parts
Machines  1 2 3 4 5
   1      1 . 1 1 .
   2      . 1 1 . 1
   3      1 . 1 . .
   4      . 1 . . 1
2.3.6 In Practice
These measures are illustrated in the example below, taken from Kusiak [147]. Five
parts and four machines must be grouped into cells. A machine-part matrix is one
way to represent the processing requirements of parts on machines, as shown in
Table 2.3. This matrix structure is explained in section 3.1.1. A 1 entry in row i
and column j indicates that part j has one or more operations on machine i. In
our example, part 1 has operations on machines 1 and 3. Table 2.4 presents a cell
solution in which two clusters are formed. Cell 1 regroups machines 2 and 4 and
produces parts 5 and 2. Cell 2 groups machines 1 and 3 and produces parts 1, 4
and 3.
In this matrix, a 1 outside the diagonal blocks represents an exceptional part or a
bottleneck machine. Part 3 is an exceptional part because it needs to be processed
on machines 1 and 3 in cell 2, but also on machine 2, which is assigned to cell 1.
Machine 2 is a bottleneck machine because it processes parts belonging to two cells.
The 0 represents a void in cell 2. This void indicates that machine 3, assigned
to the second cell, is not required for the processing of part 4 in this cell. “The
presence of voids leads to inefficient large cells, which in turn could lead to additional
intra-cellular material handling costs and complex control requirements.” [188]
For this example, the grouping efficacy and the grouping efficiency can be computed
on the diagonal form of Table 2.4, where e = 10, e0 = 1, ev = 1, η1 = 9/10 and
η2 = 1/10.

    Γ = (e − e0) / (e + ev) = 9/11 ≈ 0.82
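These counts can be checked mechanically. The sketch below (variable names are mine) recomputes e, e0, ev and the grouping efficacy Γ from the incidence matrix of Table 2.3 and the cell assignment of Table 2.4; following the exceptional-element analysis, part 3 is placed in the family of cell 2:

```python
# Machine-part matrix of Table 2.3: rows = machines 1-4, columns = parts 1-5.
A = [
    [1, 0, 1, 1, 0],  # machine 1: parts 1, 3, 4
    [0, 1, 1, 0, 1],  # machine 2: parts 2, 3, 5
    [1, 0, 1, 0, 0],  # machine 3: parts 1, 3
    [0, 1, 0, 0, 1],  # machine 4: parts 2, 5
]
machine_cell = {1: 2, 2: 1, 3: 2, 4: 1}        # cell 1 = {2, 4}, cell 2 = {1, 3}
part_family = {1: 2, 2: 1, 3: 2, 4: 2, 5: 1}   # family of each part

e = e0 = ev = 0
for m, row in enumerate(A, start=1):
    for p, a in enumerate(row, start=1):
        inside = machine_cell[m] == part_family[p]
        if a == 1:
            e += 1
            if not inside:
                e0 += 1   # exceptional element (1 outside the diagonal blocks)
        elif inside:
            ev += 1       # void (0 inside a diagonal block)

gamma = (e - e0) / (e + ev)   # grouping efficacy
print(e, e0, ev, gamma)
```

The single exceptional element is part 3 on machine 2, and the single void is part 4 on machine 3, as described in the text.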
2.3.7 To Conclude
In conclusion, these parameters are often used in the literature to compare the results
of cell formation problems and to build comparative tables. However, several
limitations can be highlighted:
• These parameters are applicable only to an incidence matrix with 0s and 1s.
• The operation sequences are not taken into account in these evaluations.
• The traffic between cells cannot be computed, as the operation sequence is not
used.
• All the combinations issued from the alternative processes must be enumerated
as alternative routings to use the generalised parameters.
• an incidence matrix indicating the use of each machine by all parts (1 means
that the machine is used by the part, 0 otherwise);
• an incidence matrix indicating the operating sequence, where each 1 is replaced
by the identifier i of the ith operation of the part;
• an incidence matrix where the 1s are replaced by the operating time on the
associated machine;
• an incidence matrix with alternative routings, where the column defining the
part is duplicated for each proposed routing.
All case studies based on these matrices take into account neither the operating
sequences nor the process plans.
If the incidence matrix is not used, the problem is described by tables with different
parameters:
• tables with parts and demands;
• tables with the operating sequences, i.e. the sequence of machines and the
associated processing times;
• tables with alternative routings or process plans;
• tables with machine capacities;
• tables with the different costs.
Several difficulties are encountered in using and comparing the data found in the
literature:
• The case studies are small and contain no alternative routings or processes.
• The case studies are often defined by a 0-1 incidence matrix.
• The operating sequences are not always described.
• There are many existing definitions of alternative routings and processes (as
shown in section 2.1.2.2).
• The large cases are not detailed, except some cases given as incidence matrices.
• The industrial cases are complex and difficult to obtain from their authors
owing to privacy and confidentiality issues.
• Multiple criteria are not often used in the literature, and/or the cases are not
completely described with all their data.
In this study, we use many case studies found in the literature. Depending on the
data set, they are completed to be usable by the proposed algorithm. The algorithm
is adapted with a specific heuristic and cost function (without flow computation) to
test the case studies given as 0-1 incidence matrices and to compare the results. The
cases are also adapted to take the sequence of operations into account and to compute
the flow between cells. When the data are completed with a sequence of operations,
the solution significantly depends on the chosen sequence. In this last case, the result
cannot be compared with the best solution found in the literature because the data
set has changed. All basic cases found in the literature and the completed cases used
for the tests are presented in appendix F. To validate the proposed method, a large
number of ideal cases have been created; these ideal cases can be solved to create
completely independent cells.
2.5 Conclusion
In this chapter, all the notions and parameters used to solve a cell formation problem
have been explained. Depending on the resolution method, not all parameters will be
employed. The reader now has all the information necessary to understand the next
chapter.
Chapter 3
Literature Review
The objective of this thesis is to use a genetic algorithm (GA) to solve a generalized
cell formation problem (GCFP) including a multi-criteria evaluation. For this reason,
after having presented in the previous chapter all the notions and parameters used
by cell formation problems (CFP), the current chapter first proposes a classification
of the cell formation methods.
This classification makes it possible to position the GA among the other methods,
and is followed by a state of the art of GAs applied to the cell formation problem
(CFP). The next section is dedicated to multi-criteria methods. All the criteria
found in the literature are classified, and a state of the art is devoted to the use of
genetic algorithms including multiple criteria. This chapter finishes with the rationale
for developing a hybrid method to solve a generalized cell formation problem (GCFP).
A comparative table is presented to position this work relative to others.
An overview of the first methods is presented by Askin and Vakharia [13]. The
part/design-oriented methods are based on relational databases and on classification
and coding systems. Two main approaches are described below:
• the part families identification (PFI) approach begins the cell formation process
by first identifying the families of parts and then allocating machines to the
families;
• the part families/machine grouping (PF/MG) approach determines the part
families and machine groups simultaneously [19]. Burbidge defined one of the
earliest PF/MG descriptive methods for the cell formation problem [31]. His
Production Flow Analysis (PFA) method analyses the information given in route
cards to form cells. At the same time, El-Essawy developed Component Flow
Analysis (CFA) [64]. Burbidge first partitions the problem, while El-Essawy
does not.
All other resolution methods presented in this chapter are classified into process-
oriented methods and based on the production flow analysis [298].
The best known, rank order clustering (ROC), is an iteration of two phases. Firstly,
the rows of the incidence matrix are rearranged in decreasing order of their binary
value rm:

    rm = Σ_{j=1..N} 2^(N−j) · a_mj

where a is the incidence matrix and N the number of columns. Secondly, the columns
of the matrix are rearranged likewise in decreasing order of their binary value cp:

    cp = Σ_{i=1..M} 2^(M−i) · a_ip

where M is the number of rows. The advantage of this method is its low computational
cost, because the procedure is simple. However, ROC generates bad solutions even for
middle-sized problems. ROC is illustrated on an example composed of 7 machines
and 9 parts. The initial incidence matrix is presented in Table 3.1. The first step
consists in rearranging the rows in decreasing order of rm: the rows of machines 3
and 7 are put at the top, followed by the rows of machines 2, 6, 4, 5 and 1 (see
Table 3.2). Next, the columns are rearranged likewise: the columns of parts 5, 3, 9
and 1 are put at the left side, followed by the columns of parts 2, 4 and 7, then 6,
and finally 8 (see Table 3.3). The process is begun again with the rows to find the
final matrix with diagonal blocks (see Table 3.4).
Table 3.1: Initial incidence matrix of the example (7 machines, 9 parts).

Parts
Machines  1 2 3 4 5 6 7 8 9   rm
   1      . . . . 1 1 . 1 .   26
   2      . 1 . 1 . 1 1 . .  172
   3      1 . 1 . 1 . . . 1  337
   4      . 1 . 1 . . 1 . .  164
   5      . . 1 . 1 . . . 1   81
   6      . 1 . 1 . . 1 1 .  166
   7      1 . 1 . 1 . . . 1  337
  cp     17 42 21 42 85 96 42 66 21
Table 3.2: Matrix after rearranging the rows in decreasing order of rm.

Parts
Machines  1 2 3 4 5 6 7 8 9   rm
   3      1 . 1 . 1 . . . 1  337
   7      1 . 1 . 1 . . . 1  337
   2      . 1 . 1 . 1 1 . .  172
   6      . 1 . 1 . . 1 1 .  166
   4      . 1 . 1 . . 1 . .  164
   5      . . 1 . 1 . . . 1   81
   1      . . . . 1 1 . 1 .   26
  cp     96 28 98 28 99 17 28  9 98
Table 3.3: Matrix after rearranging the columns in decreasing order of cp.

Parts
Machines  5 3 9 1 2 4 7 6 8   rm
   3      1 1 1 1 . . . . .  577
   7      1 1 1 1 . . . . .  577
   2      . . . . 1 1 1 1 .  172
   6      . . . . 1 1 1 . 1  166
   4      . . . . 1 1 1 . .  164
   5      1 1 1 . . . . . .  321
   1      1 . . . . . . 1 1  266
  cp     99 98 98 96 28 28 28 17  9
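The two-phase iteration described above can be sketched as follows; this is a minimal implementation (helper names are mine) applied to the matrix of Table 3.1:

```python
def binary_value(bits):
    """Read a 0/1 sequence as a binary number, leftmost bit most significant."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def roc(M, max_iter=20):
    """Rank Order Clustering sketch: alternately sort rows, then columns,
    by decreasing binary value until the ordering is stable."""
    rows, cols = list(range(len(M))), list(range(len(M[0])))
    for _ in range(max_iter):
        changed = False
        rm = {i: binary_value([M[i][j] for j in cols]) for i in rows}
        new_rows = sorted(rows, key=lambda i: -rm[i])   # stable sort keeps ties
        if new_rows != rows:
            rows, changed = new_rows, True
        cp = {j: binary_value([M[i][j] for i in rows]) for j in cols}
        new_cols = sorted(cols, key=lambda j: -cp[j])
        if new_cols != cols:
            cols, changed = new_cols, True
        if not changed:
            break
    return rows, cols

# Incidence matrix of Table 3.1 (7 machines x 9 parts).
A = [
    [0,0,0,0,1,1,0,1,0],  # machine 1: parts 5, 6, 8
    [0,1,0,1,0,1,1,0,0],  # machine 2: parts 2, 4, 6, 7
    [1,0,1,0,1,0,0,0,1],  # machine 3: parts 1, 3, 5, 9
    [0,1,0,1,0,0,1,0,0],  # machine 4: parts 2, 4, 7
    [0,0,1,0,1,0,0,0,1],  # machine 5: parts 3, 5, 9
    [0,1,0,1,0,0,1,1,0],  # machine 6: parts 2, 4, 7, 8
    [1,0,1,0,1,0,0,0,1],  # machine 7: parts 1, 3, 5, 9
]
rows, cols = roc(A)
print("machines:", [i + 1 for i in rows])  # final row order
print("parts:   ", [j + 1 for j in cols])  # final column order
```

The first pass reproduces the rm values of Table 3.1 (e.g. machine 1 gives 26) and the procedure converges after a few passes to a block-diagonal ordering.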
The quality of the final matrix is characterised by the void elements (VE: zero value
inside the diagonal blocks) and the exceptional elements (EE: value outside the block
diagonal). A part that requires a machine in another cell creates an exceptional
element. The method tries to minimise the number of these particular elements. The
final solution of Table 3.4 contains one EE and 8 VEs.
An important criticism of resolutions based on binary incidence matrices is that
they do not take other information into account, such as production volumes, machine
costs or maximum cell size, that may significantly influence the cell formation. It is
not possible to show the routing of a part or the operation sequences in an incidence
matrix, because an entry in a machine-part incidence matrix only indicates whether
a machine is used to process a part; it expresses neither the number of times a
machine is needed, nor the order in which machines must be used. The movements
between cells depend on these routings, operation sequences and production volumes.
This information is also necessary, for instance, to compute the cost of transportation
[139, 42, 257, 256, 230].
Moreover, these methods usually require a visual inspection of the output to
determine the composition of the manufacturing cells. This visual inspection can give
much information [148], for example:
Parts
Machines  5 3 9 1 2 4 7 6 8   rm
   3      1 1 1 1 . . . . .  577
   7      1 1 1 1 . . . . .  577
   5      1 1 1 . . . . . .  321
   1      1 . . . . . . 1 1  266
   2      . . . . 1 1 1 1 .  172
   6      . . . . 1 1 1 . 1  166
   4      . . . . 1 1 1 . .  164
  cp     120 112 112 96 7 7 7 12 10
Parts
Machines  5 3 9 1 6 8 2 4 7   rm
   3      1 1 1 1 . . . . .  577
   7      1 1 1 1 . . . . .  577
   5      1 1 1 . . . . . .  321
   1      1 . . . 1 1 . . .  266
   2      . . . . 1 . 1 1 1  172
   6      . . . . . 1 1 1 1  166
   4      . . . . . . 1 1 1  164
  cp     120 112 112 96 12 10 7 7 7
• Feasibility of the cell formation. The blocks on the diagonal can be identified,
and the constraint on the maximum number of machines per cell can be checked.
been used. Most of the methods based on a similarity measure are classified in this
category.
The approaches based on similarity coefficients require a measure of similarity
between each pair of machines, parts, tools or design features. Fusions are based on
these similarities. In the literature, a large diversity of similarity coefficients is defined
between machines [161, 312], parts [34, 278], or sequences of operations [149, 243].
Several authors ([244, 245, 228]) present a complete overview of the similarity and
dissimilarity measures applicable to cellular manufacturing.
To illustrate the method on the example presented in Table 3.1, a similarity
coefficient has to be defined. The Jaccard coefficient measures the similarity between
sample sets, and is defined as the size of the intersection of the sample sets divided
by the size of their union. Applied to two machines, this coefficient computes the
number of parts that visit both machines divided by the number of parts that visit
at least one of the two machines. McAuley was the first author to apply the Jaccard
similarity coefficient, in his Single Linkage Cluster Analysis method, to solve cell
formation problems [172].
The Jaccard similarity coefficient is defined as follows:

    Sij = p / (p + q + r)                (3.1)
where p is the number of parts visiting both machines i and j, q the number of parts
visiting machine i only, and r the number of parts visiting machine j only.
In Table 3.6, the Jaccard similarity coefficient Sij is computed for each pair of
machines i and j of the example presented in Table 3.1. On the basis of this matrix,
the dendrogram can be constructed.
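A small sketch of the coefficient on machine part-sets taken from Table 3.1 (the function name is mine):

```python
def jaccard(parts_i, parts_j):
    """Jaccard similarity between two machines, each given as the set of
    parts that visit it: |intersection| / |union|."""
    both = len(parts_i & parts_j)
    either = len(parts_i | parts_j)
    return both / either if either else 0.0

# Part sets of some machines of Table 3.1.
m2, m3, m4, m5, m7 = {2, 4, 6, 7}, {1, 3, 5, 9}, {2, 4, 7}, {3, 5, 9}, {1, 3, 5, 9}
print(jaccard(m3, m7))  # 1.0  : identical part sets
print(jaccard(m3, m5))  # 0.75 : 3 common parts out of 4 distinct parts
print(jaccard(m2, m4))  # 0.75
```

These three pairs correspond to the highest similarity levels at which the clusters of the dendrogram are formed.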
During the second stage, the cells are formed by grouping machines with different
methods: Single Linkage Cluster (SLC) Analysis ([172, 37]), the Average Linkage
Method [89], the Set Merging algorithm [280], etc. Generally, this second stage
determines how the pairs with equivalent similarity levels are to be merged into
clusters of parts or machines. The designer must decide on an appropriate similarity
level to select and construct the groups. This is precisely one disadvantage of the
method.
M1 M2 M3 M4 M5 M6 M7
M1 0 0,33 0,14 0,17 0,17 0,17 0,14
M2 0,33 0 0 0,75 0 0,4 0
M3 0,14 0 0 0 0,75 0 1
M4 0,17 0,75 0 0 0 0,5 0
M5 0,17 0 0,75 0 0 0 0,75
M6 0,17 0,4 0 0,5 0 0 0
M7 0,14 0 1 0 0,75 0 0
[Dendrogram: machines 2, 4, 6 and 1 merge into cluster C1 (levels 0.75, 0.5 and
0.33); machines 3, 7 and 5 merge into cluster C2 (levels 1.0 and 0.75).]
Figure 3.1: Dendrogram based on the Jaccard similarity coefficient, illustrated for
the example of Table 3.1.
In small applications, this is not a problem since the designer can visually evaluate
the dendrogram (see Figure 3.1). However, when the application is too large to
be analysed in the form of a dendrogram, other techniques must be used, such as
minimum spanning trees.
The SLC method, applied to the previous example, groups two machines (or a
machine and a machine group, or two machine groups) with the highest similarity.
This process continues until the predefined number of machine groups has been
obtained, or until all machines have been combined into one group. This method does
not require any revised similarity computation, because the similarity between a
machine (or a group) and another group is equal to the maximum similarity between
that machine and all the machines belonging to the group. In the example presented
in Figure 3.1, machines 3 and 7 are grouped together first because the coefficient
S37 = 1. The next similarity level is 0.75: machines 2 and 4 are grouped together,
while machine 5 joins machines 3 and 7. Machines 6 and 1 are not yet inserted in a
cluster, so a new level is considered (0.5). Machine 6 is added to cluster 1 and,
finally, machine 1 completes this cluster at level 0.33.
Hierarchical clustering methods do not form machine cells and part families
simultaneously: after computing the similarity coefficients for all pairs of machines,
only machine groups are constructed. However, the part families can easily be defined
by allocating each part to the cell where it visits the maximum number of machines.
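The merging process described above can be sketched with the similarity values of Table 3.6 (a minimal single-linkage sketch; all names are mine):

```python
# Upper triangle of the similarity matrix of Table 3.6.
S = {
    (1,2): 0.33, (1,3): 0.14, (1,4): 0.17, (1,5): 0.17, (1,6): 0.17, (1,7): 0.14,
    (2,3): 0.0,  (2,4): 0.75, (2,5): 0.0,  (2,6): 0.4,  (2,7): 0.0,
    (3,4): 0.0,  (3,5): 0.75, (3,6): 0.0,  (3,7): 1.0,
    (4,5): 0.0,  (4,6): 0.5,  (4,7): 0.0,
    (5,6): 0.0,  (5,7): 0.75,
    (6,7): 0.0,
}

def sim(a, b):
    return S[(a, b)] if a < b else S[(b, a)]

def single_linkage(machines, n_groups):
    """Repeatedly merge the two clusters with the highest single-linkage
    similarity (maximum over machine pairs) until n_groups clusters remain."""
    clusters = [{m} for m in machines]
    while len(clusters) > n_groups:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = max(sim(a, b) for a in clusters[i] for b in clusters[j])
                if s > best:
                    best, pair = s, (i, j)
        i, j = pair
        merged = clusters[i] | clusters[j]
        clusters[i] = merged
        del clusters[j]
    return clusters

print(single_linkage([1, 2, 3, 4, 5, 6, 7], 2))
```

Run down to two groups, the merges happen at levels 1.0, 0.75, 0.75, 0.5 and 0.33 and reproduce the clusters C1 = {2, 4, 6, 1} and C2 = {3, 7, 5} of the dendrogram.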
a cell, capacity constraints, production volume, etc.). One possible reason for this
situation is that the technique applied to solve the problem does not easily allow
many factors to be considered. Mathematical programming is a good alternative
formulation of the cell formation problem, because such models can integrate many
important factors in the cost function and constraints [249].
Amongst these factors, we can cite operation sequences [224], alternative process
plans [146, 246, 211], cost [211], machine capacities [211], new equipment [243], lot
splitting [241], bottleneck cost [300] and exceptional elements [241].
These approaches are widely employed in the design of CMSs since they are capable
of incorporating certain design requirements into the design procedure. The general
objective is to minimise or maximise a function subject to equality and inequality
constraints. Purcheck first applied linear programming techniques to solve a group
technology problem [207].
Mathematical programming methods can be further classified into four major
groups based on the type of formulation:
2. Linear and quadratic integer programming (LQP) ([146, 153, 47, 246, 131, 277,
300, 25, 306]).
Three critical limitations can be highlighted. First, because of the resulting
non-linear form of the objective function, most methods do not simultaneously group
machines into cells and parts into families. Second, the number of machine cells
must be specified in advance. Third, the variables are constrained to integer values.
Moreover, most of these models are computationally intractable for realistically sized
problems.
[Graph: machines as nodes with edges weighted by their Jaccard similarities; the
strongly connected groups C1 and C2 correspond to the clusters of Figure 3.1.]
Figure 3.2: Graph theory approach based on the Jaccard similarity coefficient,
illustrated for the example of Table 3.1.
is somewhat difficult to specify the exact boundaries of this term. Voss gives the
following definition: “A metaheuristic is an iterative master process that guides and
modifies the operations of subordinate heuristics to efficiently produce high-quality
solutions. It may manipulate a complete (or incomplete) single solution or a collection
of solutions at each iteration. The subordinate heuristics may be high (or low) level
procedures, or a simple local search, or just a construction method” [295].
A distinction can be made between constructive heuristics and local search
heuristics. Constructive (or construction) heuristics are mostly used for creating
initial solutions in a very short time. These solutions can be used by other algorithms
which improve them iteratively, or even by exact algorithms as bounds. Constructive
heuristics usually work in a straightforward way: once they make a decision to build
up a partial solution, they never reconsider it. Because of their simple nature, their
computational complexity can usually be evaluated accurately. In many cases, not
only the complexity but also the solution quality can be estimated [104].
In contrast to constructive heuristics, which generate initial solutions for
Combinatorial Optimisation Problems (COPs), Local Search (LS) algorithms improve
existing solutions by applying local changes. A local search algorithm starts from a
candidate solution and then iteratively moves to a neighbour solution. This is only
possible if a neighbourhood relation is defined on the search space. The final goal is
to reach the optimal solution, as measured by an objective function, but there is no
guarantee that the global optimum will be found. Basic local search terminates after
reaching the first local optimum of the optimisation process; whether the global
optimum can be found or not highly depends on the initial solution. Different methods
can be cited without describing them: Variable Neighbourhood Search (VNS), Iterated
Local Search (ILS), Estimation of Distribution Algorithms (EDAs), Variable Depth
Search, Greedy Randomized Adaptive Search (GRASP), Very Large Neighbourhood
Search (VLNS), Scatter Search, and others [91].
Many kinds of metaheuristics are explained below: tabu search, algorithms based on
processes observed in nature (including genetic algorithms, simulated annealing, ant
colony optimisation, ...), etc. Neural networks, memetic algorithms, fuzzy logic,
expert systems and formal logic belong to the category of artificial intelligence.
3.1.4.1 Metaheuristics
The metaheuristics include different approaches:
• Tabu Search (TS) can be seen as a popular extension of local search [98, 163,
1, 299, 234]. TS is a heuristic approach that starts from an initial solution. At
each step, the neighbourhood of the current solution is searched to find the “best”
neighbour, which becomes the starting solution of the next step. The central
component of TS is the Tabu List (TL). The TL makes it possible to prevent
cycling and to lead the search towards “good” regions of the search space. It
contains a history of the solutions already considered in the past, or attributes
of forbidden solutions that have already been covered. The new solution is
usually chosen via a best-improvement strategy and, in contrast to classical
local search, inferior solutions are accepted as well as superior ones. There are
two ways to store the information in the tabu list: storing complete solutions,
or storing only attributes of the visited solutions or moves.
• Fuzzy logic: Fuzzy logic (FL) is a form of multi-valued logic derived from fuzzy
set theory to deal with reasoning that is approximate rather than precise. FL
provides a simple way to arrive at a definite conclusion based upon vague,
ambiguous, imprecise, noisy, or missing input information. FL's approach to
control problems mimics how a person would make decisions, only much faster.
FL was introduced in 1965 by Zadeh and has been used to solve cell formation
problems [319, 61, 209, 314, 320].
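The tabu-search loop described above (best non-tabu neighbour at each step, bounded tabu list to prevent cycling) can be sketched as follows, reusing a toy bit-string objective; this is an illustrative sketch, not a method from the cited works:

```python
from collections import deque

def tabu_search(x0, f, neighbours, tenure=5, max_iter=50):
    """Tabu search sketch: always move to the best non-tabu neighbour,
    even if it is worse than the current solution; the tabu list stores
    the most recently visited solutions."""
    x = best = x0
    tabu = deque([x0], maxlen=tenure)   # bounded memory of visited solutions
    for _ in range(max_iter):
        candidates = [y for y in neighbours(x) if y not in tabu]
        if not candidates:
            break
        x = min(candidates, key=f)      # best neighbour, inferior moves allowed
        tabu.append(x)
        if f(x) < f(best):
            best = x                    # keep the best solution seen so far
    return best

def bit_flips(x):
    """Neighbourhood: all solutions at Hamming distance 1."""
    return [x[:k] + (1 - x[k],) + x[k + 1:] for k in range(len(x))]

# Toy objective (to minimise): disagreements with a hidden target pattern.
target = (1, 0, 1, 1, 0, 1)
f = lambda x: sum(a != b for a, b in zip(x, target))
print(tabu_search((0, 0, 0, 0, 0, 0), f, bit_flips))
```

Unlike basic local search, the loop keeps moving after reaching an optimum; the tabu list forbids immediately revisiting recent solutions, while `best` records the best solution encountered.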
3.1.5 Conclusion
There is an extensive literature devoted to heuristic cell formation techniques;
however, most of them do not guarantee optimality and typically work exclusively
with a part-machine incidence matrix. While some of these clustering algorithms
offer superior results for specific applications, no single technique has been shown to
provide the best solution for a broad range of applications [122].
Genetic Algorithms are well-known metaheuristics applicable to NP-hard problems.
Like all metaheuristics, GAs are a highly promising choice for obtaining near-optimal
solutions in reasonable time. Like SA, GAs are abundantly applied in the literature
to solve cell formation problems and industrial manufacturing problems. In contrast
to other stochastic search methods, GAs explore the search space from several feasible
solutions simultaneously in order to find optimal or near-optimal solutions.
The following section is devoted to genetic algorithms, with a list of their advantages
and a state of the art of GAs applied to the cell formation problem.
• GAs do not make strong assumptions about the form of the objective function
as do many other optimisation techniques. Also, the objective function is in-
dependent of the algorithm, i.e. the stochastic decision rules. This offers the
flexibility to interchange various objective functions and to use multi-criteria
objective functions. The integer-based GA approach allows the system de-
signer freedom to substitute various types of evaluation functions, permitting
alternative designs to be generated and reviewed quickly.
• Contrary to the cluster methods requiring a visual analysis of the data and the
solution, a GA can group the parts and machines into families and cells
(depending on the coding used).
• Most clustering algorithms cannot identify all naturally occurring clusters and
find solutions with a constrained number of clusters. In the GA, the designer
can incorporate or remove constraints on the number of permissible cells or part
families selectively. Unconstrained solutions containing the naturally occurring
clusters can be generated as well as constrained solutions.
• Industrial data sets are often too large for visual methods to associate machine
cells and part families effectively. GA capabilities make practical solutions to
industrial scale problems more realistic.
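To make these points concrete, here is a deliberately small GA sketch for the 4-machine example of Table 2.3: the chromosome assigns each machine to one of two cells, each part follows the cell where it visits the most machines, and the fitness counts exceptional elements plus voids. This is an illustrative toy, not the algorithm developed in this thesis; all names and parameter values are mine:

```python
import random

# Machine-part matrix of Table 2.3 (4 machines x 5 parts).
A = [
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 1],
]
M, P, CELLS = len(A), len(A[0]), 2

def fitness(chrom):
    """Exceptional elements + voids of a machine-to-cell assignment (minimise)."""
    family = []
    for p in range(P):  # each part joins the cell where it visits most machines
        visits = [sum(1 for m in range(M) if A[m][p] and chrom[m] == c)
                  for c in range(CELLS)]
        family.append(visits.index(max(visits)))
    cost = 0
    for m in range(M):
        for p in range(P):
            inside = chrom[m] == family[p]
            if A[m][p] and not inside:
                cost += 1            # exceptional element
            elif not A[m][p] and inside:
                cost += 1            # void
    return cost

def ga(pop_size=24, generations=40, pm=0.2, seed=0):
    rng = random.Random(seed)
    pop = [tuple(rng.randrange(CELLS) for _ in range(M)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]             # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, M)
            child = list(a[:cut] + b[cut:])      # one-point crossover
            for k in range(M):
                if rng.random() < pm:
                    child[k] = rng.randrange(CELLS)   # mutation
            children.append(tuple(child))
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

Note how the objective function is completely decoupled from the search mechanism: swapping `fitness` for another evaluation (e.g. a multi-criteria cost) leaves the GA loop untouched, which is precisely the flexibility claimed above.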
Hu and Yasuda used a grouping genetic algorithm (GGA) to solve a cell formation
problem minimising the material handling cost with alternative processing routes
[106, 315].
Chan et al. proposed a method to solve the machine-part grouping and the cell
layout in two stages. Two mathematical models are solved by a GA with consideration
of the machining sequence and the production volume of each part. The objective of
the machine-part grouping problem is to minimise both the inter-cellular and
intra-cellular part movements, while the objective of the cell layout problem is to
minimise the inter-cellular part travelling distance at the macro level. In the first
problem, machine cells, part families and routing selection are considered
simultaneously in the cell formation process [40].
Tariq et al. developed a genetic algorithm to maximise the grouping efficacy for
a cell formation problem expressed by an incidence matrix [268].
Mahdavi et al. proposed a model based on a GA dealing with the minimisation
of the number of EEs and voids in cells, to achieve a higher cell-use performance.
The cell use is defined as the number of non-zero elements of the diagonal blocks
divided by the total size of the diagonal blocks of each cell [167].
3.2.3 Conclusion
As explained above, GAs have numerous advantages (like other metaheuristics).
Despite their prevalence in the literature, the advantage of using GAs has been
questioned by Hicks [99]. The “No Free Lunch” (NFL) theorem of Wolpert and
Macready suggests that, on average, no stochastic search algorithm can outperform
another (including random search) when run over all problem instances. According
to this theorem, GAs are as efficient as other metaheuristics. The NFL theorem
suggests that it is important for researchers to compare their results with those
obtained using random search; this will be done in chapter 7 [305].
The GGA developed by Falkenauer is particularly adapted to solve the specific
structure of the grouping problems [67]. He used the GGA to produce highly efficient
solutions to a variety of sample problems from the literature. More details on this
choice are given in section 5.2.
3.3.2 Methods
The purpose of all multi-criteria methods is to enrich the dominance graph, i.e. to
reduce the number of incomparabilities. When a utility function is built, the
multi-criteria problem is reduced to a single-criterion problem for which an optimal
solution exists [27].
• Goal programming:
Goal programming (GP) tries to minimise a set of deviations from pre-specified
multiple goals, which are considered simultaneously but weighted according to
their relative importance. The method reduces the problem to a single-objective
minimisation problem that can be solved by the traditional techniques of linear,
non-linear, integer, or mixed-integer programming. Shafer and Rogers presented
a goal programming model dealing with three objectives: minimising
inter-cellular movements, minimising the investment in new equipment, and
maintaining an acceptable use level. The proposed goal programming model is
combined with the p-median method to specify the part families and with the
travelling salesman method to identify the optimal sequence of parts. They also
applied heuristics to solve real-world problems [243].
maximised. This new method also has the advantage of providing several solutions,
so that the final choice can be made by the decision makers. Mansouri et al.
developed a comprehensive mathematical model for the multi-objective cell formation
problem that considers several criteria simultaneously: minimisation of inter-cellular
part movements, of the total cost of machine duplication and subcontracting, of the
overall machine under-use and of the deviation among cell uses. Solutions were
generated with a Pareto-ranking technique called XGA [170]. Bajestani et al.
developed a new multi-objective scatter search (MOSS) for finding a locally
Pareto-optimal frontier. They solved a multi-objective dynamic cell formation
problem where the total cell load variation and the sum of the miscellaneous costs
(machine cost, inter-cellular material handling cost, and machine relocation cost)
are minimised simultaneously [16].
1. To identify the best alternative or to select a limited set of the best alter-
natives (choice strategy).
2. To construct a rank-ordering of the alternatives from the best to the worst
ones (ranking strategy).
3. To classify or sort the alternatives into predefined homogeneous groups
(classification strategy).
The use of multiple objectives increases the complexity of the problem, limiting
the application of traditional optimisation methodologies to small-sized problems.
In addition, these methodologies (analytic and heuristics) do not provide a natural
mechanism for the simultaneous generation of multiple solutions.
Based on their reviews, Mansouri et al. and Dimopoulos clearly state that
the majority of reported methods try to unify the various objectives in the form of
a single objective [170, 58]. The final result of such an approach is a compromise solution, but non-dominance is not guaranteed. Mansouri et al. proposed to explore Pareto optimisation through the simultaneous consideration of various objectives, providing the decision-maker with a set of non-dominated solutions in a reasonable computation time.
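A set of non-dominated solutions can be extracted with a straightforward pairwise filter. The sketch below is illustrative (not any cited author's implementation) and assumes all objectives are to be minimised:

```python
def non_dominated(solutions):
    """Return the non-dominated subset of a list of objective vectors.

    All objectives are assumed to be minimised. A solution a is dominated
    when some b is no worse on every objective and strictly better on one.
    """
    front = []
    for a in solutions:
        dominated = False
        for b in solutions:
            if b != a and all(x <= y for x, y in zip(b, a)) \
                      and any(x < y for x, y in zip(b, a)):
                dominated = True
                break
        if not dominated and a not in front:
            front.append(a)
    return front

# Two objectives, e.g. inter-cellular moves and machine duplication cost:
candidates = [(3, 10), (2, 12), (4, 9), (3, 11), (5, 8)]
print(non_dominated(candidates))  # → [(3, 10), (2, 12), (4, 9), (5, 8)]
```

Here (3, 11) is eliminated because (3, 10) is at least as good on both objectives and strictly better on the second.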
3.3.3 Criteria
Mansouri et al. presented a complete review of the modern approach to multi-criteria cell design. They analysed a large panel of criteria used for decision-making and compared the inputs, criteria, solution approaches and outputs across selected models [170]. In the following section, the different criteria found in the cell design literature are classified. Authors using at least two criteria are cited as examples for each category.
3.3.3.1 Cost
All costs are to be minimised: fixed machine costs, material handling costs, work-in-process inventory costs, production cycle inventory costs, variable production costs and setup costs [8, 55], the cost associated with machine duplication [8, 55] or investment in new equipment [2, 243], the cost associated with part subcontracting [144], the material handling cost (inter-cellular or intra-cellular movements of parts) [55], the operation or processing cost [2], the bottleneck cost [144, 300], the spatial cost / space usage, the annual amortisation cost [163], the penalty cost for operations that need to be performed in cells other than the ones to which they have been assigned, etc.
3.3.3.2 Similarities
3.3.3.3 Movements
The movements between machines and cells can be quantified when the sequence of operations is used in the resolution. In this case, the grouping objectives can be the maximisation of cell independence by minimising inter-cellular movements [162, 50] and/or maximising intra-cellular movements [162]. To simplify the transport of parts between cells, unidirectional movements between cells can be preferred (a criterion to maximise).
On the contrary, if the sequence of operations is not taken into account (e.g. with an incidence matrix), the maximisation of cell independence is obtained by minimising the number of exceptional elements [142, 3, 171, 241] and/or minimising the number of void elements [142].
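The two counts can be sketched from an incidence matrix and a candidate cell assignment (a minimal illustration; the variable names are ours, not the thesis notation):

```python
def exceptional_and_voids(incidence, machine_cell, part_cell):
    """Count exceptional elements and voids in a machine-part incidence matrix.

    incidence[m][p] == 1 when part p needs machine m. An exceptional element
    is a 1 outside the diagonal blocks (part processed outside its cell);
    a void is a 0 inside a block (idle machine-part pairing within a cell).
    """
    exceptional = voids = 0
    for m, row in enumerate(incidence):
        for p, v in enumerate(row):
            same_cell = machine_cell[m] == part_cell[p]
            if v == 1 and not same_cell:
                exceptional += 1
            elif v == 0 and same_cell:
                voids += 1
    return exceptional, voids

# 3 machines x 4 parts, two cells: machines {0,1} with parts {0,1},
# machine {2} with parts {2,3}
A = [[1, 1, 0, 0],
     [1, 0, 0, 1],
     [0, 0, 1, 1]]
print(exceptional_and_voids(A, [0, 0, 1], [0, 0, 1, 1]))  # → (1, 1)
```

In the example, part 3 visiting machine 1 is the single exceptional element, and the unused pairing of machine 1 with part 1 is the single void.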
3.3.3.5 Machines
When the authors work with alternative routings, where the choice of a machine has to be made for each operation, several criteria can be minimised: the setup times (through part sequencing), the deviation of setup times inside cells, the deviation of operating costs inside cells, or the deviation of machine use. The objective can also be to maintain an acceptable machine use level, by minimising the violation of the lower machine-use limit and the overstepping of the upper limit [250, 55].
3.3.3.6 Flexibility
In cell formation problems, authors deal with four types of flexibility to maximise:
• the machine type flexibility corresponds to the ability of the machines grouped
into cells to process a large number of distinct operation types [240];
• the routing flexibility is the ability of the cell system to process parts completely
in multiple cells [11, 224, 50];
• the part volume flexibility corresponds to the ability of the cell system to deal
with volume changes in the current part mix (it is computed as the maximum
percentage increase in volume for all parts that can be handled without chang-
ing the system configuration [11]);
• the part mix flexibility represents the ability of the cell system to handle different
product mixes with minimum disruption [225].
3.3.3.7 Conclusion
A large diversity of criteria exists, and their use depends on the multi-criteria method and on the resolution method for the cell formation problem. Similarity coefficients permit grouping similar elements together. All costs can be combined into a weighted sum for mathematical programming methods; however, these data are not always known. The movement evaluation and the workload balance are often used at the same time within a multi-criteria decision-aid method. If the objective is the design of a new factory with as little traffic as possible between the cells, the use of movements and workload is particularly pertinent. These different criteria can be improved by the use of alternative routings and process plans. Flexibility of
• Gupta et al. developed a GA for the machine-part grouping problem with more
than one objective, which attempts to minimise total inter-cellular and intra-cellular
part movements as well as cell load variation [90].
• Venugopal and Narendran considered similar objectives, but they also maximise
the machine similarity [283].
• Gravel et al. proposed a bi-criteria model solved by a GA to find solutions
to cell formation problems with the presence of multiple routings [84]. They
worked with two objectives functions using a weighted-sum approach: minimi-
sation of the volume of inter-cellular movements and balancing of load among
machines in a cell.
• Moon and Gen also used a sum approach (minimising the sum of the ma-
chining and duplication costs) to evaluate the solutions in their proposed GA
[180].
• A study dealing with multiple objectives, sequencing of operations and multi-
ple routes was made by Zhao and Wu. They presented a GA using special
operators to group machines into manufacturing cells. Several objectives have
been used: minimising the cost due to inter-cellular and intra-cellular part
movements; minimising the total cell load variation; and minimising excep-
tional elements. An evolutionary multi-objective optimisation algorithm with
typical real-valued representation attempted the simultaneous optimisation of
objectives through the use of a weighted-sum approach [322].
• Onwubolu and Mutingi used a GA to simultaneously group machines and
parts into cells by minimising the inter-cellular movements and the cell load
variation. The designer is allowed to introduce constraints in terms of the number
of cells required a priori and the lower and upper bounds on cell size [196].
• The Machine-Part Cell Formation (MPCF) problem addressed by Filho and Lorena
is modelled as a bi-objective problem that guides the construction of feasible as-
signments of machines and parts to specified clusters, and provides an evaluation
of solutions [69].
• Mansouri et al. employed a multi-objective GA called XGA to provide the
decision-makers with Pareto optimal solutions. Their model aims to decide
which parts to subcontract and which machines to duplicate in a CMS where
some exceptional elements exist. The objectives are: minimising inter-
cellular movements; minimising the total cost of machine duplication and part
subcontracting; minimising the under-use of machines in the system; and min-
imising the unbalance of the workloads among the cells [171].
• Yasuda et al. presented a grouping GA to solve the multi-objective cell for-
mation problem. Processing time, production requirements, and available time
on machine in a given period have been considered for two objectives: inter-
cellular movements and cell load variation. The number of cells is not known
in advance [315].
• Dimopoulos developed GP-SLCA, a hybrid genetic programming algorithm,
for the solution of multi-objective cell formation problems. This methodology
provides the decision maker with a range of non-dominated solutions instead of
a single compromise solution. The methodology is illustrated on a large-sized
test problem taken from the literature [59].
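Several of the models above minimise the cell load variation. A minimal sketch, assuming the measure is taken as the sum of squared deviations of machine workloads from the mean workload of their cell (one common variant; the cited papers differ in detail):

```python
def cell_load_variation(workload, machine_cell):
    """Sum of squared deviations of each machine's workload from the mean
    workload of its cell -- a balance criterion used in several models."""
    cells = {}
    for m, c in enumerate(machine_cell):
        cells.setdefault(c, []).append(workload[m])
    clv = 0.0
    for loads in cells.values():
        mean = sum(loads) / len(loads)
        clv += sum((w - mean) ** 2 for w in loads)
    return clv

# Two cells: machines {0,1} and {2,3}
print(cell_load_variation([4.0, 6.0, 5.0, 5.0], [0, 0, 1, 1]))  # → 2.0
```

The perfectly balanced second cell contributes nothing; the whole variation comes from the 4/6 imbalance in the first cell.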
3.3.5 Conclusion
As explained above, it is recommended to include several criteria in the evaluation process of a methodology. Many criteria exist, and it is interesting to create a method whose criteria can be easily changed and adapted according to the designer's wishes. If the multi-criteria decision-aid method is included in the evaluation process of the genetic algorithm, the method must order the solutions so as to compare them and apply the genetic operators. Because the method is called each time a population must be evaluated, it is important for it to be fast. At the end of the resolution process, several solutions can be presented to the designer, leaving him the final choice.
In this study, the Multi-Criteria Decision-Aid (MCDA) method used is Promethee, which responds to this demand. This method, completely described in appendix A, will be integrated into the Genetic Algorithm. The efficiency of this method has been proved by Rekiek and Francq [217, 71]. The important feature proposed by Rekiek is the integration of the MCDA method in the evaluation phase of the Grouping Genetic Algorithm (Gga). Indeed, the Gga enables scanning all the interesting parts of the search space. Rekiek used this method to solve an assembly line design problem. The same integrated method has been used and validated by Francq for a structured search.
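The ranking step can be sketched with Promethee II net flows using the simplest ("usual") preference function. This is an illustration only; the full method, with its richer preference functions and parameters, is the one described in appendix A:

```python
def promethee_net_flows(actions, weights, maximise):
    """Promethee II net outranking flows with the 'usual' preference
    function (preference 1 as soon as one action beats another on a
    criterion, 0 otherwise). actions: list of criterion vectors."""
    n = len(actions)

    def pref(a, b):  # aggregated preference of a over b
        total = 0.0
        for k, w in enumerate(weights):
            d = a[k] - b[k] if maximise[k] else b[k] - a[k]
            total += w if d > 0 else 0.0
        return total / sum(weights)

    flows = []
    for i, a in enumerate(actions):
        phi = sum(pref(a, b) - pref(b, a)
                  for j, b in enumerate(actions) if j != i)
        flows.append(phi / (n - 1))
    return flows

# Three solutions scored on (inter-cell moves: minimise, flexibility: maximise)
acts = [(10, 0.8), (8, 0.5), (12, 0.9)]
flows = promethee_net_flows(acts, weights=[0.6, 0.4], maximise=[False, True])
best = max(range(len(acts)), key=lambda i: flows[i])
print(best)  # → 1
```

Solution 1 wins because the heavily weighted movement criterion favours it in every pairwise comparison; the net flows give the complete order needed by the genetic operators.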
The first and the second type, respectively, involve alternative machines and alternative sequences, but the operations to be performed are fixed. Allowing these flexibilities can provide better performance in mean flow time, throughput, and machine use [159]. We can draw the correspondence between these three flexibilities and the definitions of alternativity presented in section 2.1.2.2. The operation flexibility corresponds to the definition of a machine type containing the set of machines able to achieve the operation, i.e. the alternative process routes. The two other ones require the use of alternative process plans. If there are several ways to design a product, different sequences of distinct operations can be proposed. Moreover, if the precedence constraints in the product design allow it, several sequences of operations can be defined by switching the order of operations. The use of alternative process plans does not distinguish between sequencing and processing flexibility. As explained in section 2.1.2.2, these routing and process alternativities increase the number of ways to form manufacturing cells.
The choice of the appropriate process for each part and the allocation of operations (belonging to the chosen processes) to specific machines represent a specific resource planning problem. The grouping of machines and parts into cells represents the cell formation problem. With both problems, a question arises: how should these two groupings be solved, and in which order? The resolution can be sequential, iterative or simultaneous. The sequential resolution finds a solution for the second problem based on the result found for the first problem, or conversely. When two problems are inter-dependent, a good solution for the first problem does not imply a good solution for the second one. To avoid this drawback of the sequential resolution, the iterative resolution is based on several iterations of the sequential resolution; but in this case, the second solution still depends significantly on the first solution. Finally, the simultaneous resolution optimises both problems together at each iteration, but it is more complex to solve.
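The iterative strategy can be sketched as an alternation between the two sub-problems until a fixed point is reached. The toy solvers below are purely illustrative stand-ins for the resource planning and cell formation solvers:

```python
def iterative_resolution(solve_routing, solve_cells, routing0, max_iter=20):
    """Skeleton of the iterative strategy: alternate between the resource
    planning sub-problem and the cell formation sub-problem, feeding each
    solution to the other, until neither changes any more."""
    routing = routing0
    cells = solve_cells(routing)
    for _ in range(max_iter):
        new_routing = solve_routing(cells)
        new_cells = solve_cells(new_routing)
        if new_routing == routing and new_cells == cells:
            break  # fixed point: alternating yields no further change
        routing, cells = new_routing, new_cells
    return routing, cells

# Toy illustration (hypothetical data): "cells" are the distinct machines
# used, and the routing solver then routes every part through the first cell.
def toy_cells(routing):
    return tuple(sorted(set(routing)))

def toy_routing(cells):
    return tuple(cells[0] for _ in range(3))

print(iterative_resolution(toy_routing, toy_cells, (0, 1, 0)))
# → ((0, 0, 0), (0,))
```

The skeleton makes the drawback visible: each sub-solution is only optimal given the other, which is exactly why the simultaneous strategy is preferred later in this work.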
The cell formation problem considering multiple process routings is also called the Generalised Cell Formation Problem (GCFP) or Generalised Group Technology Problem (GGTP) [308]. As explained above, the Generalised Cell Formation Problem can be decomposed into two distinct sub-problems: the Resource Planning Problem and the Cell Formation Problem. Many hybrid methods to solve the two sub-problems are found in the literature. Different authors have discussed the necessity of considering alternative process plans in designing manufacturing cells [146, 87, 213, 224].
The methods described in this section are separated into three categories depending on the routings and processes used.
model is limited to alternative routings and the authors do not consider the operation
sequence.
Solimanpur et al. developed a multi-objective integer programming model for
the design of a cellular manufacturing system. The objectives considered are the
maximisation of total similarity between parts, the minimisation of the total pro-
cessing cost, the minimisation of the total processing time and the minimisation of
the total investment needed for the acquisition of machines. An EA-based method-
ology was proposed for the solution of the computationally intractable mathematical
programming model [252].
Defersha and Chen proposed a comprehensive mathematical model for the de-
sign of cellular manufacturing system based on tooling requirements of the parts and
tooling available on the machines. Their model includes dynamic cell configuration,
alternative routings, lot splitting, sequence of operations, multiple units of identical
machines, machine capacity, workload balancing among cells, operation cost, cost of
subcontracting part processing, tool consumption cost, setup cost, cell size limits,
and machine adjacency constraints. They minimise the sum of all the aforementioned costs [55].
Jeon and Leep developed a methodology which can be used to form manufac-
turing cells using a new similarity coefficient based on the number of alternative
routes during machine failure, and the demand changes for multiple periods. The
methodology is divided into two phases. A new similarity coefficient, which considers the number of available alternative routes during machine failure, is suggested in phase 1. The primary objective of phase 1 is to identify part families based on the new similarity coefficient by using a genetic algorithm. A new
methodology for the cell formation, which considers the scheduling and operational
aspects in cell design under demand changes, is introduced in phase 2. Machines
are assigned to part families by using an optimisation technique. This optimisation
technique employs sequential and simultaneous mixed integer programming models
for a given period to minimise the total costs which are related to the scheduling and
operational aspects [118].
Wu et al. developed a hybrid SA algorithm with a GA mutation operator for the
cell formation problem considering alternative process routings (routing flexibility)
[311]. The inter-cellular movements are minimised. With the use of an incidence
matrix, this criterion is expressed by maximising the grouping efficacy. They used
the similarity coefficient defined by Won and Kim to initialise the population with
a specific choice of routings [308]. In their algorithm, the number of cells resulting
in the best objective values is generated automatically. To preserve flexibility, users
are permitted to specify the preferred number of cells.
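The grouping efficacy mentioned here is the standard measure commonly attributed to Kumar and Chandrasekharan: with $e$ the total number of operations (ones in the incidence matrix), $e_{\mathrm{out}}$ the exceptional elements and $e_{\mathrm{v}}$ the voids,

```latex
\Gamma \;=\; \frac{e - e_{\mathrm{out}}}{e + e_{\mathrm{v}}}, \qquad 0 \le \Gamma \le 1,
```

so that reducing either the exceptional elements or the voids increases $\Gamma$, with $\Gamma = 1$ for a perfect block-diagonal structure.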
The methods presented in this section deal with processing flexibility by using several enumerated routings or sequences of machines.
Kusiak proposed a p-median model to form a fixed number of part families based on a similarity coefficient between process plans. Only one process is selected for each part in this phase. Machines are then assigned to part families as in the single-process-plan case, by minimising the number of inter-cellular movements [146].
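A similarity coefficient between process plans can be illustrated with the classic Jaccard measure on the sets of machines visited. This is an illustration only; [146] defines its own coefficient between process plans:

```python
def jaccard_similarity(plan_a, plan_b):
    """Jaccard similarity between two process plans, each given as the
    set of machines it visits: |intersection| / |union|."""
    a, b = set(plan_a), set(plan_b)
    return len(a & b) / len(a | b)

# Plans sharing machines 2 and 3 out of four distinct machines overall:
print(jaccard_similarity({1, 2, 3}, {2, 3, 4}))  # → 0.5
```

Such pairwise similarities are exactly what a p-median model maximises within families: parts are assigned to the median plan they resemble most.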
The method has the advantage of fast and accurate computations and is efficient for
large-scale industrial problems.
techniques are efficient. For large problem sizes, the computational time increases rapidly for the first method, while the quality of the solution is really poor for the second method [169].
Nsakanda et al. presented a comprehensive model to solve the machine allocation problem and the part routing problem. Multiple process plans for each part and multiple routing alternatives for each of those process plans are considered. The part demands and machine capacities are taken into account. They propose a solution methodology based on a combination of a genetic algorithm and large-scale optimisation techniques. The objective is to minimise the total intra-cellular and inter-cellular movement costs and the total outsourcing cost (expressed as a linear sum). It should be noted that the methodology provides solutions that are only local optima, as strict optimality cannot be guaranteed [193].
1. Data
2. Flexibility
3. Resolution methods
Table 3.7: Attributes used in the present study and in a sample of recently published
articles.
By taking the production volume into account, it is possible to deal with capacitated cell formation and to use the machine capacities as constraints. The use of operating times allows computing a correct allocation of operations to machines in terms of time. The operation sequences permit the evaluation of the movements between machines and cells.
Dealing with routing, process and sequencing flexibilities allows the design of independent manufacturing cells without much additional investment, as explained in section 3.4.1. Allowing the coexistence of alternative routings implies working with lot splitting.
The effectiveness of the algorithm significantly depends on the resolution strategy used (hierarchical, iterative or simultaneous). Indeed, the allocation of operations to machines and the cell formation problem can be solved successively, in one order or the other. Iterations can be applied to the previous strategy to improve the solution. Here, the methods based on the simultaneous strategy are preferred, to make the solution evolve on the basis of both problems. To create a method that is flexible in terms of evaluation, it is interesting to associate it with a multi-criteria evaluation. It is also important that the method can solve large-scale problems, so as to be applicable to an industrial case.
In the comparative table, a cell with the symbol “X” means that the authors deal with the feature, while the symbol “−” means that the feature is not used. If the cell is empty, the authors do not specify this feature. If there is a 0 in the cell for the last criterion (LS), the author(s) enumerate(s) all the routings and/or processes to deal with all alternatives (for instance, in the incidence matrix or a similarity coefficient). The enumeration of all solutions implies that the method tends to be very demanding in computational resources. For this last feature, it can reasonably be assumed that the method is not applicable to large-scale problems.
Many papers treat the cell formation problem and propose new methods to solve it with a multi-criteria evaluation adaptable to large-scale problems. However, papers dealing with routing flexibility and relevant to industrial problems are infrequent. Literature proposing solutions to the Generalised Cell Formation Problem with both routing flexibility and process flexibility is really lacking. The reason is that the complexity of the problem increases rapidly with the use of alternative machines and alternative processes. The size of the search space is computed in section 4.4.1.
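The growth is already severe for the machine-grouping part alone: the number of ways to partition nM machines into at most nC non-empty cells is a sum of Stirling numbers of the second kind. A minimal sketch of this count (illustrative; the full search-space computation of section 4.4.1 also covers routings and processes):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: ways to partition n items
    into exactly k non-empty unlabelled groups."""
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    # Either item n joins one of the k groups, or it founds a new one.
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def grouping_count(n_machines, max_cells):
    """Number of ways to partition the machines into at most max_cells cells."""
    return sum(stirling2(n_machines, k) for k in range(1, max_cells + 1))

print(grouping_count(10, 3))  # → 9842
```

Ten machines in at most three cells already admit nearly ten thousand groupings, before any routing or process choice multiplies the count.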
3.5 Conclusion
In this chapter, different resolution methods applied to the cell formation problem have been presented. The advantages of the genetic algorithm compared to other methods have been highlighted. One of these advantages is the independence of the evaluation process from the resolution process, which permits the use of a multi-criteria decision-aid method in the evaluation phase without difficulty. The Promethee method has the ability to insert, modify or re-weight each criterion very easily. A great number of sequential or iterative methods have already been proposed. A modification of either problem (Resource Planning or Cell Formation Problem) implies a reconstruction of the associated problem and a modification of the evaluation for both problems. Both grouping problems are interdependent, and the resolution should be simultaneous to address and improve both problems together.
To be realistic and adaptable to industrial case studies, it is important to take into account the capacity constraints of the machines. This implies the use of the volume
of production and the processing time for each operation. To evaluate the traffic between the created cells correctly, the sequence of operations has to be taken into account. If multiple processes and multiple routings can be planned for all parts, the possibility of creating independent cells increases.
Based on these data and with regard to Table 3.7, the model presented in this study provides a larger coverage of the attributes than the individual papers.
This tool must be seen as a design tool to transform a job shop structure into a cellular manufacturing structure. The working environment is static, i.e. the data are constant and aggregated over a long-term period. Since the modification of the structure of a factory is a long and costly process, the new arrangement must remain valid for a long period.
Finally, the feature of “coexistence of alternative routings” will not be treated in this study. Indeed, during the cell formation, nobody can say which batches of parts have to be split, nor in which proportions. It is only in a dynamic environment and over a short-term period (for resource planning and scheduling) that alternative routings can coexist by splitting the demand. In this way, a part can be produced along an alternative route if a machine on the selected route breaks down or if the deadline imposes a simultaneous production of two batches of parts. For this reason, the constant demand cannot be split, and the coexistence of alternative routings is not taken into account during the resolution, but in the evaluation only.
Chapter 4
Problem Formulation
4.1 Assumptions
The mathematical model presented in this section is developed under several assumptions. The following assumptions are separated into those relative to the usual data needed to solve the problem and those dependent on the chosen resolution approach. In the latter case, an explanation is supplied to justify the choice.
• each machine type can perform one or more operations (machine flexibility);
• each operation is associated with a single machine type, whose machines can
perform it with different times (routing flexibility);
• the operating times for all operations on different machine types are known;
• the setup times are not considered individually but are comprised in the
operating times;
• the number of cells is determined by the user and fixed during the resolution;
In this study, the physical grouping of machines into independent cells is made in a static environment over a long fixed period. This assumption contrasts with the dynamic environment, where the machine grouping is achieved over several periods with a demand changing between these periods. As explained in section 2, in the dynamic environment the cells change from period to period and can be adapted to the changing demands. In this study, the assumption of a static environment is based on the fact that the design of a factory is a long and expensive process. The cells are created and formed once for a set of data. The future data (new orders, new production parameters) must be adapted to the new cellular environment. This assumption implies that, to create the cells, all production data associated with the parts (demands, batches) and machines (total use) are determined and known for the whole period. The machines are grouped on the basis of these aggregated data. In this study, the real setup times, inventories and machine breakdowns are considered during the planning and scheduling phases, but not in the cell design. However, to take the setup times into account, the part demand is slightly increased by the user.
In the proposed approach, the user must fix the maximum number of cells and the maximum number of machines per cell. This assumption is generally adopted. Several authors do not use this constraint and allow the search for independent cells regardless of their number. However, in that case, the problem is not as large, and the search space is reduced by not using all the previous data (alternative routings and processes, for instance).
The number of machines must be known a priori. These machines correspond to the set of machines existing during the studied period. If the user wants to find solutions with more machines, these machines must be created a priori with their capability and their availability. They are differentiated from the existing ones by a specific parameter, so that the existing machines are used first.
In this static environment, the part demands could be split to use different processes simultaneously for different batches of a part. The choice to forbid lot splitting and simultaneous processes is made for reasons of complexity.
The alternative routings and processes are used to find the preferential routings optimising the cell formation. As shown in the comparative Table 3.7, the complete alternative processes and routings are not often used in the literature. An explanation can be found in the complexity of the problem when these alternatives are considered. However, when a part is designed, a precedence graph is often used; in this case, the only known information is the precedence constraints between the operations needed to achieve the part. The use of alternative processes and routings permits taking this information into account.
These alternative routings are used to create independent cells, not to construct flexible cells allowing a reorientation in case of problems. To construct flexible cells permitting the reorientation of parts inside the cells, each cell must contain more than one machine of each type necessary to produce the part families. In this case, a part can always be reoriented to another machine in the same cell if the dedicated machine breaks down. To simplify the management of the production system, we must instead define small independent cells dedicated to part families. These two objectives are contradictory.
The dynamic flexibility will be evaluated at the final stage, on the final solution, and not in the cell formation stage. To take the dynamic aspect into account, a degree of flexibility is imposed on each machine by introducing the high use limit (see section 4.2.3.1). This is the only level of dynamic flexibility found in the resolution.
4.2 Notations
4.2.1 Indices
nT Number of machine types.
nM Number of machines.
nP Number of parts.
nC Number of cells.
For a given manufacturing system, a set of nT machine types {T1 , T2 , ..., TnT }
is defined. The machine type represents a capability, i.e. the ability of a machine type to perform a given set of operations. For each machine type, different default
characteristics are defined (such as availability) and a maximum number of machines
is specified.
Consider a set of nM available machines {M1 , M2 , ..., MnM } that will be used to form manufacturing cells. Each machine m is unique and characterised by a machine capacity dm . This parameter represents the amount of time that a machine of this type is available for production; this value takes possible failures into account. Each machine belongs to at least one machine type. This membership characterises the ability of the machine to achieve all operations of this type. The machine can also belong to several types if it is a multi-functional machine.
We define the part mix as a set of nP parts {P1 , P2 , ..., PnP } to be produced during the studied period. The quantity qi of each part in the part mix to be produced is defined over the concerned period.
Each part i is defined by a set of npri processes (the kth process for part i is a sequence of nOik operations {Oik1 , Oik2 , ..., OiknOik }). The operating sequence is an ordered list of operations that must be performed on the part. Each operation is not tied to one given machine, but is defined as an operation type that can be accomplished on one machine type Tt (lathe, grinding machine, etc.). So each operation can be performed on all machines belonging to its type.
The operating time is the time required by a machine or a machine type to perform an operation on a part. This operating time can be fixed for the considered machine type (average operating time, tikp ) and applied to all machines belonging to this machine type. This use is practical if all machines of a type are identical (same characteristics). But the operating time can also be particularised to a specific machine (operating time, tikpm ). If there is no particular operating time for machine m, the parameter tikpm is equal to the default value tikp .
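The defaulting rule can be sketched as a dictionary lookup with fallback. The names and data below are illustrative, not the thesis notation rendered in code:

```python
def operating_time(t_particular, t_default, part, process, op, machine):
    """Look up the operating time of operation op of the given process of a
    part on a given machine, falling back to the machine-type default t_ikp
    when no machine-specific time t_ikpm is recorded."""
    key = (part, process, op, machine)
    if key in t_particular:
        return t_particular[key]  # may be 0 to forbid this machine
    return t_default[(part, process, op)]

t_default = {(0, 0, 0): 5.0}           # t_ikp: default time on the machine type
t_particular = {(0, 0, 0, 2): 4.0,     # machine 2 is faster than the default
                (0, 0, 0, 3): 0.0}     # machine 3 is forbidden for this operation
print(operating_time(t_particular, t_default, 0, 0, 0, 1))  # → 5.0
print(operating_time(t_particular, t_default, 0, 0, 0, 3))  # → 0.0
```

Machine 1 has no particular entry, so it inherits the type default; the explicit zero for machine 3 encodes the interdiction described below.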
This possibility permits forbidding the use of a machine belonging to the type of the operation; in this case, the particular operating time (tikpm ) is set to zero. Moreover, this notation allows forcing the use of a specific machine that does not belong to the required type. This option is necessary to apply the approach to all case studies found in the literature, where different definitions of process and routing exist.
4.2.2.3 Cells
nc Maximum number of cells.
To group machines into cells, the user must define the maximum number of cells (nc ) allowed in the system. This parameter is a function of fixed objectives such as, for example, the available space. If the user has no a priori idea of the number of cells, he has to be able to test different alternative solutions and verify the impact
of this number on the final configuration and on the evaluation criteria. The second important parameter to group machines into cells is the maximum size of each cell: the maximum number of machines per cell (mUB ) must also be defined by the user.
Low and high use limits can be defined for each machine to take the user's preferences into account. As explained in section 2.1.3, many authors consider a fixed number of machines for each machine type. In this approach, a machine type regroups a set of machines that have the capability to achieve an operation of the considered type. All these machines can be completely identical or can differ in terms of productivity, cost, etc. It is more realistic to consider a set of different machines, because the machines are not all acquired at the same time but result from a long period of evolution and changes. All machines used in the actual job shop production are considered with their characteristics.
The default data of each machine type make it possible to specify a characteristic once for all machines belonging to the type. Indeed, when a value is needed for a machine and this value is not defined for this specific machine, the value used is the one of its machine type. Depending on the maximum number of machines defined for each machine type, the program generates the exact number of allowed machines. Each of these created machines is characterised only by its machine type and all the data of this type. During the resolution, not all machines are necessarily used, and their uses are adjusted to improve the quality of the solution of the cell formation problem.
To treat a product needing more than one machine to achieve a particular operation of its process, the proposed approach does not consider that two machines are used to achieve one operation. When this is the case in reality, the decision is made either to use another machine grouped in the same cell or to work several shifts. As the part demand is not split, the availability of a machine belonging to the required type is doubled or tripled so that the operation can be achieved.
To circumvent the difficulty of data collection, costs are taken into account through a lower use limit (ll_m) for each machine M_m. It is a fraction of the machine availability d_m. The limit will be set near 100% if the machine is expensive and its use is mandatory; in the opposite case, this limit will be set lower than 50%.
[Figure 4.1: three situations of the machine use u_m relative to the limits d_m·ll_m and d_m·hl_m on the availability d_m.]
A higher use limit (hl_m) is also defined for each machine. It is used to impose some flexibility on the system: if there is a failure on a machine, the production can be reoriented to non-fully loaded resources. If the user wants a high flexibility, he or she will set hl_m at a relatively low value (70%, for instance).
These two limits are not considered as hard constraints and may not be respected, as illustrated in Figure 4.1. u_m is the actual machine use, and a violation of these limits is penalised in the evaluation of a proposed solution. The first drawing represents the ideal case, where the use lies between the low and high limits. The two following cases are less preferable: the first represents a situation where the machine has less flexibility than desired, and the second illustrates the case of a non-profitable machine, which will have a high non-use cost. If the user does not want to work with these two limits, the default values are used (hl_m = 100%, ll_m = 0%).
The batch size of a part represents the quantity of parts to be transferred. This
parameter is used to quantify the transport of the parts (in batches) between machines
or cells. By default, the value is equal to the part demand.
To quantify the difficulty of transporting a part between two machines or two cells, we introduce a transport factor based on Muther's work [191]. This factor can be specified for each part and must capture all difficulties relative to the transport, such as the size, shape, state, quantity, fragility, price, etc. The factor lies between 0 and 1: the higher the factor, the greater the difficulty or the risk of moving the part.
To orient the grouping, the user can use a similarity coefficient (s_ij) between parts i and j. The approach uses a similarity coefficient between parts but does not specify which coefficient must be used, so the user can define his or her specific similarity coefficient depending on the desired results (based on the process, shape, size, etc.). This coefficient is used to evaluate the similarity between all parts assigned to a machine.
4.2.3.3 Cells
lb_c Lower bound of use in cell c.
ub_c Upper bound of use in cell c.
An upper bound of use can be defined specifically for each cell. By default, this value is equal to the parameter m_UB characterising all cells. In the same way, a lower bound of use can be defined differently for each cell. The default value of lb_c is equal to zero.
Both parameters lb_c and ub_c characterise the size of the cell. By default, this size corresponds to the number of machines inside the cell. However, the maximum and minimum size of each cell can be defined in different manners:
• the maximum number of machines per cell;
• the maximum number of employees dedicated to each cell;
• the maximum space available for each cell.
According to his or her wish, the user will choose the size measure adapted to the situation.
4.3.2 Constraints
The constraints imposed in the model are presented in three categories: constraints on the data set and the input parameters, constraints during the treatment, and constraints on the final solution.
1. The cell size and the number of cells introduced by the user must be adequate to contain all machines:
n_C · m_UB ≥ n_M   (4.2)
2. There must be sufficient machine capacity to produce the specified part mix.
3. The cell size must be specified. Upper and lower bounds can be used instead of a specific number. These parameters can be different for each cell.
4. The number of cells n_C in the system must be specified a priori. If the user does not know exactly how many cells (and of which size) he or she wants, the algorithm can propose different grouping options by varying the number of cells between n_C − 2 and n_C + 2.
if (y_{ikpm} = 0) ⇒ t*_{ikpm} = 0   ∀ i, k, p, m
if (y_{ikpm} = 1 and t_{ikpm} = 0) ⇒ t*_{ikpm} = t_{ikp} > 0   ∀ i, k, p, m   (4.3)
if (y_{ikpm} = 1) ⇒ t*_{ikpm} > 0   ∀ i, k, p, m
Σ_{i=1}^{n_P} Σ_{k=1}^{n_{pr_i}} Σ_{p=1}^{n_{O_{ik}}} q_i · t_{ikpm} · y_{ikpm} ≤ d_m   ∀ m   (4.4)
Σ_{k=1}^{n_{pr_i}} x_{ik} = 1   ∀ i   (4.5)
3. The total number of machines allocated inside cell c must respect the lower and upper limits:
lb_c ≤ Σ_{m=1}^{n_M} z_{mc} ≤ ub_c   ∀ c   (4.6)
if (x_{ik} = 1) ⇒ Σ_{m=1}^{n_M} y_{ikpm} = 1   ∀ i, k, p   (4.7)
Σ_{c=1}^{n_C} z_{mc} = 1   ∀ m   (4.8)
Σ_{c=1}^{n_C} z*_{ic} = 1   ∀ i   (4.9)
φ1_{mn} = Σ_{i=1}^{n_P} Σ_{k=1}^{n_{pr_i}} Σ_{p=1}^{n_{O_{ik}} − 1} x_{ik} · y_{ikpm} · y_{ik(p+1)n} · q_i,   m ≠ n   (4.10)
φ2_{mn} = Σ_{i=1}^{n_P} Σ_{k=1}^{n_{pr_i}} Σ_{p=1}^{n_{O_{ik}} − 1} x_{ik} · y_{ikpm} · y_{ik(p+1)n} · (q_i / bs_i) · ft_i,   m ≠ n   (4.11)
For the following expressions (traffic between cells, intra- and inter-cellular traffic, and total traffic), the basic traffic between machines can be φ1_{mn} or φ2_{mn} according to the designer's wish. For all these expressions, an index i specifies which traffic between machines is used:
Φ^i_{cd} Traffic from cell c to cell d, computed as the sum of all traffics from a machine belonging to cell c to a machine in cell d:
Φ^i_{cd} = Σ_{m=1}^{n_M} Σ_{n=1}^{n_M} (z_{mc} · z_{nd}) · φ^i_{mn},   c ≠ d   (4.12)
Φ^i_{Intra} Intra-cellular traffic, computed as the sum of the traffic between all machines belonging to the same cell:
Φ^i_{Intra} = Σ_{c=1}^{n_C} ( Σ_{m=1}^{n_M} Σ_{n=m+1}^{n_M} (z_{mc} · z_{nc}) · (φ^i_{mn} + φ^i_{nm}) )   (4.13)
Φ^i_{Inter} Inter-cellular traffic, computed as the sum of the traffic between all machines belonging to different cells:
Φ^i_{Inter} = Σ_{c=1}^{n_C} Σ_{d=c+1}^{n_C} ( Σ_{m=1}^{n_M} Σ_{n=m+1}^{n_M} (z_{mc} · z_{nd}) · (φ^i_{mn} + φ^i_{nm}) )   (4.14)
Φ^i_{Tot} Total traffic in the system, computed as the sum of the traffic between machines allocated to all cells:
Φ^i_{Tot} = Φ^i_{Intra} + Φ^i_{Inter}   (4.15)
The matrix φ^i is of dimension n_M, while the traffics Φ^i_{Intra}, Φ^i_{Inter} and Φ^i_{Tot} are aggregated over the n_C cells.
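The aggregation of the basic traffic into intra- and inter-cellular traffic can be sketched as follows, assuming phi[m][n] holds the traffic from machine m to machine n and cell_of[m] gives the cell of machine m (so that z_mc = 1 iff cell_of[m] = c); the function name and data layout are illustrative:

```python
# Illustrative aggregation of the basic machine-to-machine traffic (phi1 or
# phi2) into intra-cellular, inter-cellular and total traffic.
def traffic_measures(phi, cell_of):
    n_m = len(phi)
    intra = inter = 0
    for m in range(n_m):
        for n in range(m + 1, n_m):
            flow = phi[m][n] + phi[n][m]     # both directions, counted once
            if cell_of[m] == cell_of[n]:
                intra += flow                # Phi_Intra contribution (4.13)
            else:
                inter += flow                # Phi_Inter contribution (4.14)
    return intra, inter, intra + inter       # Phi_Tot = Phi_Intra + Phi_Inter

phi = [[0, 5, 0],
       [0, 0, 2],
       [1, 0, 0]]
print(traffic_measures(phi, [0, 0, 1]))  # (5, 3, 8)
```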
I_{mi} Indicator equal to 1 if at least one operation of part i is achieved on machine m:
I_{mi} = 1 if Σ_{k=1}^{n_{pr_i}} x_{ik} · Σ_{p=1}^{n_{O_{ik}}} y_{ikpm} > 0;  I_{mi} = 0 otherwise   (4.16)
e Total number of entries in the part-machine incidence matrix:
e = Σ_{m=1}^{n_M} Σ_{i=1}^{n_P} I_{mi}   (4.17)
e1 Number of entries inside the diagonal blocks:
e1 = Σ_{c=1}^{n_C} ( Σ_{m=1}^{n_M} Σ_{i=1}^{n_P} (z_{mc} · z*_{ic}) · I_{mi} )   (4.18)
ev Number of voids (holes) inside the diagonal blocks:
ev = Σ_{c=1}^{n_C} ( ( Σ_{i=1}^{n_P} z*_{ic} ) · ( Σ_{m=1}^{n_M} z_{mc} ) ) − Σ_{c=1}^{n_C} ( Σ_{m=1}^{n_M} Σ_{i=1}^{n_P} (z_{mc} · z*_{ic}) · I_{mi} )   (4.19)
e0 Number of exceptional elements (entries outside the diagonal blocks), e0 = e − e1:
e0 = Σ_{m=1}^{n_M} Σ_{i=1}^{n_P} I_{mi} − Σ_{c=1}^{n_C} ( Σ_{m=1}^{n_M} Σ_{i=1}^{n_P} (z_{mc} · z*_{ic}) · I_{mi} )   (4.20)
Γ Group Efficacy:
Γ = (e − e0) / (e + ev)   (4.21)
4.3.3.3 Similarity
NO_m Number of operations assigned to machine m:
NO_m = Σ_{i=1}^{n_P} Σ_{k=1}^{n_{pr_i}} Σ_{p=1}^{n_{O_{ik}}} x_{ik} · y_{ikpm}   (4.22)
S_{mn} Similarity between machine m and machine n, based on the similarity between parts. This coefficient is high if the parts assigned to both machines are similar. We have:
SJ_{mn} = ( Σ_{i=1}^{n_P} I_{mi} · I_{ni} ) / ( Σ_{i=1}^{n_P} a^{mn}_i )   (4.24)
where
a^{mn}_i = 0 if I_{mi} = I_{ni} = 0;  a^{mn}_i = 1 otherwise
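The similarity (4.24) between two machines is a Jaccard-style ratio over the incidence indicator and can be sketched directly; the names are illustrative:

```python
# Illustrative Jaccard-style similarity (4.24) between machines m and n,
# computed from the incidence indicator I[m][i].
def machine_similarity(I, m, n):
    common = sum(I[m][i] * I[n][i] for i in range(len(I[0])))        # numerator
    either = sum(1 for i in range(len(I[0])) if I[m][i] or I[n][i])  # sum of a_i
    return common / either if either else 0.0

I = [[1, 1, 0, 1],
     [1, 0, 0, 1],
     [0, 1, 1, 0]]
print(machine_similarity(I, 0, 1))  # 2 common parts over 3 concerned parts
```

The coefficient is 1 when both machines process exactly the same parts and 0 when they share none.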
S_m Similarity on machine m:
S_m = ( (S_{mm} · NO_m − 1) · NO_m ) / ( 2 · Σ_{p=1}^{NO_m − 1} p )   (4.25)
S_c Similarity of cell c:
S_c = ( Σ_{m=1}^{n_M} Σ_{n=m+1}^{n_M} (z_{mc} · z_{nc}) · S_{mn} ) / ( Σ_{m=1}^{n_M} z_{mc} )   (4.26)
S Average similarity over all cells:
S = ( Σ_{c=1}^{n_C} S_c ) / n_C   (4.27)
4.3.3.4 Workload
WL_c Workload of cell c, evaluating the total charge assigned to this cell:
WL_c = Σ_{m=1}^{n_M} Σ_{i=1}^{n_P} Σ_{k=1}^{n_{pr_i}} Σ_{p=1}^{n_{O_{ik}}} z_{mc} · x_{ik} · y_{ikpm} · q_i · t_{ikpm}   (4.28)
WL*_c Workload of cell c, evaluating the total number of operations assigned to this cell:
WL*_c = Σ_{m=1}^{n_M} Σ_{i=1}^{n_P} Σ_{k=1}^{n_{pr_i}} Σ_{p=1}^{n_{O_{ik}}} z_{mc} · x_{ik} · y_{ikpm}   (4.29)
UWL Workload unbalance between the cell with maximum workload and the cell with minimum workload:
UWL = max_c WL_c − min_c WL_c   (4.30)
UWL* Workload unbalance, in terms of number of operations, between the cell with maximum workload and the cell with minimum workload:
UWL* = max_c WL*_c − min_c WL*_c   (4.31)
4.3.3.5 Flexibility
OHL_m Overstepping of the high use limit for machine m:
OHL_m = max( 0 ; (u_m − d_m · hl_m) / (d_m − d_m · hl_m) )   (4.32)
OHL = ( Σ_{m=1}^{n_M} OHL_m ) / n_M   (4.33)
4.3.3.6 Costs
ULL_m Shortfall below the low use limit for machine m:
ULL_m = max( 0 ; (d_m · ll_m − u_m) / (d_m · ll_m) )   (4.34)
ULL = ( Σ_{m=1}^{n_M} ULL_m ) / n_M   (4.35)
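The flexibility and cost indicators (4.32)-(4.35) can be sketched directly, with guards for the default limits hl_m = 100% and ll_m = 0% (for which the indicators are simply zero); the names are illustrative:

```python
# Illustrative computation of OHL (4.32)-(4.33) and ULL (4.34)-(4.35).
# u, d, hl, ll: per-machine use, availability, high and low use limits.
def overstepping_high(u, d, hl):
    per_m = [max(0.0, (um - dm * h) / (dm - dm * h)) if h < 1.0 else 0.0
             for um, dm, h in zip(u, d, hl)]
    return sum(per_m) / len(per_m)                     # OHL, eq. (4.33)

def under_low(u, d, ll):
    per_m = [max(0.0, (dm * l - um) / (dm * l)) if l > 0.0 else 0.0
             for um, dm, l in zip(u, d, ll)]
    return sum(per_m) / len(per_m)                     # ULL, eq. (4.35)

u, d = [95.0, 40.0], [100.0, 100.0]   # actual uses and availabilities
hl, ll = [0.9, 0.9], [0.5, 0.5]       # high and low use limits
print(overstepping_high(u, d, hl))  # machine 1: (95-90)/(100-90)=0.5 -> OHL 0.25
print(under_low(u, d, ll))          # machine 2: (50-40)/50=0.2       -> ULL 0.1
```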
• the similarity between parts assigned to a machine (in order to minimise the setup time).
n_{M_{ikp}} Number of machines that can achieve the p-th operation of the k-th process of part i.
4.4.1.2 Definitions
To evaluate the search space, it is necessary to enumerate the number of possible ways
to assign all operations to a specific machine. Depending on the level of alternativity,
low or high, the number of ways can be counted. Based on the values found in the
literature, several hypotheses are made:
• For all parts, there are between one and three alternative processes (npri ).
Formula 4.37 expresses the average number of solutions, considering that each operation can be achieved by n_M / n_{MT} machines:
NS_RP = Π_{i=1}^{n_P} Σ_{k=1}^{n_{pr_i}} (n_M / n_{MT})^{n_{O_{ik}}}   (4.37)
In this case, the number of solutions is computed for each part by summing the number of solutions of its processes, because the choice must be made between the processes. The total number of solutions is the product over all parts, because every solution of a part can be associated with all the solutions of the other parts.
If the number of machines that can achieve each operation (n_{M_{ikp}}) is known, the exact formula (4.38) can be computed:
NS_RP = Π_{i=1}^{n_P} Σ_{k=1}^{n_{pr_i}} Π_{p=1}^{n_{O_{ik}}} n_{M_{ikp}}   (4.38)
For each process, the product of the number of ways to achieve each operation is computed.
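Formula 4.38 translates directly into code; the nested-list layout n_machines[i][k][p] used below is an assumption of this sketch:

```python
from math import prod

# Exact search-space size of the resource planning problem, eq. (4.38):
# n_machines[i][k][p] = number of machines able to achieve operation p of
# process k of part i.
def ns_rp(n_machines):
    return prod(sum(prod(ops) for ops in part) for part in n_machines)

# Medium case study of section 4.4.1: 30 parts, 2 alternative processes of
# 6 operations each, every operation achievable by 2 machines (eq. 4.42).
case = [[[2] * 6, [2] * 6] for _ in range(30)]
print(ns_rp(case) == (2 * 2**6) ** 30)  # True (about 1.6455e63)
```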
4.4.1.3 Applications
Formula 4.37 is applied to a medium case study containing 30 parts and 21 machines to be grouped into 3 or 4 cells. Each process is formed by 6 operations. Different situations are presented, from low to high alternativity.
1. Without alternative processes, and with each machine type containing just one machine, there is a single solution:
NS_RP = Π_{i=1}^{30} (1)^6 = 1   (4.39)
2. In the presence of alternative processes, each machine type contains just one machine, but each part is characterised by two processes. For each part, the choice between these two processes must be made:
NS_RP = Π_{i=1}^{30} Σ_{k=1}^{2} (1)^6 = (2)^30 = 1.0737 × 10^9   (4.40)
3. Without alternative processes, but with two machines able to achieve each operation:
NS_RP = Π_{i=1}^{30} (2)^6 = ((2)^6)^30 = 1.5325 × 10^54   (4.41)
4. With two alternative processes per part and two machines able to achieve each operation:
NS_RP = Π_{i=1}^{30} Σ_{k=1}^{2} 2^6 = (2 × 2^6)^30 = 1.6455 × 10^63   (4.42)
5. With three alternative processes per part and four machines able to achieve each operation:
NS_RP = Π_{i=1}^{30} Σ_{k=1}^{3} 4^6 = (3 × 4^6)^30 = 4.8454 × 10^122   (4.43)
As shown by this instance, the number of ways to solve this problem is extremely large. The situations given as examples are computed with the average formula 4.37, where each process has the same number of operations (6). When this is not the case, the formula still gives a good approximation of the number of solutions.
To evaluate the exact number of alternatives for the Resource Planning Problem, it is necessary to count the number of alternative machines available to achieve each operation and the number of operations of each process. The size of the search space presented in the chapter on case studies (8) is evaluated by checking all processes and all operations of each part; formula 4.38 is then used.
For instance, to allocate 10 machines into 4 cells, there are only 34,105 distinct partitions. This number increases to approximately 45,232,115,901 if 20 machines are to be partitioned into 4 cells: a combinatorial explosion.
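These partition counts are the Stirling numbers of the second kind, which the usual recurrence reproduces:

```python
from functools import lru_cache

# Number of ways to partition n labelled machines into exactly k non-empty
# cells: the Stirling number of the second kind S(n, k).
@lru_cache(maxsize=None)
def stirling2(n, k):
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0:
        return 0
    # machine n either starts a new cell or joins one of the k existing cells
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

print(stirling2(10, 4))  # 34105
print(stirling2(20, 4))  # 45232115901
```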
Figure 4.2: Spanning tree enumerating the possible distributions of three free places among 4 groups (n_R = 3).
This formula gives the number of solutions to group 21 machines into 4 cells with one cell of 3 machines. It is limited and not correct, because the denominator would have to be "3!". The solutions where two or more cells have one or more free places are not included in this count. It is also important to note that the term C_3^6 has no solution, because it is impossible to make a group of 6 objects out of 3 objects; in this equation, the term must be set equal to "1".
To define a generalised formula, it is necessary to enumerate the number of possibilities to distribute the machines through all the cells without filling the first cells. To make this repartition, and to write a formula valid for any number of machines, a new parameter is introduced: n_R. This parameter defines the number of free places in the cell configuration. n_R is equal to the difference between the total capacity of the cells and the number of machines. For the previous instance, n_R = m_UB × n_C − n_M = 6 × 4 − 21 = 3.
Working with the n_R free places rather than with the n_M machines makes it possible to generalise the formula for any number of cells (n_C) and any maximum cell size defined by the number of machines (m_UB). For each distribution, the number of possible combinations to assign n_M machines to n_C cells can be computed by following the repartition of the n_R free places.
Figure 4.2 illustrates the set of possible distributions for n_R = 3 with a simple tree. The n_R elements are distributed from 1 to 3 in the first cell. For the next cell, the remaining free places are distributed until all n_R elements are assigned to a cell. The following cells are then completely filled up with m_UB machines.
There are three ways to distribute the 3 free places of the instance:
1. The first cell contains 1 free place: it contains 6 − 1 = 5 machines, and two options remain for the last two free places.
• The second cell contains 1 free place: it contains 5 machines, as does the third cell, which also contains 1 free place. The vector giving the number of machines per cell is (5, 5, 5, 6); written in terms of free places, as in Figure 4.2, it is (1, 1, 1, 0).
• The second cell contains 2 free places: it contains 4 machines. All free places are then distributed, and the last two cells contain 6 machines. The vector of machines per cell is (5, 4, 6, 6); in terms of free places, (1, 2, 0, 0).
2. The first cell contains 2 free places: it contains 6 − 2 = 4 machines, the second cell contains the remaining free place (5 machines), and the last two cells contain 6 machines. The vector of machines is (4, 5, 6, 6) and the vector of free places is (2, 1, 0, 0).
3. The first cell contains 3 free places: it contains 6 − 3 = 3 machines, and the three other cells contain 6 machines. The vector of machines is (3, 6, 6, 6) and the vector of free places is (3, 0, 0, 0).
1. If the number of cells (n_C) is lower than n_R, several repartitions are not allowed. In the example, if the number of cells were limited to 2 (n_C = 2), the first repartition (1 1 1 0) would have to be eliminated.
Indeed, the repartition of the n_R elements defines the factorial term by which the total number of combinations is divided. The denominator, composed of the total number of permutations, is equal to the product Π_{k=0}^{m_UB} ñ_{C_k}!, where ñ_{C_k} defines the number of cells containing k free places. For the instance, the valid repartitions are:
• 1 1 1 0
• 2 1 0 0
• 3 0 0 0
The formula counting the number of ways to group 21 machines into 4 cells with a maximum of 6 machines must give:
NS = ( C^5_{21} · C^5_{16} · C^5_{11} · C^6_6 ) / 3!   (1 1 1 0)
   + ( C^4_{21} · C^5_{17} · C^6_{12} · C^6_6 ) / 2!   (2 1 0 0)
   + ( C^3_{21} · C^6_{18} · C^6_{12} · C^6_6 ) / 3!   (3 0 0 0)
4.4.2.4 Definitions
Equation 4.46 is valid only when p ≤ n, because the factorial of a negative number does not exist. A new parameter, C̃_n^p, is introduced to take this constraint into account. Indeed, in the enumeration of all combinations, the last term computed for the last group can correspond to a group that is not full. In this case, the number of ways to choose p objects among n (n < p) to create this last group is set equal to 1:
C̃_n^p = C_n^p if (p ≤ n);  C̃_n^p = 1 if (p > n)   (4.49)
• the number of free places left in the previous cell, to respect the rule of the valid repartition.
The grouping combination parameter, G^{(n_C, m_UB)}_{(n_M, n_R, n_R−)}, defines the number of combinations to group n_M objects into at most n_C groups with a maximum size of m_UB objects per group. The grouping combination parameter is computed recursively:

G^{(n_C, m_UB)}_{(n_M, n_R, n_R−)} = Σ_{i=1}^{min(m_UB − 1, n_R, n_R−)} ( C̃^{(m_UB − i)}_{n_M} / η_i ) × G^{(n_C − 1, m_UB)}_{(n_M − (m_UB − i), n_R − i, i)}   (4.50)
In formula 4.50, n_R represents the number of free places to distribute among the cells. n_R− corresponds to the number of free places in the previous cell (used by the recursion to create a repartition of the n_R elements in decreasing order). The term C̃^{(m_UB − i)}_{n_M} represents the number of ways to create the first group with (m_UB − i) objects, i.e. containing i free places. The right term computes all the combinations to group the n_M − (m_UB − i) remaining objects into n_C − 1 groups with the same maximum size. The n_R − i remaining free places will be distributed among the next cells, remembering that the structure of free places is decreasing (n_R− = i keeps the information on the number of free places used in the current cell). The sum is made from 1 to the minimum of m_UB − 1, n_R and n_R−, in order to share the n_R remaining free places between all groups without exceeding the maximum size of a group (m_UB − 1) and without increasing the repartition of free places. If the term m_UB − 1 is replaced by m_UB, empty cells are allowed in the enumeration of all possible combinations. The term η_i is a parameter without numerical value, specifying that the cell is created with i free places.
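An equivalent way to obtain these counts, used here as a cross-check rather than as the recursive formula itself, is to enumerate the non-increasing cell-size vectors of Figure 4.2 and count, for each vector, the assignments of labelled machines to unlabelled cells; the function names are illustrative:

```python
from math import factorial
from collections import Counter

# Illustrative direct enumeration: count the ways to group n_m labelled
# machines into exactly n_c unlabelled, non-empty cells of at most m_ub
# machines (equivalent to the generalised formula of this section).
def count_groupings(n_m, n_c, m_ub):
    def size_vectors(n, k, max_part):
        # non-increasing partitions of n into exactly k parts in [1, max_part]
        if k == 0:
            if n == 0:
                yield []
            return
        for first in range(min(n, max_part), 0, -1):
            if n - first <= first * (k - 1):
                for rest in size_vectors(n - first, k - 1, first):
                    yield [first] + rest

    total = 0
    for sizes in size_vectors(n_m, n_c, m_ub):
        ways = factorial(n_m)
        for s in sizes:
            ways //= factorial(s)            # choose the machines of each cell
        for rep in Counter(sizes).values():
            ways //= factorial(rep)          # cells of equal size are unordered
        total += ways
    return total

print(count_groupings(21, 4, 6))  # 27756632904, i.e. 2.776e10 as in the text
```

For each size vector, the multinomial coefficient counts the choices of machines, and the division by the factorials of repeated sizes plays the role of the η_i terms.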
The number of solutions in the cell formation problem search space containing n_C cells, n_M machines and a maximum cell size equal to m_UB (n_R = n_C × m_UB − n_M) is:
NS_CF = G^{(n_C, m_UB)}_{(n_M, n_R, n_R)}   (4.51)
In formula 4.51, n_R− is equal to n_R because the repartition of the n_R free places is made in decreasing order, beginning with the cell having the maximum number of free places (at most n_R). The decomposition with the recursive formula 4.50 must be carried out until all n_R elements are assigned to a cell. The final term of the recursive function (G^{(n_C, m_UB)}_{(n_M, 0, n_R−)} or G^{(1, m_UB)}_{(n_M, n_R, n_R−)}) can be computed by the properties defined below. When the decomposition is completed, the parameters η_i are transformed by the following formula:
(η_i)^k = k!   ∀ i   (4.52)
This notation makes it possible to use the recursion without computing a factorial term before the end of the decomposition.
4.4.2.6 Properties
• if n_R = 0 ⇒ n_M = n_C × m_UB and, ∀ n_R−:
G^{(n_C, m_UB)}_{(n_M, 0, n_R−)} = ( Π_{i=0}^{n_C − 1} C̃^{m_UB}_{(n_M − i × m_UB)} ) / (η_0)^{n_C}   (4.53)
• if n_C = 1 and n_M ≤ m_UB:
G^{(1, m_UB)}_{(n_M, n_R, n_R−)} = C̃^{m_UB}_{n_M} = 1   (4.54)
• if n_C = 1 and (n_M > m_UB or n_R ≥ m_UB):
G^{(1, m_UB)}_{(n_M, n_R, n_R−)} = 0   (4.55)
The first property specialises the recursive formula 4.50 to the particular case where there is no free place to distribute among the cells. In this case, formula 4.47 applies. This formula defines the number of ways to group n_M machines into n_C full cells of m_UB machines.
The second property defines the only way to select m_UB machines among n_M machines (n_M ≤ m_UB).
The last property specifies the impossibility of grouping n_M machines in the last cell if this number n_M, or the number of n_R elements, exceeds the maximum size of the cell.
4.4.2.7 Applications
The Generalised Formula 4.51 is developed for the cases n_R = 0 to n_R = 3 in appendix D. In this section, the Generalised Formula 4.51 is applied to the example used in section 4.4.1 with 21 machines.
If the number of cells is set to 3 and the cell size is limited to 7 machines, n_R = 0 and the number of ways to group 21 machines into 3 groups of 7 machines is:

NS(n_R = 0) = ( Π_{i=0}^{3−1} C̃^7_{(21 − i×7)} ) / 3! = ( C̃^7_{21} · C̃^7_{14} · C̃^7_7 ) / 3!
            = ( 21!/(14!·7!) · 14!/(7!·7!) · 1 ) / 3! = 6.65 · 10^7
If the cell size is limited to 6 machines with 4 cells, the number of free places is equal to 3. Following Figure 4.2, the possible repartitions of the three free places are (1 1 1 0), (2 1 0 0) and (3 0 0 0). The number of ways to group 21 machines into 4 groups of at most 6 machines is:
NS(n_R = 3) = G^{(4,6)}_{(21,3,3)} = Σ_{i=1}^{3} ( C̃^{(6−i)}_{21} / η_i ) × G^{(3,6)}_{(21−(6−i), 3−i, i)}

= ( C̃^5_{21} / η_1 ) · G^{(3,6)}_{(16,2,1)} + ( C̃^4_{21} / η_2 ) · G^{(3,6)}_{(17,1,2)} + ( C̃^3_{21} / η_3 ) · G^{(3,6)}_{(18,0,3)}

= ( C̃^5_{21} / η_1 ) · ( C̃^5_{16} / η_1 ) · G^{(2,6)}_{(11,1,1)}
+ ( C̃^4_{21} / η_2 ) · ( C̃^5_{17} / η_1 ) · G^{(2,6)}_{(12,0,1)}
+ ( C̃^3_{21} / η_3 ) · ( Π_{i=0}^{2} C̃^6_{(18 − i·6)} ) / (η_0)^3

= ( C̃^5_{21} / η_1 ) · ( C̃^5_{16} / η_1 ) · ( C̃^5_{11} / η_1 ) · G^{(1,6)}_{(6,0,1)}
+ ( C̃^4_{21} / η_2 ) · ( C̃^5_{17} / η_1 ) · ( Π_{i=0}^{1} C̃^6_{(12 − i·6)} ) / (η_0)^2
+ ( C̃^3_{21} / η_3 ) · ( Π_{i=0}^{2} C̃^6_{(18 − i·6)} ) / (η_0)^3

with G^{(1,6)}_{(6,0,1)} = C̃^6_6 (case n_R = 0). Applying (η_i)^k = k!:

NS(n_R = 3) = ( C̃^5_{21} · C̃^5_{16} · C̃^5_{11} · C̃^6_6 ) / 3!   (1 1 1 0)
            + ( C̃^4_{21} · C̃^5_{17} · C̃^6_{12} · C̃^6_6 ) / 2!   (2 1 0 0)
            + ( C̃^3_{21} · C̃^6_{18} · C̃^6_{12} · C̃^6_6 ) / 3!   (3 0 0 0)

            = ( 21!/(16!·5!) · 16!/(11!·5!) · 11!/(5!·6!) ) / 3!
            + ( 21!/(17!·4!) · 17!/(12!·5!) · 12!/(6!·6!) ) / 2!
            + ( 21!/(18!·3!) · 18!/(12!·6!) · 12!/(6!·6!) ) / 3!
            = 2.776 · 10^10
For this example, the size of the cell formation search space is equal to, or much smaller than, the size of the resource planning search space (equations 4.40 to 4.43), depending on the rate of alternativity used in the case study. The proportion between these two search spaces will be used in the resolution (see chapter 6).
4.5 Conclusion
In this chapter, the mathematical model proposed in this thesis has been presented. The different assumptions and notations were described in order to prepare the next chapters. The mathematical formulation was presented after detailing the decision variables, the constraints and the evaluation functions. The two main criteria used in the next chapters are the maximisation of the intra-cellular flow and the maximisation of the group efficacy. The latter evaluates the diagonalisation of an incidence matrix by counting the number of elements outside the diagonal blocks and the number of voids (holes) inside the diagonal blocks. Finally, the size of the search space was evaluated, and a generalised formula was provided to enumerate the number of ways to group n_M machines into n_C cells with a capacity constraint on the cell size.
Chapter 5
Methodology
In this chapter, the methodology used to solve the three problems (process selection, resource planning and cell formation) simultaneously is presented. This method is based on the genetic algorithm. For this reason, the main components of genetic algorithms are first presented to familiarise the reader with the different concepts. The Ga adapted to grouping problems and using multiple criteria is succinctly described; a complete description is available in appendix B.5. Next, the evolution of the methodology is explained through three different methods. Only the final method is completely described in chapter 6. However, these three methods are compared in chapter 8, which presents the results, so it is necessary to explain them in this chapter.
1. Their flexibility to combine them with specific heuristics adapted to the given problem.
As explained in section 3.2.1, Genetic Algorithms have become popular for finding "good", possibly sub-optimal, solutions to many types of problems.
The main components of Genetic Algorithms defined by Aytug et al. are pre-
sented here below [14]:
Mutation the “exploration operator”, which tends to move the search to a new
neighbourhood;
Crossover the “focusing operator”, which helps the Ga move towards a local
or global optimum.
Crossover tends to make the individuals within the population more similar,
whereas mutation tends to make them more diverse [109].
Pongcharoen et al. described sixteen different crossover operators and eight
alternative mutation operators that were used for scheduling the production of
complex products in the capital goods industry [205].
7. Ga parameter settings.
Gas depend on a number of parameters, including the probabilities of crossover and mutation, the population size and the number of generations. These parameters can have a large influence on the performance of Gas. Aytug et al. noted that most of the research has failed to explore the settings of Ga parameters systematically [14].
Gas use probabilistic transitions rather than deterministic ones. For the same instance of a problem (same initial conditions and data), the results may differ between two runs, unless the same random number generator and seed are used.
[Figure 5.1: flowchart of the Gga: population initialisation, evaluation (cost function), genetic operators, loop until the stop condition, one solution.]
The Gga finds solutions that are as good as the best known solution for eight of the 10 MPCF problems. For all test problems, the Gga finds solutions within 1.5% of the best known. These high-quality solutions are found quickly (slightly more than 2 min for the largest problems), after at most 50 generations. For comparable solution times, the Ga solutions are much lower in quality; even when the Ga is allowed to run for 30 min, solution quality is consistently inferior. The largest problems show the greatest difference in solution quality [29].
The algorithm presented in this thesis is strongly inspired by the Gga. However, the method is adapted to solve several problems simultaneously. The flowchart of the Gga is represented in Figure 5.1. The basis is exactly the same as for the Ga, but the coding and the operators are adapted and group oriented.
Figure 5.2: Solution selection for multiple objectives: classical Gga versus Mogga
integrating search and decision making.
To merge the search and the multi-criteria decisions, the idea proposed by Rekiek [217] was to insert a multi-criteria decision-aid method inside the Gga, in the evaluation module. The proposed selection approach gave birth to the Mogga, as illustrated in Figure 5.2. The multi-criteria decision-aid method used by the author is called Promethee II [27]. This method builds an outranking between different alternatives. Thanks to Promethee, the population evolves at each generation towards the best compromise between all criteria.
The principle of the integration developed by Rekiek is used in this thesis in the same way as in the Mogga. The multi-criteria decision-aid method inserted in the evaluation module is Promethee, as in Rekiek and Francq [217, 71].
The complete description of this method is outside the scope of this thesis; it is detailed in appendix B.6. It is however important to know that it computes a net flow φ, which is a kind of relative fitness for each solution. This "fitness" yields a ranking, called the Promethee II complete ranking, among the different solutions in the population. The relative importance of the different objectives is set through weights associated with each criterion. An essential feature of these weights is that their influence is independent of the evaluation scale of each criterion. Indeed, the outranking or outranked character of each solution is computed through a mapping of the criteria evaluations onto [0, 1], using preference functions that establish pairwise comparisons between alternatives for each criterion. So, for a given set of preference functions, weights set to (0.5, 0.5) for a problem with two criteria mean that both criteria are given the same importance, irrespective of the exact nature of their underlying evaluations.
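The net flow computation can be sketched with a single linear preference function with threshold p applied to all (maximised) criteria; the real method allows a different preference function per criterion, so this is an illustrative simplification:

```python
# Illustrative Promethee II sketch: one linear preference function with
# threshold p, all criteria to maximise, weights summing to 1.
def promethee_ii(evaluations, weights, p=1.0):
    n = len(evaluations)
    phi = [0.0] * n                       # net flow of each solution
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref = 0.0
            for crit, w in enumerate(weights):
                d = evaluations[a][crit] - evaluations[b][crit]
                # preference of a over b on this criterion, mapped onto [0, 1]
                pref += w * min(max(d / p, 0.0), 1.0)
            phi[a] += pref / (n - 1)      # a outranks b
            phi[b] -= pref / (n - 1)      # b is outranked by a
    return phi

# Weights (0.5, 0.5): both criteria count the same, whatever their scales.
evals = [[0.9, 0.2], [0.5, 0.5], [0.1, 0.9]]
flows = promethee_ii(evals, [0.5, 0.5])
ranking = sorted(range(len(evals)), key=lambda i: -flows[i])  # complete ranking
```

The net flows always sum to zero; sorting the population by decreasing flow yields the complete ranking used for selection.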
Thus, the solutions are not compared according to a cost function yielding an absolute fitness of the individuals, as in a classical Ga, but are compared with each other through flows that depend on the current population. The use of Promethee in the Gga makes it possible to rank all the chromosomes according to their quality. The selection method used in the Mogga is tournament selection; this method is especially effective when the solutions are ordered. The Simogga developed in this thesis is strongly inspired by the Mogga developed by Rekiek.
• The selection of the preferential process for each part (process selection problem, PS);
• The allocation of the operations to specific machines (resource planning problem, RP);
• The grouping of machines into independent cells (cell formation problem with several constraints and several criteria, CF).
The objective of this research is to adapt a Gga to solve these three sub-problems simultaneously. Two principles have been tested.
The process selection problem is included in the resource planning problem for the first method. These methods will be analysed, and their results compared, in chapter 8.
[Figures 5.3 and 5.4: flowcharts of the tested architectures: a RP-Mogga with integrated CF producing a set of solutions, and a CF-Mogga / RP-Mogga with a CF-embedded module; each loop comprises population initialisation, multi-criteria evaluation, genetic operators and a stop condition.]
There are two different ways to embed these two sub-problems. The first consists of allocating the operations to machines and embedding a module to group the machines into cells. The second consists of grouping the machines into cells and then embedding a module to allocate the operations on these machines. The relative size of the search spaces of both problems has directed this choice: the cell formation problem is easier to solve with a heuristic than the resource planning problem, whose search space is broader. The proposed algorithm is therefore based on the first solution. It allocates all generic operations on specific machines, which yields flows between them, and then searches for a grouping of these machines that minimises the inter-cellular traffic. This is also the method that a designer would follow manually, as it is more visual. The whole problem is solved with an adapted Mogga (RP-Mogga), whose flowchart is illustrated in Figure 5.4, with an embedded cell formation module.
The complete process of the RP-Mogga with an embedded CF module is presented in Figure 5.5. During the initialisation phase of the RP-Mogga, a population of N RP-chromosomes is created. Each RP-chromosome represents a valid solution of the resource planning problem, i.e. an allocation of the operations on specific machines, including the choice of processes. The embedded CF module is run for each RP-chromosome to complete it with a CF-solution. At the end of the initialisation phase, each RP-chromosome therefore includes solutions for both problems (RP and CF). The associated CF-solution is used to evaluate the complete solution; however, the RP-Mogga works only with the RP-chromosomes.
The population of valid solutions is evaluated based on several criteria. Each
chromosome composed by a solution for both problems can be evaluated with any
criteria relative to the RP solution (e.g. similarity criterion) or CF solution (e.g.
flow criterion) because the integration of the embedded module is done before the
evaluation. All the RP-chromosomes are ranked using the multi-criteria decision-aid
algorithm (Promethee II) (see appendix A).
At each iteration of the main (RP) loop, the Mogga evolves the solutions by applying the genetic operators to the RP-chromosomes. The chromosomes are first sorted; with the tournament selection, the best chromosome is always at the top of the list. The top 50% of the chromosomes are used as parents for the crossovers, and the resulting children (offspring) replace the bottom 50%. The mutation is used to explore the search space. The new individuals are then incorporated into the population. When an RP-chromosome is altered by the genetic operators, the CF module is called to reconstruct an adapted CF-solution, completing the RP-chromosome with a new grouping of machines into cells. If the RP-chromosome is unchanged, its CF-solution is kept. The RP-Mogga loop stops when the termination criterion defined by the user is reached.
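The population mechanics described above can be sketched as follows. The evaluation, the crossover and the CF module are simplistic stand-ins (our own illustrative functions, not the thesis's operators); only the loop structure of the RP-Mogga is shown:

```python
import random

# Hypothetical skeleton of the RP-Mogga main loop with an embedded CF module.

def cf_module(rp):
    # Stand-in for the embedded CF module: derive a CF-solution from the RP one.
    return sorted(rp)

def fitness(chrom):
    # Stand-in evaluation (lower is better).
    return sum(chrom["rp"])

def crossover(p1, p2):
    cut = len(p1["rp"]) // 2
    return {"rp": p1["rp"][:cut] + p2["rp"][cut:]}

def rp_mogga(pop_size=8, generations=5, seed=0):
    rng = random.Random(seed)
    pop = [{"rp": [rng.randint(0, 9) for _ in range(4)]} for _ in range(pop_size)]
    for c in pop:                       # initialisation: complete with a CF-solution
        c["cf"] = cf_module(c["rp"])
    for _ in range(generations):
        pop.sort(key=fitness)           # best chromosomes at the top of the list
        half = pop_size // 2
        children = []
        for _ in range(half):
            p1, p2 = rng.sample(pop[:half], 2)    # parents from the top 50%
            child = crossover(p1, p2)
            child["cf"] = cf_module(child["rp"])  # rebuild CF for altered chromosomes
            children.append(child)
        pop[half:] = children           # offspring replace the bottom 50%
    pop.sort(key=fitness)
    return pop[0]

best = rp_mogga()
```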
[Figure 5.5 schematic: the RP-Mogga creates a population of N RP-chromosomes, each completed by the embedded CF module with a CF-solution; the RP-evaluation uses the multi-criteria decision aid; RP-crossover and RP-mutation produce offspring whose CF-solutions are rebuilt by the CF module; the loop runs until the stop condition. The embedded CF module is itself a CF-Mogga: CF-initialisation, CF-evaluation by multi-criteria decision aid, CF-crossover, CF-mutation and CF-reconstruction, looping until its stop condition; the best CF-chromosome is then copied into the CF-solution of the treated RP-chromosome.]

[Figure schematic: overview of the Simogga: population initialisation (PS & RP & CF), evaluation by multi-criteria decision aid, genetic operators (PS & RP & CF), loop until the stop condition, then output of the solution.]
In the previous method, two similar Moggas are used with the same criteria, but the operators are applied to each problem independently, in the RP-Mogga and in the CF-Mogga. Instead of using an embedded module to complete an RP-solution with a CF-solution inside a main Mogga, the Simogga handles a chromosome containing both solutions. The chromosome is composed of one part defining the grouping of operations onto machines (RP-part), including the allocation of the operations to specific machines and the choice of the processes, and another part defining the grouping of machines into cells (CF-part). The process chosen for each product in the RP-part is specific to each chromosome.
The Simogga is detailed in Figure 5.8. An initialisation phase constructs the chromosomes on the basis of several heuristics. The initialisation can be done randomly, each part of the chromosome being constructed independently. It can also be done sequentially: one part of the chromosome (RP or CF) is constructed randomly and the second part (CF or RP, respectively) is constructed on the basis of the first. The CF module or the RP module, each built around a specific heuristic, is run to complete the chromosome with the missing part. In this way, some criteria (such as the flow between cells) can already be optimised during the initialisation process.
[Figure 5.8 schematic: the Simogga initialises a population of N chromosomes, each composed of an RP-part and a CF-part (one part constructed first, the other completed by the CF or RP module); evaluation by multi-criteria decision aid; RP-crossover, CF-crossover and mutation are applied to the corresponding parts of the chromosomes; the loop runs until the stop condition and returns the best chromosome (RP-part and CF-part).]
Figure 5.8: Detailed Simogga: adapted Mogga to solve simultaneously two interdependent problems.
Simogga initialisation    RP ⇔ CF    RP ⇒ CF    RP ⇐ CF
Initialisation
  RP Heuristic            Random     Random     Process
  CF Heuristic            Random     Flow       Random
Operator rates
  RP
  CF
Reconstruction
  RP Heuristic            Process
  CF Heuristic            Flow
The other advantage of this algorithm is its flexibility. By modifying some parameters, it can be used as an algorithm with a sequential resolution (problem 1 with an integrated module solving problem 2). Table 5.2 presents the parameters that must be used to transform the Simogga into an RP-Mogga or a CF-Mogga with an integrated module.
                      RP-Mogga                        CF-Mogga
Algorithms            with an integrated              with an integrated
                      CF-heuristic | CF-Mogga         RP-heuristic | RP-Mogga
Sequential            RP ⇒ CF                         RP ⇐ CF
Initialisation
  RP Heuristic        Random Process                  RP-Mogga
  CF Heuristic        Harhalakis | CF-Mogga           Random
Operator rates
  RP                  100                             0
  CF                  0                               100
Reconstruction
  RP Heuristic        Process
  CF Heuristic        Flow
5.5 Conclusion
In this chapter, a brief description of the characteristics of the Ga has been presented. The adaptation of the genetic algorithm (Ga) to the grouping problem and the inclusion of the multi-criteria decision-aid method in the grouping genetic algorithm (Gga) have been succinctly described. The evolution of the resolution method has been explained in order to introduce the proposed adapted Simogga. Finally, the tuning that adapts the Simogga to the sequential resolutions was presented. This algorithm is described in detail in the following chapter.
Chapter 6
Implementation
6.1 Introduction
The Simogga is based on the Mogga structure. All the components they share are presented in Appendix C. The structures and specificities particular to the Simogga are described in detail in this chapter.
Firstly, the coding, based on the Gga coding, is presented and illustrated. Then, the rates used for the genetic operators are computed on the basis of the data. Secondly, the initialisation phase is described together with the different heuristics used. In this section, the complementary heuristics that adapt the Simogga to a Mogga with an integrated module are also described. Each of these heuristics is specifically adapted for the reconstruction phase.
The operators are then detailed to explain how the crossover and the mutation are adapted to the simultaneous resolution of the two problems. The inversion, as in the Mogga, is described only in the appendix.
The chapter closes with some modifications of the Simogga that improve the results, followed by the stop condition specific to the Simogga.
6.2 Encoding
6.2.1 Coding of Chromosomes
6.2.1.1 Coding in a Gga
In a classic GA applied to the cell formation problem, the position of the gene (locus)
in the chromosome represents the identifier of the machine. The information included
in the gene represents the group (cell) in which the machine is allocated. Numbering
the objects from 0 to 7, the object part of the chromosome can be explicitly written:
0 1 2 3 4 5 6 7
A D B C E B D E
meaning that object 0 is in the group labelled (named) A, objects 1 and 6 in group D, objects 2 and 5 in B, object 3 in C and, finally, objects 4 and 7 in group E.
The Gga coding augments the standard chromosome with a group part, encoding
the groups on a one gene for one group basis. Considering the previous example, a
chromosome in a Gga can be encoded as follows:
0 1 2 3 4 5 6 7
A D B C E B D E : A D B C E
The group part of the chromosome represents only the groups. Thus
... : A D B C E
expresses the fact that there are five groups in the solution. This chromosome represents the following solution:
A D B 0 0 C B E 0 0 E C : A D B C E
3. A group encoding to define the machine grouping into cells. Each cell can be completed with specific information about its size, location, etc.
p q q p q : p q
[Figure 6.1: instance used for the illustration: part P1 with processes (O0, O1, O2) and (O3, O4), part P2 with processes (O5, O6, O7) and (O8, O9), part P3 with process (O10, O11); machines MA to MF; cells C1 and C2.]
This solution corresponds to the following string defining the process selection:
The chromosome encodes the solution of the double grouping problem. The solution of the first problem, composed of 12 objects/operations and 5 groups/machines, can be depicted as the chromosome shown above (A D B 0 0 C B E 0 0 E C : A D B C E). Objects/operations 3 and 4 are not used in this solution, nor are objects 8 and 9; the first process has been chosen for parts 1 and 2. The solution of the second problem, composed of 5 objects/machines (the groups of the first problem) and 2 groups/cells, is:
{0}A {1}D {2, 6}B {5, 11}C {7, 10}E : {A, D}p {B, C, E}q
or, visually:
Mach.  MA   MD   MB    MC     ME     :  Cells  C1    C2
Oper.  0    1    2,6   5,11   7,10   :  Mach.  A,D   B,C,E
This solution is shown in Figure 6.2, in which the cells can easily be identified. Two processes are not used in the solution, so the operations belonging to them do not appear in the cell design. Following the arrows between machines, the flows between machines and between cells can easily be computed. This encoding allows the genetic operators to be applied to the groups rather than to the objects.
[Figure 6.2: cell design of the encoded solution: machines MA to ME grouped into cells C1 and C2 with their operations; the operations O3, O4, O8 and O9 of the unused processes do not appear.]
6.3 Initialisation
The objective for both problems is to explore the space without limiting the search. This means that solutions must be tested with and without all the groups. For example, the best solution producing the minimal number of inter-cellular moves can be one with two unused machines.
To accelerate the search for the best solution, the initialisation can be based on a specific heuristic optimising the flow between cells. In this case, the created solutions will be good in terms of flow but not of the other criteria. For this reason, and to ensure a sufficiently diversified population, the initialisation is decomposed as follows:
6.4 Heuristics
All the heuristics used are described in this section; some of them are illustrated with an example. For each heuristic, an adaptation is proposed so that it can be used during the reconstruction phase. Indeed, after the crossover and mutation operators, all unassigned objects must be reinserted. During the reconstruction phase, the chromosome already contains a partial solution for one part or for both parts: some objects are already included in groups, and some groups may be unused. The heuristics used to reconstruct the chromosomes must take these assigned objects into account.
If only one part of the chromosome is modified, the specialised heuristic (Flow Heuristic RP or CF) is applied to reconstruct the modified part. If both parts are crossed and/or mutated, one part is randomly selected and reconstructed with the Rand Group Heuristic, and the specialised heuristic is then applied to reconstruct the second part of the chromosome.
Following the first fit principle, each object is assigned to the first group able to accept it (respecting the capacity and acceptability constraints of the group). If no group can accept the object, a new group is created in the system. Following this process, the number of used groups is always minimal.
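A minimal sketch of this first fit assignment, with capacity as the only acceptability constraint modelled (the function and its parameters are illustrative):

```python
# First fit grouping sketch: each object goes to the first group with enough
# remaining capacity; a new group is opened only when no existing group fits.

def first_fit(objects, sizes, capacity):
    groups = []          # each group: list of object identifiers
    loads = []           # used capacity of each group
    for obj in objects:
        for i, load in enumerate(loads):
            if load + sizes[obj] <= capacity:
                groups[i].append(obj)
                loads[i] += sizes[obj]
                break
        else:
            # No group can accept the object: create a new one.
            groups.append([obj])
            loads.append(sizes[obj])
    return groups

groups = first_fit(range(5), sizes=[2, 2, 1, 2, 1], capacity=3)
```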
RP-Random Heuristic. This heuristic is used to group the operations onto machines, solving the resource planning problem (RP). In the RP-Random Heuristic, each object to group is connected to an operation and carries all the characteristics of this operation:
• the processing time (an average operating time over all machines belonging to the machine type and/or an operating time specific to each machine able to perform the operation).
CF-Random Heuristic. This heuristic is applied to group the machines into cells, solving the cell formation problem (CF). In the CF-Random Heuristic, each group is connected to a specific cell with a maximum size, while each object is connected to a specific machine with all its associated characteristics:

p = (MaxGroups − NbGroups) / MaxGroups

where MaxGroups is the maximum number of allowed groups and NbGroups the actual number of used groups.
This probability is used to allow the creation of solutions with different numbers of groups. The probability p of creating a new group before assigning the object to a group decreases with the number of created groups. The number of used groups will therefore not necessarily be minimal: solutions will randomly contain different numbers of groups, except when the capacity requires otherwise.
Once a new group has possibly been created, the list of used groups is randomised. After this step, the search for a group proceeds as in the first fit heuristic.
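A sketch of this stochastic group creation, assuming a simple list-of-groups representation (our own illustration of the probability p defined above):

```python
import random

# Before an object is assigned, a new group is opened with probability
# p = (MaxGroups - NbGroups) / MaxGroups, and the group list is randomised.

def maybe_create_group(groups, max_groups, rng):
    p = (max_groups - len(groups)) / max_groups
    if len(groups) < max_groups and rng.random() < p:
        groups.append([])        # open a new, empty group
        rng.shuffle(groups)      # the list of used groups is then randomised
    return groups

rng = random.Random(1)
groups = [["m1"], ["m2"]]
maybe_create_group(groups, max_groups=4, rng=rng)
```

As more groups are created, p shrinks, so the number of groups varies between solutions without exploding.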
6.4.1.4 Reconstruction
The Random heuristics (RP or CF) are the basic heuristics, always available for the initialisation or the reconstruction. The cell formation problem is a simple grouping problem: the Random Heuristic CF can be used to assign randomly the objects/machines left unassigned, respecting the hard constraints (the limit on the cell size and the maximum number of available cells). For the resource planning problem, the objects/operations left unassigned are allocated to specific groups/machines, respecting the chromosome's vector of selected processes as well as the capacity and availability constraints.
6.4.2.1 Initialisation
During the initialisation phase, the flow matrix φmn (see Section 4.3.3) is computed from the allocation of the operations to the machines recorded in the RP-part. On the basis of the RP-solution, the traffic between machines can be evaluated according to the selected process of each part.
All machines used in the RP-part are put in a randomised list of objects/machines, which is then treated in order. Two quantities are computed to find the best group/cell for each object/machine:
• the cell c̃ with which the current object/machine has the maximum flow φmaxCell;
• the machine ñ, not yet assigned, with which the current object/machine has the maximum flow φmaxMach.
If the current object/machine has a greater flow with the cell c̃ than with the not yet assigned machine ñ, it is assigned to the cell c̃. If, on the contrary, the greater flow is the one with the not yet assigned machine ñ, the current object/machine is assigned to a new group/cell. This is possible only while the maximum number of groups/cells has not been reached; if no new group/cell can be created, the object/machine is grouped randomly with the Random Heuristic, priority being given to a group/cell with two free places. Thanks to this priority, the not yet assigned object/machine can be assigned later to the same cell.
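The decision rule can be sketched as follows. The tie-breaking in favour of the existing cell and the simple fallback are our assumptions; the real heuristic falls back on the Random Heuristic with the two-free-places priority:

```python
# Sketch of the assignment rule of the Flow Heuristic CF: compare the best
# flow towards an existing cell with the best flow towards an unassigned
# machine, and open a new cell for the flow-heavy pair when allowed.

def assign_machine(machine, cells, unassigned, flow, max_cells):
    cell_flows = [sum(flow(machine, m) for m in cell) for cell in cells]
    phi_max_cell = max(cell_flows, default=0)
    phi_max_mach = max((flow(machine, m) for m in unassigned), default=0)
    if cells and phi_max_cell >= phi_max_mach:
        cells[cell_flows.index(phi_max_cell)].append(machine)
    elif len(cells) < max_cells:
        cells.append([machine])          # new cell for the flow-heavy pair
    else:
        cells[0].append(machine)         # simplified fallback (random in the text)
    return cells

# Flows of the running example: D exchanges with A and B, E with B and C.
flows = {("D", "A"): 1, ("D", "B"): 1, ("E", "B"): 1, ("E", "C"): 1}
def flow(a, b):
    return flows.get((a, b), 0) + flows.get((b, a), 0)

cells = assign_machine("D", [], ["A", "B", "C", "E"], flow, max_cells=2)
cells = assign_machine("E", cells, ["A", "B", "C"], flow, max_cells=2)
```

As in the illustration of the next section, MD and ME each open a new cell because their strongest flows are with machines that are not yet assigned.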
6.4.2.3 Illustration
The Flow Heuristic CF is illustrated with the instance presented in the encoding Section 6.2. The heuristic is applied to construct the CF-part once the RP-part has been constructed with the Random Heuristic RP. The RP-part is schematised in Figure 6.4. The proposed solution uses the first process of each part. In this solution, the machine MF is not used; operations O5 and O11 are allocated to machine MC, operations O2 and O6 to machine MB, operations O7 and O10 to machine ME, operation O1 to machine MD and, finally, operation O0 to machine MA. The operations O3, O4, O8 and O9 are left unassigned because they belong to an unused process.
[Figure 6.4: the RP-solution constructed by the Random Heuristic RP.]
To simplify the problem, all operating times are equal to 1 and the production volume of each part is also equal to 1. In this way, each arrow represents one flow, i.e. one transfer of a part between two machines. The flow matrix can then be computed for this RP-solution (see Table 6.1).
φmn   A   B   C   D   E   F
A     ·   ·   ·   1   ·   ·
B     ·   ·   ·   ·   1   ·
C     ·   1   ·   ·   ·   ·
D     ·   1   ·   ·   ·   ·
E     ·   ·   1   ·   ·   ·
F     ·   ·   ·   ·   ·   ·

Table 6.1: Flow matrix for the RP-solution found by the Random Heuristic RP.
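The matrix of Table 6.1 can be recomputed mechanically: with unit operating times and volumes, each pair of consecutive operations of a selected process contributes one unit of flow between their machines. The helper below is our own illustration, using the machine assignment described above:

```python
# Compute the machine-to-machine flow matrix of an RP-solution, assuming
# unit operating times and unit production volumes.

def flow_matrix(routings, assignment):
    """routings: operation sequences of the selected processes;
    assignment: operation -> machine."""
    phi = {}
    for ops in routings:
        for a, b in zip(ops, ops[1:]):
            key = (assignment[a], assignment[b])
            phi[key] = phi.get(key, 0) + 1
    return phi

# Machine assignment of Figure 6.4 and the three selected processes.
assignment = {0: "A", 1: "D", 2: "B", 5: "C", 6: "B", 7: "E", 10: "E", 11: "C"}
phi = flow_matrix([[0, 1, 2], [5, 6, 7], [10, 11]], assignment)
```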
[Figure: the machines MA to ME with their allocated operations, before the cell formation.]
All objects are treated following a randomised list. The first object/machine, MD, is considered: there is no flow between this machine and any cell, while there is one flow with machine MA and one with machine MB. Machine MD is therefore put in a new cell, C1. The next object/machine, ME, is selected and inserted in a new cell, C2, because there is no flow between machine ME and cell C1, whereas there are flows between machine ME and machines MB and MC. Figure 6.6 shows this configuration.
Figure 6.6: Assignment of the first three objects/machines by the Flow Heuristic CF.
Figure 6.7: Final assignment of the objects/machines into cells by the Flow Heuristic CF.
6.4.2.4 Reconstruction
To apply the Flow Heuristic CF to the reconstruction of a CF-part of the chromosome, the flow matrix is computed on the basis of the RP-solution. The objects/machines left unassigned are reinserted as if the assignment process had been interrupted: they are put in a randomised list, and each object/machine is assigned to the cell maximising the flow, or to a new cell if the maximum flow is found with an object/machine not yet assigned.
6.4.3.1 Initialisation
During the initialisation phase, the process selection is done first. All operations belonging to the selected processes are put in a list, and the objects/operations in this list are treated in random order.
Next, a vector of priority cells is defined: a priority cell Ck is assigned to each process k. This priority cell is the cell that can perform the maximum number of operations belonging to the process.
This heuristic is applied during the initialisation phase because the choice of processes is based on the machine grouping. As explained in Section 6.5.2, when the heuristic is called to reconstruct the chromosome, the process selection has already been made.
6.4.3.3 Illustration
As for the Flow Heuristic CF, the Process Heuristic RP can be illustrated. Figure 6.8 represents a feasible grouping given by the Random Heuristic CF. This assignment is completely random and respects the constraints on the maximum number of groups/cells and on the maximum size of the groups (3 in this case).
[Figure 6.8: a random grouping of the machines MA to MF into the cells C1 and C2 given by the Random Heuristic CF.]
For each process, the priority cell is computed by maximising the proportion of the operations of the process that the cell can perform (#operations in the cell / #operations), as shown in Figure 6.9. The processes are selected randomly, and the operations belonging to the selected processes are put in a randomised list (Figure 6.10).
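The priority-cell computation can be sketched as follows. The machine capabilities used here are illustrative, not the actual instance of Figure 6.8:

```python
# For each process, pick the cell that can perform the largest proportion of
# its operations (#operations in cell / #operations).

def priority_cell(process_ops, can_do, cells):
    """process_ops: operations of the process; can_do: op -> set of capable
    machines; cells: cell name -> set of machines. Returns the best cell."""
    def coverage(cell):
        machines = cells[cell]
        return sum(1 for op in process_ops if can_do[op] & machines) / len(process_ops)
    return max(cells, key=coverage)

cells = {"C1": {"MB", "MC", "MD"}, "C2": {"MA", "ME", "MF"}}
can_do = {0: {"MA"}, 1: {"MD"}, 2: {"MB"}}
best = priority_cell([0, 1, 2], can_do, cells)
```

Here cell C1 can perform two of the three operations of the process, against one for C2, so C1 becomes the priority cell.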
From this randomised list of objects/operations to insert on specific machines, the search can start. For the first object/operation (O10) of the list, the search for an acceptable group/machine begins with the machines of the priority cell, C1. No machine of cell C1 can accept this object/operation, so the search continues in cell C2, where machine ME is found. A similar search is done for the remaining objects/operations.
Figure 6.9: Computation of the priority cell for each process and selection of the process.
Figure 6.11: Assignment of the first three objects/operations by the Process Heuristic RP.
Figure 6.12: Final assignment of the objects/operations by the Process Heuristic RP.
Figure 6.12 represents the final assignment found by the Process Heuristic RP. This RP-solution contains one inter-cellular flow, from cell C2 to cell C1. Machine MF is not used and will be removed from the final CF-solution: a valid solution for both problems cannot include an empty cell or an empty machine. This choice is made because there is no reason to keep this machine in cell C2 rather than in another cell. A possible reason to keep a machine in a cell would be that it increases the flexibility of the cells, allowing the production to be reoriented in case of failure; this choice can be made at the end of the resolution by computing a flexibility parameter.
With a different process selection (Figure 6.13) and another arrangement of the list of objects/operations (Figure 6.14), the final solution can be completely different. In this case (Figure 6.15), the solution contains no inter-cellular flow.
Figure 6.13: Computation of the priority cell for each process and selection of the
process.
Figure 6.15: Final assignment of the objects/operations by the Process Heuristic RP.
6.4.3.4 Reconstruction
The specialised heuristic for the RP-part of the chromosome is slightly different in the reconstruction phase. The process selection of the chromosome depends on the crossover (explained in Section 6.5). After a crossover, the vector of selected processes is complete: there is one selected process for each part. After a mutation, on the other hand, the vector of selected processes can contain some parts without a selected process.
In these cases, the reconstruction phase for the RP-part begins with a verification of the selected processes. All assigned objects/operations must belong to a selected process; if not, the object/operation is removed from the solution before the reassignment phase. If there is no selected process for a part, the priority cell (presented above) is computed to choose the best process for the CF configuration given by the CF-part.
As in the heuristic used for the initialisation, each object/operation is preferably assigned to a machine belonging to the priority cell of the treated process.
6.4.5.1 Reconstruction
The Harhalakis Heuristic is used in the RP-Mogga with an integrated CF heuristic. In this case, the CF-part of the chromosome is always completely emptied before all objects/machines are reassigned.
6.4.6 CF-Mogga
To construct the CF-part of a chromosome with the Simogga oriented CF, a new Simogga is run with the following parameters:
1. The population consists of 8 or 16 chromosomes, depending on the problem size.
2. The number of generations is set to 30.
3. The RP operator rates are set to 0% and the CF operator rates to 100%.
4. During the initialisation, the RP-parts are copied from the treated chromosome of the principal Simogga.
5. During the initialisation, the CF-parts are constructed with the Flow Heuristic CF.
6.4.6.1 Reconstruction
As for the previous integrated heuristic, the CF-part is cleared before a new Simogga oriented CF is run to reassign all objects/machines.
6.4.7 RP-Mogga
To construct the RP-part of a chromosome with the Simogga oriented RP, a new Simogga is run with the following parameters:
3. The RP operator rates are set to 100% and the CF operator rates to 0%.
4. During the initialisation, the CF-parts are copied from the treated chromosome of the principal Simogga.
5. During the initialisation, the RP-parts are constructed with the Process Heuristic RP.
6.4.7.1 Reconstruction
As for the previous integrated Mogga, the RP-part is deleted before a new Simogga oriented RP is run to reassign all objects/operations.
6.5 Operators
The important point about the genetic operators is that they work on the group part of the chromosomes. The standard object part of the chromosomes serves to identify which objects actually form which group. In particular, this implies that the operators must handle chromosomes of variable length, with genes representing the groups.
6.5.1 Selection
The tournament strategy is chosen to select the chromosomes on which the operators are applied. The idea is to create an ordered list of the individuals, with the best solution always at the top and the others ordered according to the method described below. The upper part of the list is used when "good" chromosomes are needed, the lower part for "bad" ones. An initial set of all the individual identifiers is established. Two identifiers are drawn randomly from this set and their fitness values are compared: the better of the two (the one corresponding to the individual with the better fitness value) is reinserted into the set, while the other is pushed onto the list. This is repeated until all the identifiers of the set are in the list, which yields an ordered list of individuals with the best one at the top. The operators can then rely on the chromosomes ranked in the top half being used as parents for the crossovers, their children replacing the chromosomes of the bottom half.
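A sketch of this tournament ordering, assuming a minimising fitness (the data are illustrative):

```python
import random

# Repeatedly draw two identifiers, reinsert the winner into the set and push
# the loser onto the front of the list: the best individual, which never
# loses a duel, ends up at the top.

def tournament_order(fitness, rng):
    """fitness: id -> value (lower is better). Returns ids, best first."""
    pool = list(fitness)
    ordered = []
    while len(pool) > 1:
        a, b = rng.sample(pool, 2)
        loser = a if fitness[a] > fitness[b] else b
        pool.remove(loser)
        ordered.insert(0, loser)       # losers accumulate towards the bottom
    ordered.insert(0, pool[0])         # the last survivor is the best
    return ordered

rng = random.Random(0)
order = tournament_order({"c1": 3.0, "c2": 1.0, "c3": 2.0}, rng)
```

Whatever the random draws, the fittest individual ("c2" here) is guaranteed to head the list; the order of the others is stochastic.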
6.5.2 Crossover
At each generation, the crossover is applied after the tournament selection. Half of the chromosomes are crossed, and their children replace the worst ones. The crossover process is the following: two parents are selected in the top half of the list, and the basic circular crossover (explained below) is applied between them to create two children E1 and E2. This process is repeated until half of the chromosomes at the top of the list have been crossed.
The basic circular crossover is applied to the RP problem and to the CF problem with probabilities CrossRP% and CrossCF% respectively. In this way, if the sum of CrossRP and CrossCF is greater than 100%, the crossover can be applied simultaneously to both parts, as explained in Section 6.2.2. In practice, for each child a random number rand ∈ [0, 100] is drawn. If rand < CrossRP, the basic circular crossover is applied between the RP-parts and the CF-parts are simply copied. Then, if rand > (100 − CrossCF), the basic crossover is also applied between the CF-parts.
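The part-selection rule can be sketched as follows (the 70% rates are only an example):

```python
import random

# Decide which parts of a child are crossed: with CrossRP + CrossCF > 100,
# a single draw can select both parts at once.

def parts_to_cross(cross_rp, cross_cf, rng):
    r = rng.uniform(0, 100)
    do_rp = r < cross_rp              # rand < CrossRP  -> cross the RP-parts
    do_cf = r > 100 - cross_cf        # rand > 100-CrossCF -> cross the CF-parts
    return do_rp, do_cf

rng = random.Random(42)
# With 70% on each part, a draw between 30 and 70 crosses both parts.
results = [parts_to_cross(70, 70, rng) for _ in range(1000)]
both = sum(1 for rp, cf in results if rp and cf)
```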
The two parents are encoded as follows (machine groups with their operations, followed by the cell grouping):

P1   C      D    E    A    F   :  p     q
     9,11   8    10   3    4      C,F   A,D,E

P2   C   D     B    E    A      :  q     p
     9   1,8   2    10   0,11      A,E   B,C,D

[Figures 6.16 and 6.17: cell layouts of parents P1 and P2.]
The basic circular crossover is explained below and illustrated with two parents P1 and P2 (shown in Figures 6.16 and 6.17). Parent 1 has the process vector 1 0 0 1 1, while the process selection of parent 2 is the vector 0 1 0 1 1. This example is a small case with alternative processes and alternative routings. Since the number of cells is limited to 2, illustrating the crossover on the CF-part would not be relevant or significant; in this example, the basic circular crossover is therefore applied between the RP-parts of P1 and P2.
The complete coding of this crossover is detailed in Appendix C. The steps of the basic circular crossover are the following:
Step 1  A crossing site is selected in each parent (see Figure 6.18). This crossing site is defined by a two-point crossover (two crossing sections) and can include internal groups as well as groups located at the extremities of the chromosome.
         1              2
P1   C      D    E    A    F   :  p     q
     9,11   8    10   3    4      C,F   A,D,E

         2                   1
P2   D   C     B    E    A      :  q      p
     9   1,8   2    10   0,11      B,C,D  A,E
Step 2  The groups selected by the crossing site of parent 1 are inserted at the crossing site of parent 2 (see Figure 6.19). At this stage, some objects may appear in more than one group.
In Figure 6.19, the group A (in bold) of parent 1 is inserted in parent 2. The group/machine MA is initially in cell C1 of parent 2; the cell arrangement is not modified by a crossover between the RP-parts. At this stage, the machine MA appears twice.
         2                        1
E1   D   C     A    B    E    A      :  q      p
     9   1,8   3    2    10   0,11      B,C,D  A,E

[Figure 6.19: child E1 after insertion of the crossing site of parent 1; the machine MA appears twice.]
Step 3  The objects cannot appear twice in one solution, and the newly injected objects have priority: existing groups containing objects that are already in the inserted groups are eliminated. Groups left empty are also removed from the solution.
The machine MA initially in parent 2 is thus removed from the solution (see Figure 6.20). The objects (operations O11 and O0) contained in this group are left unassigned and must be reinserted in the solution.
         2                        1
E1   D   C     A    B    E    A      :  q      p
     9   1,8   3    2    10   0,11      B,C,D  A,E

Figure 6.20: Similar groups removed after the injection of the crossing site of parent 1 into parent 2.
Step 4  The validity of the solution is verified against the hard constraints of the cell formation problem. The processes used can differ between the two parents, and two processes of the same part cannot coexist in the solution; moreover, a specific machine cannot appear twice. Compatibility is therefore tested between the inserted groups and the existing groups: if existing groups contain operations belonging to another process, or correspond to already inserted machines, they are removed and their remaining objects left unassigned.
         2                   1
E1   D   C     A    B    E      :  q      p      11
     9   1,8   3    2    10        B,C,D  A,E

         2              1
E1   D   C     A    E      :  q      p      11, 4
     9   8     3    10        B,C,D  A,E

[Figures 6.21 and 6.22: intermediate states of child E1 during the compatibility check; operations 11 and 4 are left unassigned.]
Step 5  The objects left unassigned are reinserted into the solution during the reconstruction phase (explained in Section 6.4.3.4).
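Steps 1 to 3 can be sketched on a simple representation (a chromosome as a list of (machine, operations) pairs). Steps 4 and 5, the process-compatibility check and the reconstruction, are omitted, and the crossing sites are passed explicitly rather than drawn at random:

```python
# Sketch of steps 1-3 of the basic circular crossover: inject the groups of
# parent1's crossing site into parent2, then remove parent2 groups that hold
# duplicated objects or duplicated machine names; their remaining objects are
# returned as unassigned, to be reinserted later by a heuristic.

def circular_crossover(parent1, parent2, site1, insert_at):
    inserted = parent1[site1[0]:site1[1]]
    injected = {o for _, objs in inserted for o in objs}
    inserted_names = {n for n, _ in inserted}
    child = parent2[:insert_at] + inserted + parent2[insert_at:]
    result, unassigned = [], []
    for idx, (name, objs) in enumerate(child):
        is_inserted = insert_at <= idx < insert_at + len(inserted)
        if not is_inserted and (injected & set(objs) or name in inserted_names):
            # Duplicated group: drop it, keep its non-duplicated objects aside.
            unassigned.extend(o for o in objs if o not in injected)
        else:
            result.append((name, list(objs)))
    return result, unassigned

# Parents of the example: the group A of P1, holding operation 3, is injected
# into P2, whose own group A holds operations 0 and 11.
p1 = [("C", [9, 11]), ("D", [8]), ("E", [10]), ("A", [3]), ("F", [4])]
p2 = [("C", [9]), ("D", [1, 8]), ("B", [2]), ("E", [10]), ("A", [0, 11])]
child, unassigned = circular_crossover(p1, p2, site1=(3, 4), insert_at=2)
```

As in the text, the duplicated machine MA of parent 2 is eliminated and operations O0 and O11 are left unassigned for the reconstruction phase.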
         2                      1
E1   D   C     A       E    F   :  q      p
     9   8     3,11    10   4      B,C,D  A,E

Figure 6.23: Child E1 issued from the first crossover and reconstructed by the Process Heuristic RP.
6.5.3 Mutation
The role of the mutation operator is to insert new characteristics into the population in order to extend the search of the genetic algorithm. The operator must nevertheless be defined as the smallest possible modification with respect to the encoding, and it must be applied as sparingly as possible, so that the population evolves mainly through the crossover [67].
To respect these considerations, the mutation is not applied as in a classic genetic algorithm with a probability between 1% and 5%. The mutation is applied when there
         2                      1
E1   D   C     A       E    F   :  q    p
     9   8     3,11    10   4      C,D  A,E,F

Figure 6.24: Final child E1 issued from the first crossover and reconstructed by the Flow Heuristic CF.
This mutation implies that the objects removed from the groups are reinserted to construct a new solution. For the CF-part, the problem is a classical grouping problem. For the RP-part, however, the problem is more complex, because the process selection is made at the same time as the grouping of the operations onto the machines. The classical mutation operator does not modify the process selection: the objects/operations belonging to the selected processes are simply reinserted after being removed from their groups/machines.
The mutation operator is therefore adapted to take this feature into account and to force the use of another process for a few parts. This feature is applied only when the RP-part is touched by the mutation. When a group/machine is removed from the RP-solution, the selected process is deleted from the process selection: the entry of any process including a removed object/operation is put to 0 in the selected process vector. This modification implies the suppression of all objects/operations belonging to these removed processes. The specific heuristic used to reconstruct the RP-part will then start by choosing a new process for the parts left without a selected process.
For this hard mutation, there is no limitation on the number of groups to remove. The hard mutation is illustrated with the instance of Figure 6.25, where the group/machine MA is removed from the RP-part (Figure 6.26).
P   A     D     B   C    E    :  p      q
    0,9   1,8   2   11   10      B,C,E  A,D

P   D    B    C    E    :  p      q      5, 3, 6, 7, 4
    11   10                B,C,E  A,D

[Figures 6.25 and 6.26: the RP-solution before and after the removal of the group/machine MA by the hard mutation.]
To finish the reconstruction phase, the CF-part of the child must be reconstructed with the Flow Heuristic CF. Objects/machines not used in the RP-part are eliminated from the solution (object/machine MD), and the remaining objects/machines are reinserted (object/machine MF is reinserted in the group/cell C2) while minimising the inter-cellular flow. The final solution (Figure 6.28) contains three flows inside the cells and no flow between cells.
This illustration shows the impact of the number of groups removed by the hard mutation: removing a single group of the RP-part modifies the chromosome strongly. The magnitude of the modification is a function of the number of different parts allocated to the removed machines.
6.5.4 Reconstruction
The reconstruction process depends on the construction property of the chromosome (RP ⇒ CF or CF ⇒ RP). A mutated chromosome keeps its property. A mutation of the best chromosome, however, cannot simply copy the construction property, because
[Figures 6.27 and 6.28: the chromosome after reconstruction of the RP-part and the final solution after reconstruction of the CF-part, with machine MD eliminated and machine MF reinserted in cell C2.]
the reconstruction of this chromosome would always proceed in the same order. For this
reason, when the best chromosome is mutated, the construction property
is copied with a probability of 50%; otherwise, the parameter randomly takes
the value RP => CF or CF => RP. Moreover, at the end of each
generation, the population is analysed: if more than 80% of the population has the
same property, the property of each chromosome is changed with a probability of 50%.
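A minimal sketch of this property-assignment policy, assuming a simple list-of-strings representation of the population's construction properties:

```python
import random

RP_CF, CF_RP = "RP=>CF", "CF=>RP"

def property_after_best_mutation(parent_property):
    """The best chromosome's construction property is copied with
    probability 50%; otherwise it is redrawn uniformly at random."""
    if random.random() < 0.5:
        return parent_property
    return random.choice([RP_CF, CF_RP])

def rebalance_population(properties):
    """End-of-generation check: if more than 80% of the population shares
    one property, each chromosome's property is flipped with probability 50%."""
    for prop in (RP_CF, CF_RP):
        if properties.count(prop) > 0.8 * len(properties):
            return [p if random.random() < 0.5 else
                    (CF_RP if p == RP_CF else RP_CF)
                    for p in properties]
    return properties
```

A balanced population is left unchanged; only a population dominated by one property (more than 80%) is rebalanced.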
For the crossover, a crossed chromosome depends on the parent used. The created
child corresponds to parent 2, in which the site section of parent 1 is inserted.
The construction property of parent 2 is copied.
For the reconstruction process, if the chromosome has the RP => CF property,
the heuristic RP is run. If the CF-part has been modified, the Random Heuristic
RP is applied; otherwise the Process Heuristic RP is used. The CF-part is then
reconstructed with the Flow heuristic CF.
On the other hand, if the chromosome has the CF => RP property, the heuristic
CF is run to reconstruct the CF-part first. As in the previous case, if the RP-part
has been modified, the Random Heuristic CF is used, while the Flow Heuristic CF
is applied otherwise.
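The dispatch logic of this section can be sketched as below. The dict-based chromosome is illustrative, and the final Process Heuristic RP step for the CF => RP case is an assumption made for symmetry (the corresponding sentence is cut at the page break); the heuristic names follow the text.

```python
def reconstruct(chromosome, heuristics):
    """Dispatch sketch. `chromosome` is a dict carrying the construction
    property and which part an operator modified; `heuristics` maps
    heuristic names to callables (stand-ins for those of the thesis)."""
    prop = chromosome["property"]
    modified = chromosome["modified_part"]
    if prop == "RP=>CF":
        order = ["Random RP" if modified == "CF" else "Process RP", "Flow CF"]
    else:  # "CF=>RP": the CF-part is reconstructed first
        order = ["Random CF" if modified == "RP" else "Flow CF", "Process RP"]
    for name in order:
        heuristics[name](chromosome)
    return order

# No-op stand-ins for the four specialised heuristics.
noop = {n: (lambda c: None)
        for n in ("Random RP", "Process RP", "Flow CF", "Random CF")}
steps = reconstruct({"property": "RP=>CF", "modified_part": "CF"}, noop)
```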
128 CHAPTER 6. IMPLEMENTATION
6.6 Improvements
Different modifications and optimisations have been applied to improve the results
presented in section 7.3. A local optimisation has been developed to optimise the use of
the priority process (a process that can be achieved completely inside a cell). Moreover,
several modifications are proposed for the mutation.
6.6.1 Local Optimisation
1. If the CF-part is constructed, the vector of priority cells for each process is
computed. Another vector containing the number of cells able to achieve
each part completely is also filled in.
2. For each part, a verification is made: if the priority cell is not the cell used,
the operations allocated to a cell other than the priority cell are removed from
their machines.
3. The process selection must contain the processes owning a priority cell able to
achieve the part completely.
The Process Heuristic RP is then run to complete the RP-solution. For all parts
with no priority process defined, the classic process selection is applied. This
local optimisation improves the search for the best solution: it increases the efficacy
of the algorithm and likewise decreases the resolution time.
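The three steps above can be sketched as follows; the dictionaries (operation-to-cell map, priority cell per part, completability flag) are illustrative stand-ins for the vectors described in the text:

```python
def local_optimisation(parts, ops_cell, priority_cell, completable):
    """Free every operation allocated outside the priority cell of its
    part (steps 1-3 above), so the Process Heuristic RP can re-place
    the freed operations inside that cell."""
    freed = {}
    for part in parts:
        cell = priority_cell[part]              # step 1: priority cell
        if completable[part]:                   # step 3: fully achievable there
            freed[part] = [op for op, c in ops_cell[part].items() if c != cell]
            for op in freed[part]:
                del ops_cell[part][op]          # step 2: remove misplaced ops
    return freed

# Illustrative instance: part P1's priority cell is C1, but O2 sits in C2.
ops_cell = {"P1": {"O1": "C1", "O2": "C2"}}
freed = local_optimisation(["P1"], ops_cell, {"P1": "C1"}, {"P1": True})
```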
6.6.2 Mutation
In the mutation used to modify the chromosomes and increase the diversity of the
population, the following changes have been made:
• With a probability of 50%, the mutation rates of the operator are modified to
90% on the RP-part and 90% on the CF-part.
• If the diversity of the population is less than 50%, or less than 70% with a
probability of 50%, a strong mutation is applied; otherwise, a simple mutation
is made.
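These two rules can be sketched as follows; the 10% base rate in `mutation_rates` is an assumption, since the text does not state the rates used when the 90% values are not drawn:

```python
import random

def choose_mutation(diversity):
    """diversity is the population diversity in percent. Below 50% a
    strong mutation is always applied; between 50% and 70% it is applied
    with probability 50%; otherwise a simple mutation is made."""
    if diversity < 50 or (diversity < 70 and random.random() < 0.5):
        return "strong"
    return "simple"

def mutation_rates(base_rate=10):
    """With probability 50% the operator's mutation rates are raised to
    90% on the RP-part and 90% on the CF-part; the base rate used
    otherwise is an assumption."""
    if random.random() < 0.5:
        return 90, 90
    return base_rate, base_rate
```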
6.7. STOPPING CONDITION 129
In the mutation of the best chromosome, when the best chromosome has not
changed for 10 generations, the strong mutation is applied as described above. This
strong mutation permits a random solution to be reinserted into the population.
Sometimes, the best solution stagnates in a local optimum without evolving.
To force a change of the best solution, an analysis is made before the mutation
(the PreMutation). Before running the mutation, the vector of priority processes is
updated. If each part has a priority process, the CF-part permits the construction of the
ideal RP-part. In this case, the mutation is applied only to the RP-part.
The improvement of the results with these modifications is shown in Table 7.15
in the next chapter (section 7.4).
6.7 Stopping Condition
1. When the number of generations increases, the quality of the GA solution should also
increase (because a larger portion of the search space is analysed).
2. When the number of generations decreases, the computational time needed also
decreases.
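This trade-off is resolved by capping the number of generations while stopping early once the ideal grouping is reached (as noted in chapter 7, the algorithm stops when 100% of intra-cellular flow is found). A minimal sketch:

```python
def should_stop(generation, best_flow, max_generations=100):
    """Stopping condition sketch: the run ends as soon as the ideal
    grouping (100% intra-cellular flow) is found, and in any case once
    the generation budget is spent (100 generations in the validation)."""
    return best_flow >= 100.0 or generation >= max_generations
```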
6.8 Conclusion
The Simogga has the advantage to solve simultaneously two problems. The principle
of the algorithm is based on the Mogga. The main modifications of the Mogga
are explained in this chapter while the flowchart of each step of the algorithm is
presented in appendix C.
This chapter describes in detail the main characteristics of the Simogga:
encoding, heuristics for the initialisation and reconstruction phases, operators and
stopping condition. In the section on heuristics, some of them concern the adaptation
of the Simogga to the limited versions of the algorithm presented in chapter
5. These explanations permit the transformation of the Simogga into an RP-Mogga with an
integrated CF module or a CF-Mogga with an integrated RP module.
In this chapter, many parameters are used: operator rates, probabilities of using the
heuristics during the initialisation phase, etc. All these parameters are justified in
the following chapter, which presents the validation phase.
Chapter 7
Validation
7.1 Introduction
This chapter is dedicated to the validation of the algorithm. Firstly, the best heuristics
are selected by analysing each problem individually. On the basis of these selected
heuristics, the algorithm Simogga is validated with one criterion: the maximisation
of the traffic within cells. After these comparisons, some adaptations are proposed
to optimise the algorithm. The simultaneous resolution is then compared with the
sequential or successive resolution with an integrated module (a heuristic or another
Gga). The study and the explanation of all parameters are finally presented.
A genetic algorithm contains many random components. To evaluate
the effect of the parameters or heuristics rather than of the randomness, the algorithm is
always run 10 times. All figures and comparative tables in this section are based on
the average results computed over these 10 runs.
A set of ideal case studies has been created to validate and optimise the heuristics,
the operators and all the fixed parameters of the Simogga. All these cases
are ideal in the sense that they can be grouped into completely independent cells without
inter-cellular traffic. The set comprises 20 cases including between 4 and 40 machines.
Several variants of these cases have been constructed taking the rate of alternative
routings (RA) and the rate of alternative processes (PA) into account. These
parameters, RA and PA, vary between 0% and 100% to generate cases with fewer or
more alternativities. All combinations of these parameters have been used to create
a large variety of different cases (RA and PA respectively taking the values 0, 20,
60 and 100). For instance, a case with RA = 20 and PA = 60 means that 20% of the
machine types include more than one machine and that 60% of the parts have more
than one process route available. For such cases, the RP problem deals with
the search of the best process for these 60% of parts and with the search of the best
machine among the 20% of machine types. A total of 320 cases (20 cases with 16
RA-PA variants) have been used to validate all the parameters of the Simogga.
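The variant grid can be enumerated directly; the case names are purely illustrative:

```python
from itertools import product

# RA and PA each take the values 0, 20, 60 and 100 (percent), giving
# 16 RA-PA variants per base case and 20 x 16 = 320 case studies.
levels = (0, 20, 60, 100)
variants = list(product(levels, levels))
cases = [(f"CI{i}", ra, pa) for i in range(1, 21) for ra, pa in variants]
```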
Table 7.1 gives the size of the RP-search space and CF-search space for all the
ideal case studies corresponding to the four sets of alternativity ((0 − 0), (0 − 100),
(100 − 0) and (100 − 100)). All the results presented in this thesis are based on these
cases.
132 CHAPTER 7. VALIDATION
(RA, PA)
        (0-0)              (0-100)               (100-0)               (100-100)
        RP   CF            RP         CF         RP         CF         RP          CF
CI1     1    3 × 10^00     3 × 10^02  3 × 10^00  1 × 10^02  3 × 10^00  1 × 10^04   3 × 10^00
CI2     1    1 × 10^01     1 × 10^03  1 × 10^01  2 × 10^06  1 × 10^01  1 × 10^09   1 × 10^01
CI3     1    2 × 10^01     4 × 10^03  2 × 10^01  4 × 10^06  2 × 10^01  4 × 10^10   2 × 10^01
CI4     1    3 × 10^02     1 × 10^05  3 × 10^02  8 × 10^06  3 × 10^02  1 × 10^16   3 × 10^02
CI5     1    6 × 10^03     5 × 10^05  6 × 10^03  1 × 10^11  6 × 10^03  2 × 10^19   6 × 10^03
CI6     1    1 × 10^05     2 × 10^07  1 × 10^05  1 × 10^28  1 × 10^05  3 × 10^38   1 × 10^05
CI7     1    2 × 10^04     8 × 10^06  2 × 10^04  9 × 10^18  2 × 10^04  1 × 10^24   2 × 10^04
CI8     1    3 × 10^06     3 × 10^07  3 × 10^06  6 × 10^30  3 × 10^06  8 × 10^44   3 × 10^06
CI9     1    5 × 10^08     9 × 10^09  5 × 10^08  4 × 10^41  5 × 10^08  1 × 10^50   5 × 10^08
CI10    1    1 × 10^11     4 × 10^12  1 × 10^11  6 × 10^52  1 × 10^11  3 × 10^62   1 × 10^11
CI11    1    2 × 10^13     7 × 10^13  2 × 10^13  6 × 10^65  2 × 10^13  2 × 10^82   2 × 10^13
CI12    1    1 × 10^06     8 × 10^06  1 × 10^06  2 × 10^38  1 × 10^06  3 × 10^45   1 × 10^06
CI13    1    3 × 10^09     3 × 10^07  3 × 10^09  5 × 10^48  3 × 10^09  2 × 10^64   3 × 10^09
CI14    1    5 × 10^12     7 × 10^10  5 × 10^12  7 × 10^60  5 × 10^12  3 × 10^78   5 × 10^12
CI15    1    1 × 10^16     7 × 10^13  1 × 10^16  2 × 10^65  1 × 10^16  3 × 10^88   1 × 10^16
CI16    1    3 × 10^19     4 × 10^16  3 × 10^19  3 × 10^74  3 × 10^19  2 × 10^101  3 × 10^19
CI17    1    3 × 10^12     2 × 10^10  3 × 10^12  2 × 10^54  3 × 10^12  2 × 10^69   3 × 10^12
CI18    1    7 × 10^16     7 × 10^13  7 × 10^16  7 × 10^63  7 × 10^16  4 × 10^78   7 × 10^16
CI19    1    2 × 10^21     7 × 10^16  2 × 10^21  1 × 10^81  2 × 10^21  5 × 10^110  2 × 10^21
CI20    1    7 × 10^25     6 × 10^20  7 × 10^25  6 × 10^89  7 × 10^25  6 × 10^114  7 × 10^25
Table 7.1: Size of the RP- and CF-search spaces for the ideal case studies.
• The allocation of each operation to a machine (resource planning problem -
RP).
• The selection of a good process for each part (process selection - PS).
• The grouping of all machines into independent cells (cell formation problem -
CF).
To test these three problems separately, Table 7.2 presents the values used for
the parameters RA and PA to test and to validate each heuristic separately.
                                  RA     PA
Cell formation problem (CF)        0      0
Process selection (PS)             0    100
Resource planning problem (RP)   100      0
Table 7.2: Values of the parameters RA and PA used to test the three problems individually.
When the parameter RA is equal to 100, the RP problem deals
with the search of the best machine for each operation to generate independent
cells. On the contrary, if it is the parameter PA that is equal to 100, the RP problem
must solve a process selection problem: the search bears on the choice of the best
process for each part to generate independent cells.
To reach these separate objectives, four ideal case studies are used. The characteristics
of these cases are presented in Table 7.3.
RA = 0, P A = 0 CI5 CI10 CI15 CI20
No machine types 11 23 29 40
No machines 11 23 29 40
No parts 19 41 45 63
No process 19 41 45 63
No operations 81 183 194 286
No cells 3 4 5 6
Cell size 4 6 6 7
No solution RP 1 1 1 1
No solution CF 5775 9.62 × 10^10 1.14 × 10^16 6.82 × 10^25
RA = 0, P A = 100 CI5 CI10 CI15 CI20
No machine types 11 23 29 40
No machines 11 23 29 40
No parts 19 41 45 63
No process 38 82 90 126
No operations 169 335 387 597
No cells 3 4 5 6
Cell size 4 6 6 7
No solution RP 5.24 × 10^5 4.40 × 10^12 7.04 × 10^13 5.90 × 10^20
No solution CF 5775 9.62 × 10^10 1.14 × 10^16 6.82 × 10^25
RA = 100, P A = 0 CI5 CI10 CI15 CI20
No machine types 8 11 15 19
No machines 11 23 29 40
No parts 18 36 47 58
No process 36 72 94 116
No operations 81 158 209 260
No cells 3 4 5 6
Cell size 4 6 6 7
No solution RP 1.38 × 10^11 6.49 × 10^52 1.76 × 10^65 6.14 × 10^89
No solution CF 5775 9.62 × 10^10 1.14 × 10^16 6.82 × 10^25
RA = 100, P A = 100 CI5 CI10 CI15 CI20
No machine types 8 11 15 19
No machines 11 23 29 40
No parts 18 33 46 63
No process 36 66 92 126
No operations 136 282 405 519
No cells 3 4 5 6
Cell size 4 6 6 7
No solution RP 2.32 × 10^19 2.61 × 10^62 3.01 × 10^88 5.82 × 10^114
No solution CF 5775 9.62 × 10^10 1.14 × 10^16 6.82 × 10^25
Table 7.3: Value of parameters for each case study used to validate the heuristics.
The parameters used for the validation of the Simogga in separate resolutions
are the following:
1. The number of generations is set to 100. The objective is to find the solution
within a minimum number of generations to limit the search time.
2. The population size is set to 16. It is a small population size to solve small and
3. The mutation rate is set to 10. One chromosome is mutated when the best
chromosome is not changed after 10 generations.
7.2.1 Heuristic CF
To validate the CF heuristic, it is necessary that the heuristic
can solve a simple grouping problem without alternativity. Setting the parameters
RA and PA to zero, the Simogga solves only the CF problem. The case
studies used for the validation of the CF heuristic are the four cases (CI5, CI10,
CI15 and CI20) with RA = 0 and PA = 0 presented in Table 7.3. All the operator
rates are equal to 0% for the RP problem. The Simogga then works exactly as a
simple Mogga that solves a CF problem.
Respecting the description of the data in chapter 4, all heuristics tested in this
section try to group nM machines into nC cells respecting the cell size limit (mUB).
Fewer than nC cells may be used. If the capacity constraints permit
the use of fewer than nM machines, the proposed solution does not need to use all the
machines.
Four heuristics have been analysed and compared to validate the final heuristic,
the Flow Heuristic CF. The different heuristics are succinctly described below:
Heur2 The Max Flow Cell Heuristic CF. Using the same principle as for the
Random Heuristic CF, the machine m is allocated to the cell c maximising
the intra-cellular traffic (φm−c).
Heur4 The Random Harhalakis Heuristic CF. For each machine m, the traffic
is computed with all used cells and with all unassigned machines. The
machine with the maximum flow with another machine n (φm−n) or with a cell
c (φm−c) is treated. If φm−n > φm−c, the machine m is grouped with the
machine n in a new group. Otherwise, the machine m is injected into
the cell c with the maximum flow.
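One greedy step of the Random Harhalakis Heuristic CF described above can be sketched as follows; the data structures are illustrative, and the cell-size limit mUB is omitted for brevity:

```python
def harhalakis_step(machine, flow_mm, flow_mc, cells):
    """One greedy step (sketch): flow_mm maps each unassigned machine n to
    its traffic with `machine` (phi m-n); flow_mc maps each cell index to
    its traffic with `machine` (phi m-c)."""
    best_m = max(flow_mm, key=flow_mm.get, default=None)
    best_c = max(flow_mc, key=flow_mc.get, default=None)
    if best_m is not None and (best_c is None or flow_mm[best_m] > flow_mc[best_c]):
        cells.append({machine, best_m})     # group m with the max-flow machine n
        return "new_group"
    cells[best_c].add(machine)              # inject m into the max-flow cell
    return "existing_cell"
```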
Figure 7.1 shows the evolution of the intra-cellular flow for the best solution during the
first 15 generations. One colour is used for each of the four tested heuristics.
Four cases (CI5, CI10, CI15 and CI20) without alternativity, represented respectively
by the symbols (∆, O, ×, ⋄), are drawn for the four heuristics. As shown in the figure,
the third heuristic (continuous line) presents the best evolution for the 4 cases.
Table 7.4 shows the number of generations needed to reach 100% of intra-cellular
flow, and the intra-cellular flow of the best solution found after 100 generations,
with the different heuristics. The symbol “-” in the first column signifies that 100%
of intra-cellular flow is not reached within 100 generations. The Random Heuristic
(Heur1) presents a continual evolution, but the ideal solution is not found after 100
generations. Heuristics 2, 3 and 4 differ mainly in the speed with which they
find the best solution.
Table 7.4: Best values with a same initialisation after 100 generations using 4 CF
heuristics for the cases RA = 0 and P A = 0.
Table 7.5: Best values after 100 generations using 4 CF heuristics for the cases
RA = 0 and P A = 0.
[Figure 7.1 - line chart: intra-cellular flow (30% to 100%) versus generations (0 to 15), one curve per CF heuristic (Heur1 to Heur4) for each of the cases CI5, CI10, CI15 and CI20.]
Figure 7.1: Evolution of the intra-cellular flows for 4 different CF heuristics applied to the same initial population.
7.2. SEPARATED RESOLUTION 137
The third CF heuristic, based on the intra-cellular flow, will be used in the Simogga
to initialise the chromosome once the RP-part has been created. This heuristic
is also used to reconstruct all chromosomes after the genetic operators when the
chromosome must be constructed by the RP-part first. The random CF heuristic
is used in all other cases.
7.2.2 Heuristic RP
To validate the RP heuristic as the CF heuristic has validated, it is necessary to run
the Simogga as a simple Mogga solving the RP problem. To reach this objective,
the parameters RA is set to 100% while the parameter P A is set to 0. Moreover, it
is necessary to give the best solution for the CF problem. The operator rates for the
CF problem are set to 0.
To validate this RP heuristic, the initialisation phase is slightly different compared
to the Simogga. A copy heuristic CF is used. This heuristic just copies the best
solution CF (given as a data of the problem) rather than constructing the CF solution
with a random or a specialised heuristic. 100% of the population is constructed with
the Process Heuristic RP after the copy of the best CF-solution.
Respecting the description of the data in chapter 4, all heuristics compared in
this section try to allocate nO operations onto nM machines respecting the capability
and the capacity constraints. Four heuristics have been analysed and compared to
validate the final heuristic, the Process Heuristic RP. The different heuristics are
succinctly described below:
Heur2 The Max Flow Cell Heuristic RP. Using the same principle as for the
Random Heuristic RP, the operation is allocated onto the machine maximis-
ing the traffic between the previous operation and the next operation.
Heur3 The Cell Flow Heuristic RP. A flow matrix is computed at the initial-
isation and updated after each assignment of an operation on a specific
machine. The search of a machine to allocate the first operation belonging
to the random list proceeds as follows: each machine is tested and the intra-
cellular flow generated by this assignment is computed. The operation will
be allocated to the machine maximising the total intra-cellular flow.
Heur4 The Process Heuristic RP. As for the Cell Flow Heuristic RP, the first
operation of the random list is selected and all machines belonging to the
priority cell are visited. The operation is allocated to the first machine
able to accept the operation.
The second and third heuristics are based on the flow matrix, while the fourth
one is based on the priority cell and the number of operations achievable by a cell.
These last three heuristics can be used only if the machine/cell grouping is already
made.
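The allocation rule of the Cell Flow Heuristic RP (test every machine, keep the one maximising total intra-cellular flow) reduces to a one-line argmax; the flow-evaluation callback here is a hypothetical stand-in for the flow-matrix update described above:

```python
def cell_flow_assign(operation, machines, intra_flow_if_assigned):
    """Cell Flow Heuristic RP allocation rule (sketch): evaluate the total
    intra-cellular flow that would result from assigning `operation` to
    each candidate machine, and keep the machine maximising it."""
    return max(machines, key=lambda m: intra_flow_if_assigned(operation, m))
```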
Table 7.6: Best values with a same initialisation after 100 generations using 4 RP
heuristics for the cases RA = 100 and P A = 0.
[Figure 7.2 - line chart: intra-cellular flow (30% to 90%) versus generations (0 to 15), one curve per RP heuristic (Heur1 to Heur4) for each of the cases CI5, CI10, CI15 and CI20.]
Figure 7.2: Evolution of the intra-cellular flow for the best solution constructed with RP heuristics applied to the same initial population.
Table 7.7: Best values after 100 generations using 4 RP heuristics for the cases
RA = 100 and P A = 0.
[Figure 7.3 - line chart: intra-cellular flow (60% to 95%) versus generations (0 to 10), one curve per process selection (Pr1, Pr2) for each of the cases CI5, CI10, CI15, CI20 and CI21.]
Figure 7.3: Evolution of the best intra-cellular flow for the same initialisation with the random heuristic.
Table 7.8 permits the study of the complete evolution during 100 generations. This
table gives the initial flow, the number of generations needed to reach 100% of intra-cellular
flow and the best flow found after 100 generations. The first row shows the
results of the random process selection, while the second and third rows are based on
the specific process selection. The random process selection is used to initialise the
population in the second row; this row permits a comparison of the evolution between
the two process selections from a similar population. For the third row, on the other hand,
the specific process selection is used to construct the population and to make it evolve.
RA = 0, PA = 100              CI5               CI10
                        Init  Gen  Flow   Init  Gen  Flow
Proc. 1                  76    14   100    76    25   100
Proc. 2 with init. 1     76     7   100    76    12   100
Proc. 2                 100     0   100   100     0   100
                             CI15               CI20
Proc. 1                  75    39   100    67    37   100
Proc. 2 with init. 1     75    15   100    67    21   100
Proc. 2                 100     0   100   100     0   100
Table 7.8: Best values after 100 generations for two process selections.
For a similar initialisation, the best intra-cellular flow is found after half the
number of generations. It is interesting to see that when the best CF-solution is
known (here, a copy of the best CF-solution is used), the best process selection is
found immediately by the Process Heuristic RP with the specific process selection
based on the priority cell. The choice of the RP heuristic was made on the basis
of the cases with RA = 100 and PA = 0, while the process selection has been validated
on the basis of the cases with RA = 0 and PA = 100.
To evaluate the efficiency of the process selection, the cases with RA = 100 and
PA = 100 are used. Table 7.9 and Figure 7.4 present the best intra-cellular flow
for the cases with a large alternativity (CI5, CI10, CI15 and CI20 with RA = 100
and PA = 100). The Process Heuristic RP selected in section 7.2.2 is combined
with the specific process selection and compared with the RP heuristics 1 (Random
Heuristic), 2 and 3 with the random process selection.
Table 7.9: Best values with a same initialisation after 100 generations using 4 RP
heuristics for the cases RA = 100 and P A = 100.
Figure 7.4 and Table 7.9 show that the results for the four case studies are always
good when alternative routings and alternative processes exist in the case.
The fourth heuristic combined with the process selection is optimal as soon as the
optimal CF-solution is known, except for the case CI10. Indeed, this case is initialised
at 45.5% and the population evolves until 98.5% is found at the 61st generation. The
population does not evolve during the last 40 generations.
Table 7.10 shows the values of the initialisation flow, the best number of generations
to find 100% and the best flow found after 100 generations when the specific
heuristics are used for the initialisation. The best value (97.7%) found at the initialisation
of the case CI10 evolves to the same optimum as in the previous test (98.5%).
The solution stays blocked in a local optimum.
Table 7.10: Best values after 100 generations using 4 RP heuristics for the cases
RA = 100 and P A = 100.
The test has been made on a last case (CI21) composed of twice the elements
of the case CI20. In grouping 80 machines into 12 cells, the results are similar for the
four heuristics (67, 73, 93 and 100% of intra-cellular flow with heuristics 1, 2, 3
and 4). The fourth heuristic still finds the best solution at the initialisation.
7.2.4 Conclusion
Four heuristics for each problem, RP and CF, and two different process selections have
been tested and compared in this section. All these heuristics have been validated on
a specific part of the problem. The selected heuristics have different characteristics.
The best heuristic CF is based on a maximisation of the intra-cellular flow. The
best heuristic RP is based on the maximisation of the number of operations per part
assigned inside the same cell, by computing a priority cell for each process. The
process selection is based on the best heuristic RP and on the maximisation, for each
part, of the number of operations inside the priority cell.
In the next section, these chosen heuristics are analysed on the complete problem,
process selection (PS), resource planning problem (RP) and cell formation problem
(CF).
[Figure 7.4 - line chart: intra-cellular flow (30% to 100%) versus generations (0 to 15), one curve per RP heuristic (Heur1 to Heur4) for each of the cases CI5, CI10, CI15 and CI20.]
Figure 7.4: Evolution of the best intra-cellular flow for a similar initialisation with the random heuristic for cases with RA = 100 and PA = 100.
7.3. SIMULTANEOUS RESOLUTION 145
1. The number of generations is set to 100. The objective is to find the best
solution in a reduced time.
3. The mutation rate is set to 10: one chromosome is mutated when the best
chromosome is not changed after 10 generations.
4. The crossover rate is set to 80% for both the RP and the CF problem. In this
way, at each use of the operators, there is a probability of 20% to apply the
crossover only on the RP-part, 20% only on the CF-part and 60% on both
problems (as explained in section 6.2.2).
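The rule behind these percentages (a single uniform draw r in [0, 100]: r < 80 touches the RP-part, r > 20 touches the CF-part) can be sketched as:

```python
import random

def crossover_parts(rate=80):
    """Single-draw rule (sketch): r < rate applies the operator to the
    RP-part and r > 100 - rate applies it to the CF-part, so with
    rate = 80 each part alone is hit 20% of the time and both 60%."""
    r = random.uniform(0, 100)
    return ("RP" if r < rate else None, "CF" if r > 100 - rate else None)
```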
Table 7.11: Values of the intra-cellular flow for ideal case studies (Average for different
values of RA and P A) - Complete value in Tables E.1, E.2 and E.3.
Table 7.11 presents the average results of the Simogga applied to the 20 cases for
the different sets of alternativity RA and P A ((0 − 100), (100 − 0) and (100 − 100)).
The first column shows the best intra-cellular flow found at the initialisation of the
population. The next columns show the percentage of cases and the average number
of generations to reach 90%, 95%, 98% and 100% of intra-cellular flow. The last
column gives the best flow found after 100 generations. The average number of
generations is computed only with the cases reaching the studied level. The symbol
“-” for a case signifies that none of the 10 runs reached this level.
Table 7.12 summarizes the best average values for each case (averaged over the
four sets of alternativity). The first column contains the percentage
of cases reaching 100% of intra-cellular flow (%), the second one gives the number
of generations to reach this value (Gen) and the third one shows the maximum intra-cellular
flow found after 100 generations (Flow). The last column gives an idea of
the resolution time, in seconds. These values are given only as an indication: they
are not usable for comparison, because the algorithm stops when the complete grouping
is reached with 100% of intra-cellular flow, so the resolution time differs greatly
depending on whether the value of 100% is found or not. For instance, if a case
reaches 100% after 30 generations and another one does not reach 100% after 100
generations, the resolution time for the first case will be about three times less than
for the second case.
Average of RA, P A
% Gen Max Time
CI1 100 0 100 0.1
CI2 100 0.35 100 0.1
CI3 100 0.53 100 0.1
CI4 100 4.6 100 0.4
CI5 100 16.28 100 1.1
CI6 75 25.15 98.88 2.8
CI7 97.5 15.53 99.93 1.6
CI8 77.5 23.8 99.08 4.3
CI9 77.5 36.58 98.6 5.9
CI10 50 7.6 95.03 10
CI11 50 12 94.83 16
CI12 60 12.13 97.43 4.8
CI13 52.5 21.07 98.03 8
CI14 50 9.35 94.58 13.1
CI15 60 35.2 94.25 18.8
CI16 50 14.2 96.68 25.2
CI17 52.5 23.53 97.88 9.3
CI18 52.5 31.3 94.93 15.2
CI19 50 15.1 92.25 27.5
CI20 50 19.05 92.45 35.4
Aver. 70.25 18.98 97.23 10
Table 7.12: Comparative table with best values for a population of 32 chromosomes
after 100 generations (Average of four sets of alternativity). See complete Table E.4.
Different observations can be made looking at the resolution of these case studies
(see complete Tables E.1, E.2, E.3 and E.4 in appendix E):
Average of cases
% Gen Max
RA = 0, P A = 0 100 0.1 100
RA = 0, P A = 100 100 14.4 100
RA = 100, P A = 0 44.5 34.7 95.2
RA = 100, P A = 100 36.5 26.7 93.7
Aver. 70.25 18.98 97.23
Table 7.13: Comparative table with best values for a population of 32 chromosomes
after 100 generations (Average of cases). See complete Table E.4.
Table 7.13 summarizes the best average values for each set of alternativity (aver-
age of all the cases). From these results different conclusions can be made:
1. The algorithm Simogga finds the optimal solution in 70.25% of the ideal case
studies.
2. The Simogga easily solves all cases without alternativity or with alternative
processes only (0 − 100). These cases correspond to a search space of at most
10^46 solutions.
3. The optimal solution is found for the first five cases with 100% of alternative
routings. The size of the search space for these cases is at most 10^43 solutions.
4. Between 10^43 and 10^60 solutions, the optimal solution is found in 50%
of the cases.
7.4 Adaptation
In section 6.6, different improvements, including a local optimisation, have been explained.
These modifications have been made in response to the observations of section 7.3.
The algorithm with the local optimisation is run for all cases (CI1 to CI20) with the
four sets of alternativity RA and PA ((0 − 0), (0 − 100), (100 − 0) and (100 − 100)).
Table 7.14 presents, for each set of alternativity, the average values
over the 20 case studies for different parameters: the percentage of cases and the
number of generations needed to reach 90%, 95%, 98% and 100% of intra-cellular
flow. The last row gives the average values of the best intra-cellular flow found
after 100 generations. For each set of alternativity, the first column gives the results
without the local optimisation and the second column those with the local optimisation.
The percentage of cases is evaluated on 200 tests (10 runs for each of the 20 cases).
The average numbers of generations include only the cases that reached the level.
Table 7.14: Best evolution values without and with the local optimisation.
Table 7.15 presents the final average results for the algorithm without and with
the local optimisation after 100 generations for each of the 20 cases (averaged over the
four sets of alternativity). The values shown are the percentage of cases reaching
100% of intra-cellular flow, the best generation number to achieve this level, and the
best flow found. The last row gives the average over the 20 cases, while the three last
columns present the average for each case over the four sets of alternativity.
7.4. ADAPTATION 149
Table 7.15: Comparative table with best values for the algorithm without and with
the local optimisation after 100 generations and 32 chromosomes (average of four sets
of alternativity). See complete Table E.5.
• On the cases (0 − 100), the best flow is always found, with or without the local
optimisation. However, the number of generations to find this best flow is
slightly reduced. The best improvement is observed on the case CI20, with a
reduction of 8 generations out of 36 to find the optimal solution.
• A great improvement is noticed on the cases (100 − 0). All cases are improved
in terms of the number of generations and/or the best flow found. Thanks to
the local optimisation, the optimal solution is found for 87% of the cases, while
99% of the cases reach 95% of intra-cellular flow. This improvement represents
an increase of 37%. The evolution towards the best flow is quicker with the
local optimisation than without. On average, the best flow rises from 95.2%
to 99.6% with the local optimisation. The maximum average increase of total
intra-cellular flow is seen in CI10 with 11.4% more and CI20 with 11.2% more.
• As explained previously, the average time is given only for information, because
the algorithm stops when 100% of intra-cellular flow is reached. It shows that
the algorithm with the local optimisation finds the best solution in a lower number
of generations but takes more time.
• Taking the 20 cases for the four sets of alternativity (a total of 800 tests)
into account, the percentage of cases reaching 100% of grouping rises from
70.25% to 86.88% with the local optimisation.
2. The mutation rate is set to 10. One chromosome is mutated when the best
chromosome is not changed after 10 generations.
3. The crossover rate is set to 80%. If a random number (∈ [0, 100]) is smaller
than 80, the operator is applied on the RP-part, and if this number is greater
than 20, the operator is applied on the CF-part. Thus, the operators are
applied with a probability of 20% only on the RP-part, 20% only on the CF-part
and 60% on both problems.
Table 7.16: Average best values for the algorithm with the local optimisation after
1000 generations (32 and 64 chromosomes) for four sets of alternativity. See complete
Tables E.6 and E.7
(RA − P A) Pop. size=32 Pop. size=64
% Gen Max % Gen Max
CI1 100 4.33 100 100 2.6 100
CI2 100 0.08 100 100 0 100
CI3 100 0.3 100 100 0.05 100
CI4 100 2.38 100 100 0.88 100
CI5 100 3.33 100 100 2.78 100
CI6 100 14.1 100 100 7 100
CI7 100 8.33 100 100 3.5 100
CI8 100 17.13 100 100 6.43 100
CI9 100 11.7 100 100 9.8 100
CI10 80 70.68 99.73 95 73.63 99.95
CI11 87.5 81.25 99.73 87.5 69.13 99.75
CI12 97.5 37.88 99.95 97.5 11.2 99.95
CI13 97.5 37.5 99.98 100 19.5 100
CI14 95 93 99.78 95 27.75 99.73
CI15 90 92.33 99.3 100 51.98 100
CI16 92.5 98.75 99.93 92.5 51.68 99.9
CI17 87.5 38.15 99.73 95 24.53 99.93
CI18 95 65.58 99.95 100 54.33 100
CI19 95 148 99.83 97.5 71.03 99.9
CI20 95 155.93 99.9 90 83.6 99.6
Average 95.63 49.05 99.9 97.5 28.58 99.95
Table 7.17: Average best values for the algorithm with the local optimisation after
1000 generations (32 and 64 chromosomes) for each of the 20 cases. See complete
Tables E.6 and E.7
For a population of 32 chromosomes, the average best flow is greater than 98.65%
for all cases except one: the average best flow for the case CI15 with RA = 100 and
P A = 100 is equal to 97.2%.
For a population of 64 chromosomes, the results are similar in terms of the
average best flow found after 1000 generations. In terms of the percentage of cases
reaching 100% of intra-cellular flow, this quantity rises from 95.63% to 97.5% with
a population of 64 chromosomes. Moreover, the minimum flow found after 1000
generations is 98.7%. This result is observed for the most complex case (CI20) with
complete alternativity (100 − 100).
1× Population size). Table 7.18 presents the results of the tests made on four cases
(CI5, CI10, CI15 and CI20) with four variants of the alternativity parameters RA
and P A ((0 − 0), (0 − 100), (100 − 0) and (100 − 100)). The algorithm is tested with
a population size of 32 chromosomes. The number of generations is fixed to 100. By
comparing the best flow found after 100 generations, this table makes it possible to
compare the speed of the evolution with and without crossover. Without the crossover,
the search corresponds to a random search carried out by mutation only.
Init With crossover Without crossover
flow % Gen Max Time % Gen Max Time
RA = 0 and P A = 0
CI5 100 100 0 100 0.18 100 0 100 0.1
CI10 100 100 0 100 0.27 100 0 100 0.3
CI15 100 100 0.3 100 0.36 100 0 100 0.4
CI20 88 100 1.8 100 0.91 100 0.9 100 1.2
RA = 0 and P A = 100
CI5 79 100 6.5 100 1.27 100 23.4 100 5.3
CI10 64 100 15.2 100 10.82 0 - 97.8 100.2
CI15 64 100 23.5 100 17 0 - 95.4 132.9
CI20 52 100 36.3 100 55.36 0 - 88.3 320.8
RA = 100 and P A = 0
CI5 79 100 24.2 100 0.27 100 8.8 100 1
CI10 66 50 38.8 99.1 14.91 0 - 93.7 33.3
CI15 69 100 81.75 100 13.36 0 - 95 51.8
CI20 59 40 60.3 97.6 43.55 0 - 94.2 76.9
RA = 100 and P A = 100
CI5 83 100 34.4 100 1.18 50 16 99.2 10.3
CI10 78 30 37 97.4 35.36 0 - 93.3 73
CI15 70 20 80 91.8 77.55 0 - 86.6 146
CI20 66 0 - 91.8 125.91 0 - 85.6 239.2
Table 7.18: Best flows when the Simogga is run with and without crossover.
This table shows that the simple cases without alternativity are solved as
efficiently without crossover as with it. On the other hand, as soon as the case has
even a little alternativity, the optimal solution is no longer found, in particular for the
cases (0 − 100). The best solution without crossover is always worse for all cases
with 100% of alternative routings. Moreover, the resolution time is especially long
without crossover. The results are not catastrophic only because the local
optimisation is applied before each reconstruction of the chromosome (during the mutation).
remind the best results found with a population of 32 chromosomes after 100
generations. Each time the population size is multiplied by 10, the resolution time
is also multiplied by 10. Analysing the best flows found for each population size, a
steady increase of the values can be observed: the greater the population size, the better
the intra-cellular flow. A population size of 10000 chromosomes is enough to solve the
small cases. The best flows found are, however, always largely worse than the best flow
found by the Simogga. To find a better solution for the last case (CI20), containing
10 × 10^140 solutions, the population size should be increased considerably. The
random search is completely inefficient.
Table 7.19: Best flows when the Simogga is run with a large population and without
generations.
3. The mutation rate is not modified relative to the previous tests: one chromosome
is mutated after 10 generations without a new best solution.
2. The population size is set to 16. This size limits the resolution time.
3. The operator rates are set to 0% for the RP and 100% for the CF. The Simogga
focuses on the CF problem, like a CF-Mogga solving the CF problem only.
In the CF-Mogga, the RP-part is constructed with the Copy Heuristic RP. The
RP-solution is provided by the main Simogga.
1. The number of generations is set to 30: too great a number strongly increases
the resolution time.
2. The population size is set to 16. This size limits the resolution time.
7.5. EFFICIENCY OF THE ALGORITHM 155
3. The operator rates are set to 100% for the RP and 0% for the CF problem.
The Simogga focuses on the RP problem, like an RP-Mogga solving the RP
problem only.
In the RP-Mogga, the CF-part is constructed with the Copy Heuristic CF. The
CF-solution is provided by the main Simogga.
7.5.4.3 Results
Table 7.20 and Table 7.21 present comparative results for the Simogga, the
RP-Mogga with an integrated CF module and the CF-Mogga with an integrated
RP module (an integrated heuristic and another integrated Mogga). The results
are organised as follows: Table 7.20 gives the average values for the three sets of
alternativity RA and P A ((0 − 100), (100 − 0) and (100 − 100)), while Table 7.21
gives the average values for each of the cases (CI5, CI10, CI15 and CI20).
Table 7.20: Average values (on 4 cases) found with the Simogga and the Mogga
with an integrated module (RP and CF). See complete Table E.8
For each case, the results contain the average percentage of cases reaching 100%
of intra-cellular flow, the average best intra-cellular flow after 100 generations, as well
as the resolution time. These values are given for the proposed algorithm (Simogga)
in the first row. The second row gives the values for the Mogga with an integrated
RP heuristic, while the third gives those for the Mogga with another integrated
Mogga (RP-Mogga). The fourth and fifth rows present the results for the
integrated CF module (heuristic and CF-Mogga). These tables make it possible to
compare the proposed method with the successive resolutions.
The successive resolutions should never be more efficient than the Simogga,
because there is much randomness in the successive resolution: the main Mogga
is always based on the Random Heuristic RP or CF, and the crossover and the mutation
are applied on this random part. To find the best solution, the main Mogga with
another integrated Mogga must be run for a high number of generations. In this way,
the main grouping can approach the best grouping and the integrated Mogga will
find the best complementary solution.
1. The average best flow found by the first four methods is close for each case
(above 95%). The Mogga with the integrated CF-Mogga is the only method
that always gives worse results than the other tested methods. For the case
CI15, the optimal solution is never found. The reasons for this difference lie
in the size of the search space and the efficiency of the heuristics.
156 CHAPTER 7. VALIDATION
Table 7.21: Average values (on RA and P A) found with the Simogga and the
Mogga with an integrated module (RP and CF). See complete Table E.8
The search space of the RP problem is greater than that of the cell formation
problem. The main Mogga does a random search, so the search is faster when
the search space is reduced (the CF problem). Moreover, the specific Process
Heuristic RP includes the process selection, so the best solution is quickly
found. With the integrated heuristic CF, the main Mogga uses the Random
Heuristic RP with the random process selection; the search for the best
solution is therefore longer than with the integrated heuristic RP.
2. The CF-Mogga with an integrated RP-Mogga gives very good results, better
than the Simogga, but the resolution time is always much longer. An
adaptation of the Simogga should make it possible to improve its results without
significantly increasing the resolution time.
3. The results of the Mogga with an integrated heuristic are always better than
those of the Simogga. With an integrated heuristic, the main problem is still
solved by a random search in the main Mogga, but the heuristics are efficient
enough to find a good, almost optimal, solution. The more complex the studied
case, the worse the solution; with complete alternativity, the optimal solution
is almost never found.
Contrary to the Mogga with another integrated Mogga, the resolution time
of the Mogga with an integrated heuristic is lower than that of the Simogga. Thus,
the number of generations can be increased to 200 and the population
size to 32 chromosomes. Increasing these parameters gives the results of
Table 7.22 (average of four cases and three sets of alternativity). These results
stay below 70% of cases reaching the optimal solution, for an increased
resolution time. However, these results are slightly different in terms of the best
flow found.
To conclude, the Simogga with its current parameters (32 chromosomes, 100
7.6. ADAPTATION OF PARAMETERS 157
Table 7.22: Average values found with the Mogga with an integrated heuristic (RP
and CF).
Table 7.23: Values (Average, minimum and maximum) for the percentage of cases
reaching 100% and the best generation for different generations and two population
sizes (32 and 64).
generations, mutation rate (10), operator rates (80%) and 45%-45%-10% for the
initialisation, see the beginning of section 7.5) is not efficient enough to beat the
performance of the Mogga with an integrated heuristic. The CF-Mogga with an
integrated RP-Mogga has shown its efficiency, but the resolution time is excessive.
Thus the idea of nesting two Moggas is sound but not usable in practice.
• With 100 generations, the optimal solution is never found for some cases. On
the contrary, from 500 generations, a minimum of 3 runs out of 10 gives the
optimal solution for each case. With a population size of 64 chromosomes, this
ratio is already obtained for 100 generations. Moreover, this ratio rises to 50%
(5 runs out of 10) for 150 generations.

Generations
log10(Size) # cases 100 150 200 300 500 750 1000
Pop. size=32, percentage of cases reaching 100%
0-40 53 100 100 100 100 100 100 100
40-80 15 95.4 96.3 96.8 97.8 98.4 98.4 98.4
80-120 10 88.2 90.8 92.6 94.5 95.8 95.8 96.2
120-160 3 85.2 88 89.9 92.2 94.3 95.2 95.7
Pop. size=32, average intra-cellular flow
0-40 53 100 100 100 100 100 100 100
40-80 15 99.4 99.7 99.8 99.8 99.9 99.9 99.9
80-120 10 96.9 97.8 98.2 98.7 99 99.2 99.4
120-160 3 92.8 94.4 95.7 97.6 98.5 99.7 99.7
Pop. size=64, percentage of cases reaching 100%
0-40 53 100 100 100 100 100 100 100
40-80 15 97.9 98.4 98.5 98.5 98.5 98.8 99.1
80-120 10 94.4 96.3 96.8 97.4 97.7 97.9 98.3
120-160 3 92.2 94.6 95.3 95.9 96.7 97.2 97.5
Pop. size=64, average intra-cellular flow
0-40 53 100 100 100 100 100 100 100
40-80 15 99.8 99.9 99.9 99.9 99.9 99.9 99.9
80-120 10 98.7 99.3 99.5 99.7 99.8 99.8 99.8
120-160 3 95.5 96.9 97.5 98.2 99.2 99.3 99.3
Table 7.24: Percentage of the number of cases reaching 100% and the average best
flow for different numbers of generations and a population size of 32 and 64 chromosomes.
There is a correlation between the number of generations and the size of the
search space. Table 7.24 presents the percentage of cases reaching 100% and the
average flow found after different maximum numbers of generations. The logarithm
(log10) is used to group the cases according to their size. Four categories are defined
by five levels: 0, 10 × 10^40, 10 × 10^80, 10 × 10^120 and 10 × 10^160. The first
column gives the number of cases belonging to each category.
In Table 7.24, the first percentage of cases reaching 100% that exceeds 95% is
highlighted, and the same is done for the first average flow above 99%. The values
show that the greater the size of the search space, the greater the number of
generations necessary to obtain a good result. With a population size of 64, compared
with 32, all the highlighted values move to the left: a smaller number of
generations is necessary to reach the same results.
These tables are given for information only. Indeed, no choice can be made
from these results, because the number of generations must remain editable by the user
in order to run different tests.
To guide the user in this choice, Figure 7.5 shows, for the four categories of size,
the evolution of the average best intra-cellular flow from 100 to 1000 generations.
The percentage of cases reaching 100% of intra-cellular flow is represented in the
same graphic (curves % and Flow for the size categories 0-40, 40-80, 80-120 and
120-160).
Figure 7.5: Intra-cellular flow and percentage of solutions reaching 100% as a
function of the number of generations and the search space size, for a population size
of 64 chromosomes.
The evolution of the graphic is very similar for a population size of 32 chromosomes.
Since a very good solution is found within at most 100 generations for an ideal
case study, a reasonable assumption is that, for a non-ideal case, the solution will
also be found within about 100 generations. If the best solution is still evolving near
the 100th generation, the algorithm can be run with a greater number of generations.
Table 7.25: Values of the percentage of cases and the generation for the cases reaching
100% of intra-cellular flow and the best flow found after 100 generations (Average of
10 runs for 80 cases).
10^60, 10 × 10^100 and 10 × 10^160. The values are organised by category of size. The
first column gives the number of concerned cases (#). The second defines the
percentage of cases reaching 100% of intra-cellular flow (%); this average is
evaluated over the 10 runs of each case. The next column is a new one giving the
percentage of cases for which the optimal solution is never found in 10 runs (%(0));
a zero in this column means that the case is solved in at least one run. The
next columns are the classical values: the average number of generations to find the
optimal solution (Gen), the average best flow found after 100 generations (Flow) and
the resolution time (Time).
This table makes it possible to determine the ideal population size as a function
of the size of the case. If the case has fewer than 10 × 10^20 solutions, the Simogga with
a population size of 16 chromosomes finds the optimal solution in all cases. Between
10 × 10^20 and 10 × 10^60, 99.1% of the cases are solved to optimality
with a population size of 32 chromosomes. For all cases above 10 × 10^60 solutions,
the optimal solution is found for at least one of the 10 runs with 64 chromosomes.
However, the average best flow of the cases with a size greater than 10 × 10^100 solutions
goes from 97.1% to 99% with a population size of 128 chromosomes compared to 32
chromosomes.
On the basis of these results, the population size is henceforth fixed to 16, 32 or 64
chromosomes as a function of the size of the search space (respectively for the intervals
[0, 10 × 10^20], [10 × 10^20, 10 × 10^60] and [10 × 10^60, ...]).
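The sizing rule above can be written as a small helper. The interval boundaries follow the text, while the handling of exact boundary values is an assumption:

```python
def population_size(log10_search_space_size):
    """Population size as a function of the search-space size, using the
    intervals from the text: up to 10 x 10^20 solutions -> 16 chromosomes,
    up to 10 x 10^60 -> 32, larger -> 64 (boundary handling assumed)."""
    if log10_search_space_size <= 21:   # log10(10 x 10^20) = 21
        return 16
    if log10_search_space_size <= 61:   # log10(10 x 10^60) = 61
        return 32
    return 64
```

For instance, a case with about 10^47 solutions would be run with 32 chromosomes under this rule.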
The two following figures (7.6 and 7.7) are given for the reader's information.
They represent the evolution of the best flow as a function of the size of the search
space, for population sizes of 16, 32, 64 and 128 chromosomes. The first figure also
gives the resolution time: the values on the secondary X axis give the maximum
resolution time found for each population size, while the columns give the proportion
of the resolution time relative to this maximum value. The columns of the second
figure give the proportion of cases whose optimal solution is found. These figures can
guide the user in his choices if he wants to change the population size.
Figure 7.6: Intra-cellular flow and average resolution time to find the best solution
as a function of the search space size and the population size.
During the validation stage, 45% of the chromosomes were constructed in the order
RP (randomly) followed by CF (Flow Heuristic CF). Conversely, 45% of the
chromosomes were constructed first by the CF-part (randomly), and then by the RP-part
(Process Heuristic RP). The last 10% of the chromosomes were constructed completely
at random.
Table 7.26 gives the average over all the cases and all the sets of alternativity for
different initialisation rates. The rate used for the validation stage was not the
best one. Indeed, when the population is initialised with 50% RP => CF and 50%
CF => RP , both the best flow found after 100 generations and the percentage of
cases reaching the optimal solution are the best.
The rate (50 − 0 − 50) will therefore be used in the next sections. This rate implies
that the search is strongly oriented towards minimising the flow from the start. However,
the mutation reinserts some random solutions into the population.
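The retained initialisation split can be sketched as a sampling routine. The heuristic names come from the text, while the function itself and its return labels are illustrative assumptions:

```python
import random

def construction_order(rates=(50, 0, 50), rng=random):
    """Choose how a new chromosome is initialised.  `rates` are the
    percentages for (RP-part first, fully random, CF-part first);
    (45, 10, 45) was used during validation, (50, 0, 50) afterwards."""
    rp_first, fully_random, _cf_first = rates
    r = rng.uniform(0.0, 100.0)
    if r < rp_first:
        return "RP random, then Flow Heuristic CF"
    if r < rp_first + fully_random:
        return "fully random"
    return "CF random, then Process Heuristic RP"
```

With (50, 0, 50), the fully random branch is never taken; random material still enters the population later, through the mutation.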
Figure 7.7: Intra-cellular flow and percentage of solutions reaching 100% as a
function of the search space size and the population size (population sizes 16, 32,
64 and 128).
Table 7.26: Values of the percentage of cases and the generation for the cases reaching
100% of intra-cellular flow and the best flow found after 100 generations for different
values of initialisation rate.
Table 7.27: Best values found after 100 generations for different values of operator
rates (Continued).
To verify this result, this rate is applied to the 80 cases (20 cases with four sets
of alternativity). The results are reported in Table 7.29; the averages over the four
sets of alternativity are presented for two operator rates, 80% and 100%. The
results improve further: the number of cases reaching 100% rises to 94.2%, while the
average best flow found after 100 generations rises to 99.87%.
Table 7.30 shows that for the first two sets of alternativity, there is no change.
For the cases with alternative routings only, the results are slightly worse. But for the
cases with complete alternativity, the percentage of solved cases rises from 76.5%
to 84% with the operator rate equal to 100%.
Looking at the evolution of the solutions in Table 7.31, the number of cases
reaching 90%, 95%, 98% and 100% always improves, by between 0.13% and 1.61%.
The number of generations needed to reach these levels is very similar for the two
operator rates. The resolution time is reduced; this reduction is explained by the
increased number of cases reaching 100% of intra-cellular flow. Indeed, the algorithm
finishes as soon as this level is reached, and the resolution time therefore decreases.
7.6.5 Mutation
Over 1000 generations, the average number of generations without evolution is equal
to 21.9 for 32 chromosomes and 14.5 for 64 chromosomes, with a maximum of 188
(average of 10 runs). It is interesting to decrease this number: the objective is
an algorithm that improves the quality of the population regularly, without
a high number of generations between two successive best solutions. The
mutation parameters will be adapted to reach this objective.
Table 7.28: Best values found after 100 generations for different values of operator
rates.
Table 7.29: Comparative table with average values based on the four sets of alterna-
tivity for the algorithm after 100 generations for two operator rates (80 and 100).
Table 7.30: Average values by set of alternativity for the algorithm after 100 gener-
ations for two operator rates (80 and 100).
% Gen
80 100 80 100
90 99.75 99.88 7 6.45
95 98.52 99.51 9.78 9.55
98 96.91 97.53 12.72 12.23
100 92.96 94.57 15.79 15.62
Flow Time
Best 99.78 99.87 22.29 19.53
Table 7.31: Evolution parameters (average values based on the four sets of alterna-
tivity) for the algorithm after 100 generations for two operator rates (80 and 100).
1. The mutation frequency is set to 10: when the best solution in the population
has not evolved for 10 generations, a mutation of this solution is performed.
2. The best mutation frequency is set to 20: when the best solution ever found
has not evolved for 20 generations, a mutation of this best solution is made.
3. The mutation rate is set to 1: when a mutation is applied, only one chromosome
is mutated.
4. The mutation intensity is set to 20%: 20% of the groups existing in the best
solution are removed, and the unassigned objects are reinserted.
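Parameters 1 to 4 can be combined into a small scheduling sketch (illustrative; the names and the group-removal details, such as how the freed objects are reinserted, are assumptions):

```python
import random

def mutation_due(stagnant_gens, best_stagnant_gens,
                 mutation_freq=10, best_mutation_freq=20):
    """Return which mutations fire: the population mutation after
    `mutation_freq` generations without improvement, and the mutation
    of the best-ever solution after `best_mutation_freq` generations."""
    return (stagnant_gens >= mutation_freq,
            best_stagnant_gens >= best_mutation_freq)

def remove_groups(groups, intensity=0.2, rng=random):
    """Mutation intensity: remove `intensity` (20%) of the groups of a
    solution; the unassigned objects would then be reinserted by the
    reconstruction heuristics (reinsertion is outside this sketch)."""
    n_remove = max(1, int(len(groups) * intensity))
    kept = list(groups)
    for _ in range(n_remove):
        kept.pop(rng.randrange(len(kept)))
    return kept
```

With a mutation rate of 1, a firing mutation touches a single chromosome; the intensity then decides how much of that chromosome is torn down and rebuilt.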
To validate the choice of the mutation parameters, different values are tested: 5
or 10 for the mutation frequency, and the mutation rate multiplied by 1 or 2. Table
7.32 gives the final mutation rate for these sets of values; these values must lie in
the interval [0.001, 0.05].
Mutation rate
Prob. Mut ×1 ×2
Mut freq.=5 0.0125 0.025
Mut freq.=10 0.00625 0.0125
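One reading consistent with the four values in this table is a per-chromosome, per-generation probability of mut_rate / (mut_freq × 16); both the formula and the divisor of 16 are an inference from the numbers, not stated in the text:

```python
def effective_mutation_rate(mut_rate, mut_freq, divisor=16):
    """Reproduce the table values, e.g. rate 1 at frequency 5 gives
    1 / (5 * 16) = 0.0125.  The divisor (plausibly the population size)
    is an inference; only the resulting numbers come from the text."""
    return mut_rate / (mut_freq * divisor)
```

All four table entries, and the fact that doubling the rate or halving the frequency doubles the result, follow from this reading.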
The results found for the 20 cases and the sets of alternativity with RA = 100
are given in Table 7.33. The results are shown for two values of the mutation intensity
(0.2 and 0.4), i.e. 20% or 40% of the groups are removed (with a minimum of 2 groups).
The results for RA = 0, with alternative processes, are similar whatever the chosen
mutation parameters: the solutions are found for 100% of the cases, in less than 30
generations, so the mutation is almost never applied for these cases.
Mutation parameters
Mut. int. 0.2 0.4
Mut. rate 1 2 1 2
Mut. freq. 5 10 5 10 5 10 5 10
Percentage of cases reaching 100%
PA = 0 95.5 93.5 94.5 96.5 92.5 93 95.5 93.5
P A = 100 77 76.5 72.5 78.5 73 77.5 79 77.5
Aver. 86.25 85 83.5 87.5 82.75 85.25 87.25 85.5
Average flow after 100 generations
PA = 0 99.9 99.9 99.9 99.9 99.9 99.9 99.9 99.9
P A = 100 99.3 99.1 99.1 99.2 99.1 99.4 99.4 99.4
Aver. 99.62 99.49 99.49 99.58 99.49 99.65 99.65 99.65
Table 7.33: Best average values by set of alternativity for different mutation param-
eters (RA = 100).
The maximum best flow and the maximum percentage of cases reaching 100% of
intra-cellular flow are not obtained for the same choice of parameters, and all sets
of parameters give very similar values. On average, the set of parameters giving the
best percentage is (0.2, 2, 10) with 87.5%, followed by (0.4, 2, 5) with 87.25%. For the
best flow, the sets of parameters (0.4, 1, 10), (0.4, 2, 5) and (0.4, 2, 10) give the same
result (99.65%). Among these selected sets of parameters, two are good (though not
ideal) for both criteria: (0.4, 2, 5) and (0.2, 2, 10). Depending on the studied case,
one or the other set will be better. As the sensitivity to these parameters is low,
the mutation parameters are fixed on the basis of the second set (0.2, 2, 10): the
mutation intensity is set to 0.2 and the mutation rate to 2 for the following tests.
The values of Table 7.34 show the results for large mutation frequencies (20, 40
and 100). For a number of generations equal to 100, a mutation frequency equal to
100 is equivalent to a Simogga without mutation applied to the best solutions. The
results are unexpected: the results without mutation are the best ones. The optimal
solution is found for 90.6% of the cases with RA = 100, and the average best flow
found after 100 generations is 99.79%. These results degrade when the mutation
frequency decreases.

Mutation frequency
RA = 100 100 40 20 10
% of found optimal solutions
P A = 0 95.5 96.2 95.2 93.8
P A = 100 85.8 81.6 84.2 75.3
Aver. 90.6 88.9 89.7 84.5
Average best flow
P A = 0 99.93 99.92 99.91 99.88
P A = 100 99.65 99.42 99.63 99.05
Aver. 99.79 99.67 99.77 99.46
# generations without evolution
P A = 0 6 5.6 5.8 4.4
P A = 100 8.7 10.5 8.4 9.6
Aver. 7.3 8 7.1 7
Table 7.34: Best average values for large mutation frequencies with Mut. Rate = 1
and Mut. Int. = 0.2 (RA = 100).
The average number of generations without a new best solution is equal to 7.34,
and the average values over all tests lie in the interval [7.08, 8.03]. Without
mutation, the average number of generations without evolution of the best solution is
well inside this interval. Moreover, the maximum number of generations
without a new best solution is equal to 53 without mutation; this value is at the lower
limit of all tests in this section (whose results lie in the interval [50, 86]).
It is important to note that the mutation is applied to modify the similar
chromosomes with a probability of 90%. The problem of the local optimum is not
encountered, because these modified chromosomes make it possible to explore other parts
of the search space. The mutation of the best solution is thus the element that can
be dropped.
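The diversity mechanism described here (similar chromosomes mutated with probability 90%) can be sketched as follows; treating "similar" as exact duplication, and the function name, are simplifying assumptions:

```python
import random

def mutate_similar(population, mutate, prob=0.9, rng=random):
    """Walk the population and, for each chromosome that duplicates an
    already-seen one, apply the mutation with probability `prob`, so
    that near-copies go and explore other parts of the search space."""
    seen = set()
    result = []
    for chrom in population:
        key = tuple(chrom)
        if key in seen and rng.random() < prob:
            chrom = mutate(chrom)
        seen.add(key)
        result.append(chrom)
    return result
```

Because this runs on duplicates rather than on the best solution, it preserves diversity without the best-solution mutation that the tests showed to be unnecessary.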
1. The number of generations is fixed at 100. However, the user can easily change
this number as a function of Figure 7.5.
2. Three categories of problems are defined as a function of the size of the search
space (0 to 10 × 10^20, 10 × 10^20 to 10 × 10^60, and greater). The population
size is respectively set to 16, 32 and 64 chromosomes.
5. No mutation is applied to the best solutions, but the mutation modifies the
similar chromosomes with a probability of 90%.
Table 7.35: Comparative table with best values for the final algorithm after all
adaptations, after 100 generations, for four sets of alternativity.
6. The Process Heuristic RP is used to reconstruct the RP-part when the CF-part
is already constructed, while the Flow Heuristic CF is used to reconstruct the
CF-part when the RP-part is already constructed.
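The points above can be collected into a single configuration summary (a plain restatement of the text; the key names are illustrative, and the initialisation and operator-rate entries are filled in from the conclusions of sections 7.6.3 and 7.6.4):

```python
# Final SIMOGGA parameter set, restated from the adaptation sections.
FINAL_PARAMETERS = {
    "generations": 100,                    # point 1 (user-adjustable)
    "population_size": {21: 16, 61: 32, "larger": 64},  # point 2, by log10(size)
    "initialisation": (50, 0, 50),         # section 7.6.3: RP-first / random / CF-first
    "operator_rates": {"RP": 100, "CF": 100},  # section 7.6.4 conclusion
    "mutate_best_solutions": False,        # point 5
    "similar_mutation_probability": 0.9,   # point 5
    "reconstruction": {"RP": "Process Heuristic RP",  # point 6
                       "CF": "Flow Heuristic CF"},
}
```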
Tables 7.35 and 7.36 summarise Table E.9 in appendix E, with all values for the
80 cases used in this chapter to validate the method. The first table gives the average
(over RA and P A) best values obtained by the final Simogga for each case (CI1 to
CI20).
The optimal solution is found for 100% of the cases with RA = 0, even without the
analysis of the parameters and without the local optimisation. The modifications of
the Simogga imply an improvement of the results for the cases with RA = 100. For
the cases with complete alternativity, the optimal solution is found for a minimum of
30% of the cases (3 out of 10). Moreover, the majority of cases are solved with a
flow greater than 98%, except for 7.8% (25/320) of the cases, whose flow lies between
92% and 98%.
To facilitate the comparisons, Table 7.36 gives the average (over the 20 cases) best
values for the final Simogga, for the final Simogga with the specific Process Heuristic
RP for the initialisation of the RP-part, and for the Simogga before the adaptation of
the parameters, after 1000 generations with a population size of 32 and 64 chromosomes.
The values are presented for each set of alternativity. These results approach the
results of the Simogga after 1000 generations (see Tables E.6 and E.7).
The final results of the previous table were obtained with a particular set of parameters.
In section 7.6.3, the initialisation was defined in two parts: 50% of the population
was constructed by the Random Heuristic CF followed by the Process Heuristic RP,
while the remaining 50% was constructed by the Random Heuristic RP followed by the
Flow Heuristic CF.
Init. Rand. Heur. Init. Specific Heur.
% Gen Max % Gen Max
RA = 0, P A = 0 100 0.4 100 100 0.4 100
RA = 0, P A = 100 100 15.8 100 100 16.3 100
RA = 100, P A = 0 90 18.9 99.9 94.5 19.6 99.9
RA = 100, P A = 100 86 29.8 99.7 76.5 28.2 99.3
Aver. 94 16.23 99.9 92.75 16.13 99.8
Aver. Time 17.2 19
1000 Gen, Pop. Size=32 1000 Gen, Pop. Size=64
RA = 0, P A = 0 100 0.5 100 100 0.4 100
RA = 0, P A = 100 100 13.8 100 100 11.5 100
RA = 100, P A = 0 98 42.3 100 97.5 44.2 100
RA = 100, P A = 100 84.5 139.6 99.6 92.5 58.2 99.8
Aver. 95.63 49.05 99.9 97.5 28.58 99.95
Aver. Time 54.78 66.63
Table 7.36: Comparative table with best values for the final algorithm after all
adaptations, after 100 generations, for four sets of alternativity.
The characteristic of these results is the use of the random heuristic to construct,
for 50% of the population, the CF-part followed by the RP-part. The final
percentage was equal to 92.5% with the specific Process Heuristic RP used during the
initialisation phase, while 94% of the cases reach the optimal solution with the random
initialisation of the RP-part. The major difference occurs for the case RA = 100 and
P A = 100, with an increase of 10% in the cases reaching 100% of intra-cellular flow.
Table 7.36 presents the solutions with the random initialisation for the RP-part.
Table 7.37 gives the best solutions for the 20 cases (CI1 to CI20) with RA = 100
for the Simogga before and after the local optimisation, and after the adjustment
of all parameters. The average best flow found after 100 generations goes from 94.5%
without local optimisation up to 99.8% after the adaptations. The largest difference
lies in the number of cases reaching the optimal solution: 40.5% at the start,
73.8% with the optimisations and 88% with the adaptation of all parameters.
Now that all the parameters are correctly set, the Simogga is very efficient.
Tables 7.38 and 7.39 give the average performances of the Simogga compared to
the Mogga with an integrated module (heuristic or Mogga). Table E.9 with all
the values is given in appendix E. The results given for the Mogga with another
integrated Mogga are those of Table E.8 in appendix E. The results for the
Mogga with an integrated heuristic are obtained with a population of 32 chromosomes
and 200 generations, to approach the parameters of the Simogga.
Table 7.38 gives the average (over alternativity) values for the four cases, while
Table 7.39 gives the average (over the four cases) values for each set of alternativity.
These tables show that the Simogga is always more efficient than the successive
resolution. This observation holds for all sizes of case studied: the cases presented
here are a good representation of the different sizes of problems that can be met,
with or without alternative routings and processes. The resolution time is
the shortest whatever the treated case, and the Simogga also has the best percentage
of cases reaching the optimal solution. The cases with full alternativity (RA = 100
and P A = 100) show the largest differences: the optimal value is found for 85%
of the cases with the Simogga, compared to at most 40% for the other methods. This
efficiency is observed for the best intra-cellular flow found after 100 generations while
Table 7.37: Comparative table with average values (RA = 100) before and after local
optimisation and after adaptation of parameters after 100 generations.
7.7 Conclusion
The proposed algorithm, the Simogga, has been validated in this chapter. First,
the best CF heuristic, RP heuristic and process selection were chosen by
solving each of the three problems individually (cell formation, resource planning
and process selection).
The heuristic solving the cell formation problem is based on maximising the
intra-cellular flow. The heuristic used for the resource planning problem is based on
minimising the number of different cells visited by each part. The process selection is
included in the initialisation of this heuristic.
All the choices made on the three problems separately remain efficient
when the three problems are solved simultaneously. Because the results were not
optimal, different modifications have been proposed. In particular, a local
optimisation applied before the reconstruction phases of the genetic operators significantly
improves the solutions. This optimisation is based on the computation of a priority
Table 7.38: Average (on RA and P A) best values found with the final Simogga and
the Mogga with an integrated module (RP and CF). See complete Table E.10
Table 7.39: Best values found with the final Simogga and the Mogga with an
integrated module (RP and CF). See complete Table E.10
• a random search during 100 generations without crossover, to show the efficiency
of the crossover;
The results of these different tests were always inferior to the best solution found
by the Simogga. By strongly increasing the resolution time, and for medium-sized cases,
these three methods can approach the optimal solution without reaching it. The
optimal solution is never found when the size of the search space increases.
Finally, all the parameters were fixed after testing different values. A final list of
the best parameters is given. The population size is based on the size of the search
space. The mutation is not applied to the best solution but only to modify solutions
that are too similar. The most important parameter is the operator rate, defining how
often the operators are applied to each part of the chromosome representing each
problem. Initially, this rate was set to 80% to let the solutions sometimes evolve
separately on each problem. The analysis in this chapter concludes that this rate
must be set to 100% for both problems, i.e. the genetic operators are applied each
time to both problems. Thus the idea of constructing a chromosome including both
problems proved to be a good concept for solving two interdependent problems simultaneously.
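The two-part chromosome idea can be sketched as follows. This is a hypothetical illustration with invented names and encodings, not the actual Simogga implementation; it only shows how an operator rate of 100% makes the crossover touch both parts of the chromosome at every application:

```python
import random

# Hypothetical sketch of a two-part chromosome: one part encodes the cell
# formation (CF) groups, the other the resource planning (RP) allocation.
# With an operator rate of 1.0 the crossover is applied to BOTH parts
# every time, so the two interdependent problems evolve together.

def crossover(parent_a, parent_b, operator_rate=1.0):
    child = {}
    for part in ("CF", "RP"):
        if random.random() < operator_rate:
            # one-point crossover on this part of the chromosome
            cut = random.randrange(1, len(parent_a[part]))
            child[part] = parent_a[part][:cut] + parent_b[part][cut:]
        else:
            child[part] = parent_a[part][:]  # part inherited unchanged
    return child

random.seed(0)
a = {"CF": [0, 0, 1, 1], "RP": [2, 0, 1, 2]}
b = {"CF": [1, 0, 0, 1], "RP": [0, 1, 2, 0]}
print(crossover(a, b))
```

With a rate below 1.0, some children would inherit one part unchanged, which corresponds to the 80% setting initially tested.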
Chapter 8
Applications
8.1 Introduction
Now that the Simogga has been validated, it is applied to all the case studies (CS)
found in the literature and its results are compared. In the previous chapter, all the
case studies used for the validation of the Simogga were ideal: there was always an
optimal solution without inter-cellular flow, and the construction of independent cells
was possible. The literature case studies are not ideal cases. The objective is to
construct cells that are as independent as possible.
In this chapter, the case studies found are sorted into two categories. The first
section presents the algorithm applied to the cases without alternativity, while the
second one compares the cases with alternative processes (RA = 0 and PA = 100)
and/or with alternative routings (RA = 100 and PA = 0, and RA = 100 and
PA = 100). All case studies analysed in this section are described as an incidence
matrix in Appendix F.
The case studies found in the literature are small to medium cases, often with a
basic coding as an incidence matrix. The use of the incidence matrix implies that the
evaluation is based on the diagonalisation of the matrix. The group efficacy parameter
is used to compare the cases. As explained in chapter 3, an important criticism of
resolutions based on binary incidence matrices is that they do not take into account
other information, such as production volumes, cost of machines or maximum cell size,
that may significantly influence cell formation [230]. Moreover, these methods are not
usable for large scale problems. This chapter therefore only permits a comparison of the
results, not of the efficiency of the methods.
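For reference, the grouping efficacy used throughout these comparisons can be computed directly from a diagonalised incidence matrix. The sketch below is an illustration, not the thesis's own code; it assumes the usual definition GE = (e − e_out)/(e + e_v), with e the number of 1-entries, e_out the exceptional elements and e_v the voids:

```python
# Hypothetical sketch: grouping efficacy of a diagonalised incidence matrix.
# GE = (e - e_out) / (e + e_v), where e is the number of 1-entries,
# e_out the exceptional elements (1s outside the diagonal blocks) and
# e_v the voids (0s inside the blocks).

def group_efficacy(matrix, machine_cell, part_family):
    """matrix[i][j] = 1 if part j visits machine i.
    machine_cell[i] / part_family[j] give the cell of machine i / part j."""
    e = e_out = e_v = 0
    for i, row in enumerate(matrix):
        for j, entry in enumerate(row):
            inside = machine_cell[i] == part_family[j]
            if entry == 1:
                e += 1
                if not inside:
                    e_out += 1
            elif inside:
                e_v += 1
    return (e - e_out) / (e + e_v)

# Two perfect diagonal blocks: no exceptional elements, no voids -> GE = 1.0
m = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1]]
print(group_efficacy(m, [0, 0, 1], [0, 0, 1, 1]))  # -> 1.0
```

Any 1 outside the blocks lowers e_out's numerator contribution, and any 0 inside a block inflates the denominator, so GE rewards both dense blocks and the absence of inter-cellular traffic.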
The compared value for the methods used for the cases without alternativity is
the group efficacy parameter. The final method (25) corresponds to the proposed
method developed in this thesis.
The first column of the Simogga gives the number of exceptional elements. The
best value of the group efficacy parameter and the associated maximum intra-cellular
flow are given both when the evaluation function of the Simogga is the maximisation
of the group efficacy (max GE) and when it is the maximisation of the intra-cellular
flow (max flow). The last column gives the CPU time in seconds. The heuristic used
for the cell formation problem is based on the flow. The maximum flow found in this
case is often greater than that found with the maximisation of the group efficacy.
The results of the Simogga presented in Table 8.4 are similar to the optimal
values given by the other authors except for five problems (2, 11, 14, 15 and 16).
These results can be explained by the fact that the heuristic used is oriented towards
the flow and not towards the group efficacy. Despite this, the optimal solution is
frequently found, particularly for the largest cases. Another remark can be made
about the best flow found when maximising the intra-cellular flow: this flow is equal
to or greater than the flow found when maximising the group efficacy. Depending on
the evaluation function used, the best intra-cellular flow found will differ. It is
important to note that, to compute the flow between cells for these cases, the
operation sequences must be given. As this information is not described in the
incidence matrix, the sequence of operations used is the sequence of the data,
generally written in the ascending order of the machines. If the case is encoded
differently, the best flow found will also be different, while the group efficacy
will remain the same.
8.2. CS WITHOUT ALTERNATIVITY
Pb no  Source                                 Year  Mach.  Parts  Op.  Sol.        Cells (1-19;22-25)  Cells (20-21)
Pb1    King and Nakornchai [140]              1982  5      7      14   10          2                   2
Pb2    Waghodekar and Sahu [296]              1984  5      7      20   10          2                   2
Pb3    Seifoddini [236]                       1989  5      18     46   10          2                   3
Pb4    Kusiak [151]                           1987  6      8      22   10          2                   2
Pb5    Askin and Standridge [12]              1993  6      8      18   85          3                   *
Pb6    Zolfaghari and Liang [323]             2002  7      8      21   175         3                   *
Pb7    Kusiak and Chow [151]                  1987  7      11     23   175         3                   5
Pb8    Boctor [25]                            1991  7      11     21   175         3                   4
Pb9    Seifoddini and Wolfe [237]             1986  8      12     35   280         3                   4
Pb10   Chandrasekharan and Rajagopalan [41]   1986  8      20     61   805         3                   3
Pb11   Chandrasekharan and Rajagopalan [42]   1986  8      20     91   35          2                   2
Pb12   Mosier and Taube [186]                 1985  10     10     24   3.7 × 10^3  3                   5
Pb13   Chan and Milner [39]                   1982  10     15     46   3.7 × 10^3  3                   3
Pb14   deWitte [53]                           1980  12     19     76   5.8 × 10^3  3                   *
Pb15   Askin and Subramanian [8]              1987  14     24     59   1.4 × 10^6  5                   7
Pb16   Stanfel [259]                          1985  14     24     61   1.4 × 10^6  5                   7
Pb17   McCormick et al. [173]                 1972  16     24     86   1.1 × 10^8  6                   8
Pb18   Srinivasan et al. [258]                1990  16     30     116  1.2 × 10^8  4                   6
Pb19   King [139]                             1980  16     43     124  7.7 × 10^8  5                   8
Pb20   Burbidge [33]                          1979  16     43     123  7.7 × 10^8  5                   *
Table 8.3: Comparative table with the group efficacy for cell formation problem without alternativity (continued).
                                                                         Method 25: 2PMogga
Pb                                                                       Max GE          Max Flow
no    16      17      18     19     20       21       22       23     24       GE     Flow   Flow   GE     CPU(s)
1     *       *       73.68  *      82.35    82.35    82.35    82.35  73.68    82.35  100    100    82.35  0.4
2     73.68c  54.55c  60.87  73.7c  69.57    69.57    69.57    69.57  69.57    62.5   46.15  53.85  69.57  0.4
3     *       *       79.59  *      79.59    79.69    79.59    79.59  79.59    79.59  50     82.14  79.69  1
4     *       *       *      *      76.92    76.92    76.92    76.92  76.92    76.92  78.57  78.57  76.92  0.5
5     61.9    52.17   *      88.9   *        *        *        88.9   *        88.89  70     70     88.9   0.4
6     60      44.4    *      62.1   *        *        *        73.91  *        73.91  69.23  69.23  73.91  0.4
7     *       *       53.13  *      60.87a   60.87a   60.87a   58.62  58.62    58.62  50     58.33  58.62  0.4
8     *       *       70.37  *      70.83a   70.83a   70.83a   70.37  70.37    70.37  80     80     70.37  0.5
9     *       *       68.29  *      69.44a   69.44a   69.44a   68.29  68.3     68.29  73.91  73.91  68.3   0.7
10    *       *       85.25  *      85.25    85.25    85.25    85.25  85.25    85.25  70.73  75.61  85.25  1.2
11    *       *       58.72  *      58.72    55.32    58.72    58.72  58.72    56.88  47.89  71.83  58.72  2.6
12    *       *       70.59  *      75a      75       *        70.59  70.59    70.59  100    100    75     0.6
13    37.84   38.46   92     81.8   92       92       *        81.8   92       92     100    100    92     0.9
14    35.71   34.55   *      57d    *        *        *        56.57  *        50.88  53.54  63.16  56.57  1.9
15    *       *       69.86  *      72.06a   72.06    *        68.92  70.83    68.83  72.97  72.97  72.06  1.3
16    *       *       69.33  *      71.83a   71.83    *        69.33  70.51    68.83  72.97  75.68  71.83  1.3
17    *       *       51.96  *      52.75a   51.58    53.26a   52.58  51.96    51.02  38.71  45.16  52.58  1.7
18    *       *       67.83  *      68.99    68.61    68.99a   67.83  67.83    67.83  67.44  73.26  68.99  2.6
19    *       *       54.86  *      57.53a   55.48    57.53a   54.91  54.86    55.49  62.96  62.96  55.49  2.9
20    17.57   21.1    *      51.1   *        *        *        51.15  *        51.15  43.75  55     51.15  3.2
Table 8.4: Comparative table with the group efficacy for cell formation problems without alternativity.
8.3. CS WITH ALTERNATIVE ROUTINGS AND PROCESSES 179
PA12  Sofianopoulou [250]  12  12  20  109  4  3  4.8 × 10^1   5.8 × 10^3
PA13  Nagi [192]           15  15  15  190  4  4  4.1 × 10^3   2.6 × 10^6
PA14  Nagi [192]           15  15  15  190  5  3  4.1 × 10^3   1.3 × 10^5
PA15  Won [308]            26  26  28  301  7  5  7.0 × 10^10  5.0 × 10^15
PA16  Kazerooni [133]      30  30  40  344  7  5  3.3 × 10^12  8.8 × 10^17
PA17  Kazerooni [133]      10  10  16  144  7  6  3.3 × 10^12  1.2 × 10^20
Id   Authors                               Year  Method
1    Kusiak [146]                          1987  Clustering algorithm
2    Gupta [88]                            1993  Algorithm based on a similarity coefficient
3    Kazerooni et al. [134]                1996  Genetic algorithm
4    Kazerooni et al. [133]                1997  Genetic algorithm, minimise the cost
5    Vivekanand and Narendran [294]        1998  Heuristic procedure, maximise the flexibility
6    Sofianopoulou [250]                   1999  Simulated annealing
7    Won [306]                             2000  p-median approach, maximise a similarity coefficient
8    Won [306]                             2000  p-median approach, maximise a similarity coefficient
9    Adenso-Diaz et al. [1]                2001  Tabu search
10   Suresh and Slomp [263]                2001  Multi-objective procedure
11   Yin and Yasuda [317]                  2002  Two-stage algorithm, minimise the cost (new machines)
12   Wu et al. [310]                       2004  Tabu search, minimise the number of exceptional elements
13   Lei and Wu [157]                      2005  Tabu search
14   Spiliopoulos and Sofianopoulou [253]  2007  Bounding scheme
15   Wu et al. [311]                       2009  Simulated annealing, minimise the number of exceptional elements
16   Proposed method, Simogga              2010  Genetic algorithm, maximise the intra-cellular flow
Table 8.8 gives the number of exceptional elements to quantify the inter-cellular
traffic, while Table 8.9 presents some more detailed results for [311] and the Simogga.
The results of the Simogga are composed of different values: the number of
exceptional elements (to compare the methods), and the group efficacy with the
associated best intra-cellular flow, separated into two parts: when the group efficacy
is maximised (Max GE) and when the intra-cellular flow is maximised (Max Flow). The
last column gives the computational time (CPU, in seconds).
Some of these cases are described with the sequence of operations, the production
volume and the machine capacity, such as Kazerooni et al. or Gupta [134, 133]. These
case studies are treated with the given sequence of operations and production
volumes. For the other case studies, described only by an incidence matrix, the
sequence used is defined in ascending order, as explained in the previous section for
the cases without alternativity.
The results of Wu et al. [311] are the only ones which permit a real comparison.
They found the best results of all previous methods. However, they treat only the
alternative process routes, without combining the alternative processes with
alternative routings. The largest cases that they treat are composed of 100 parts and
198 routings. Under the hypothesis of two process routes per part, the RP size of the
case is 1.27 × 10^30. There is no further information about their capacity to treat
large scale problems. Their results are better than those given by the Simogga.
However, the Simogga is able to solve more complex cases containing all types of
alternativity.
The algorithm is run with the cell constraints specified in the literature. However,
the Simogga can run with variable cells. In this case, the algorithm is run iteratively
with different cell constraints derived from the initial constraints. If the data are
consistent, the number of cells is increased by two and decreased by two. The cell size
is then computed to permit the allocation of all machines.
Table 8.8: Comparative table for problems with alternativity with the number of
Exceptional Elements (EEs) (continued).
8.4. CONCLUSION FOR LITERATURE CASE STUDIES 185
       Method 15 (Wu et al. [311])    Method 16 - Simogga
Pb     EE     GE      time            EE    GE (Max GE)  Flow    Flow (Max Flow)  GE     CPU(s)
PA1    0      90      0.003           0     90           100     100              90     0.4
PA2    0      100     0.013           0     100          100     100              100    0
PA3    *      *       *               0     93.33        64      100              88.89  0.5
PA4    *      *       *               0     66.67        34.9    55.7             58.33  0.8
PA5    5      83.33   0.031           0     80           85.71   100              77.78  0.4
PA6    2      72.22   0.05            0     72.22        88.89   88.89            72.22  1
PA7    3      81.48   0.034           6     81.48        76.92   76.92            77.78  0.9
PA8    5      69.44   0.048           1     70.27        64.71   73.33            67.57  1.4
PA9    *      *       *               5     73.81        97.39   97.39            73.81  1.5
PA10   2      82.86   0.102           6     83.33        82.61   90.48            82.86  1.4
PA11   3      80.65   0.105           11    80.65        77.78   77.78            77.42  1.1
PA12   29     49.47   0.216           3     49.12        46.15   55.38            44.26  3
PA13   *      *       *               7     49.15        39.71   53.86            42.06  6.4
PA14   1      79.52   0.528           6     50           41.16   53.86            42.06  6.5
PA15   13     72.48   0.569           12    54.12        61.54   70.93            41.46  24.3
PA16   *      *       *               10    55.11        91.26   89.26            100    56.5
PA17   *      *       *               0     83.84        72.17   0                0      95.71
RA1    *      *       *               5     72.73        66      68.67            63.64  0.3
RA2    *      *       *               25    53.47        67      67               53.47  2.4
RA3    *      *       *               20    73.33        81.17   85.6             68.82  4
RA4    24     54.29   0.313           27    50.85        58.46   61.54            49.17  2.3
RA5    26     47.45   0.506           35    43           53.49   61.63            39.64  5.2
RA6    *      *       *               10    60.4         89.42   90.88            59.05  2
RA7    *      *       *               8     39.47        92.5    97.5             36.84  3.8
RAPA1  *      *       *               14    63.83        100     100              48.08  0.7
RAPA2  10     61.9    0.134           9     64.52        75.76   85.29            61.43  1.7
RAPA3  *      *       *               17    73.81        90.8    93.53            72.41  4
Table 8.9: Comparative table for problems with alternativity with the number of
Exceptional Elements (EEs), GE and best flow.
shown in chapter 7. These cases cannot be solved by the methods based on
the incidence matrix.
It is precisely in analysing the difficulty of maximising the group efficacy parameter
that the Process heuristic RP was developed. This heuristic permits the group
efficacy to be improved by allocating all the operations so as to concentrate them
inside a unique cell. Thanks to this heuristic, the Simogga can better solve problems
with alternative routings and processes. On the contrary, for problems without
alternativity, only the Flow heuristic CF is applied and the results are not as good.
They approach the results found in the literature, but the solution is sometimes not
as good as the optimal solution proposed by the authors.
Chapter 9
Conclusions
This chapter closes this work. The main results are presented in the following section.
The next and last sections are devoted to the open perspectives.
groups. The availability constraints are essentially used in the RP problem to define
the availability of each machine to perform each operation.
The main characteristics of the Simogga are the following:
3. The main genetic operator is the crossover, which is applied to both parts of the
chromosome.
4. The second genetic operator, the mutation, has been determined to be useless
as such: the results are better without it. However, the mutation is applied
indirectly: when the population contains solutions that are too similar, these
identical solutions are modified with the mutation operator.
cells with a maximum cell size, defined by the maximum number of machines that
can be grouped in a cell. Thanks to this formula, a real evaluation of the size of the
search space can be done. This size permits the case studies to be compared in terms
of the number of solutions and not as a function of the number of parts, machines, etc.
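As an illustration of such a count, the sketch below enumerates the groupings by brute force for a toy instance. It is an assumption-based illustration, only feasible for small instances; the thesis uses a closed-form formula instead:

```python
# Brute-force count of machine groupings: partitions of m machines into
# at most max_cells non-empty cells, each holding at most max_size machines.
# Only feasible for small m; a closed-form formula replaces this in practice.

def count_groupings(machines, max_cells, max_size):
    def partitions(items):
        # classic set-partition recursion: place the first item either
        # into an existing cell of a partition of the rest, or alone
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for p in partitions(rest):
            for i, cell in enumerate(p):
                yield p[:i] + [cell + [first]] + p[i + 1:]
            yield [[first]] + p
    return sum(
        1
        for p in partitions(list(range(machines)))
        if len(p) <= max_cells and all(len(c) <= max_size for c in p)
    )

# 4 machines, at most 2 cells of at most 3 machines each
print(count_groupings(4, 2, 3))  # -> 7
```

Without the size and cell-count constraints the count is the Bell number of the machine set, which grows far too quickly to enumerate for realistic cases; hence the interest of a formula for evaluating the search space.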
1. When the machines are grouped in cells and all the operations are allocated
to a specific machine, the allocation of the machine operators can be done. As
the allocation of the operators in the cells also represents a grouping problem,
the algorithm could be adapted to solve three interdependent problems. There
would be some constraints about the different qualifications and capabilities of
the machine operators. The cost of training could be taken into account.
2. The next step in the use of the alternative processes and routings is to allow lot
splitting. In this way, the capacity constraints would be respected by selecting
two different routings for two parts of the production volume.
3. The alternative processes and routings can be used when the cell formation
problem is solved over multiple periods. The proposed algorithm is so fast that it
could be used to reassign the production for each new period, when there are
different production volumes, or when parts are added and/or removed during
this period.
This appendix contains a short introduction to the multi-criteria decision aid system
Promethee. For a full description, [27], [293] should be consulted. The Promethee
method permits the building of an outranking relation between different alternatives.
Let A be a set of solutions; for each a ∈ A, fj(a) represents the evaluation of a
solution a on a given criterion fj. Table A.1 represents a generic evaluation table.
• 0 ≤ Pj (a, b) ≤ 1;
194 APPENDIX A. ANNEXE - THE PROMETHEE METHOD
[Figure: a preference function P(a, b) as a function of d = f(a) − f(b), with thresholds −p, −q, q and p]

π(a, b) = Σ_{j=1}^{k} wj Pj(a, b)    (A.1)
where wj > 0 are weights associated with each criterion. These weights are positive
real numbers that do not depend on the scales of the criteria.
It is interesting to note that if all the weights are equal, π(a, b) will simply be the
arithmetical average of all the Pj(a, b) degrees.
π(a, b) expresses how, and to what extent, a is preferred to b, while π(b, a)
expresses how b is preferred to a over all the criteria. The values π(a, b) and π(b, a) are
computed for each pair of alternatives a, b ∈ A. In this way, a complete and valued
outranking relation is constructed on A.
A.3. EXPLOITATION FOR DECISION AID 195
Φ+(a) = (1 / (n − 1)) Σ_{b ∈ A, b ≠ a} π(a, b)

Φ−(a) = (1 / (n − 1)) Σ_{b ∈ A, b ≠ a} π(b, a)
The positive outranking flow expresses to what extent each alternative outranks
all the others. The higher Φ+(a) is, the better the alternative will be. Φ+(a)
represents the power of a, i.e. its outranking character.
The negative outranking flow expresses to what extent each alternative is
outranked by all the others. The smaller Φ−(a) is, the better the alternative will be.
Φ−(a) represents the weakness of a, i.e. its outranked character.
a S^+ b iff Φ+(a) > Φ+(b)        a S^- b iff Φ−(a) < Φ−(b)
a I^+ b iff Φ+(a) = Φ+(b)        a I^- b iff Φ−(a) = Φ−(b)
a I^I b : a and b are indifferent. The positive and the negative outranking flows of
a and b are equal.
196 APPENDIX A. ANNEXE - THE PROMETHEE METHOD
a R^I b : a and b cannot be compared. In this case, a higher power of one
alternative is associated with a lower level of weakness of the other. This usually
happens when a is good on a set of criteria on which b is weak, and
vice-versa. As the information corresponding to the alternatives is not
consistent, it seems natural that the method would not decide which one
of the alternatives is better. In such a case, it is up to the decision-maker
to assume the responsibility and to decide.
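The whole chain — pairwise preference degrees π(a, b) and the two outranking flows — can be sketched as follows. The evaluation data and the linear preference function below are invented for the illustration; they are not the evaluation table of the car example:

```python
# Minimal PROMETHEE sketch with invented data and a linear preference
# function P_j(a, b) = min(1, max(0, (f_j(a) - f_j(b)) / p_j)) for
# maximised criteria (p_j is an assumed preference threshold).

def preference(d, p):
    return min(1.0, max(0.0, d / p))

def flows(evals, weights, thresholds):
    """evals: {alternative: list of criterion values}; weights sum to 1."""
    names = list(evals)
    n = len(names)
    phi_plus, phi_minus = {}, {}
    for a in names:
        # pi(a, b) aggregated over criteria, then averaged over b != a
        phi_plus[a] = sum(
            sum(w * preference(fa - fb, p)
                for w, p, fa, fb in zip(weights, thresholds, evals[a], evals[b]))
            for b in names if b != a) / (n - 1)
        phi_minus[a] = sum(
            sum(w * preference(fb - fa, p)
                for w, p, fa, fb in zip(weights, thresholds, evals[a], evals[b]))
            for b in names if b != a) / (n - 1)
    return phi_plus, phi_minus

evals = {"Car 1": [7.0, 4.0], "Car 2": [6.0, 6.0], "Car 3": [8.0, 5.0]}
plus, minus = flows(evals, weights=[0.5, 0.5], thresholds=[2.0, 2.0])
net = {a: plus[a] - minus[a] for a in evals}
print(net)
```

Ranking by the net flow Φ = Φ+ − Φ− corresponds to the complete ranking used in the tables below; the sum of the net flows over all alternatives is always zero.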
Φ+ Φ− Φ
Car 3 0.583 0.417 0.166
Car 1 0.417 0.417 0.000
Car 2 0.417 0.417 0.000
Car 4 0.417 0.583 −0.166
Table A.3: Promethee ranking when the weights of all the criteria are 1
Φ+ Φ− Φ
Car 4 0.666 0.333 0.333
Car 3 0.619 0.381 0.238
Car 2 0.381 0.524 −0.143
Car 1 0.238 0.667 −0.429
Table A.4: Promethee ranking when the weight of the power criterion is 4
The results show that the best solution is Car 3, and that the worst is Car 4.
The method cannot rank Car 1 and Car 2 differently. Let us now compute the
Promethee ranking of these solutions when the weight of the power criterion is
set at 4 (Table A.4).
The effect of this change is clearly seen in the results: as expected, Car 4 now
turns out to be the best solution.
Appendix B
Annexe - Multi-Criteria Grouping Problems
F is called the search space, i.e. the set of all possible solutions for an instance,
and c is a cost function calculated for each solution of F and used to determine the
performance of each solution.
B.1.2 Algorithms
To solve problems it is necessary to develop methods, often called algorithms in
computer science, that describe the set of actions to be performed under given
circumstances. In one of the definitions found in the literature [73], an algorithm is
stated as the list of precise rules that specify what to do under all possible
conditions. This definition includes that of the Turing Machine [273], which is an
abstract representation of a computing device. Another definition describes an
algorithm as a finite set of instructions (evaluations and assignments) which leads
to a solution.
The complexity, O, of an algorithm defines a relationship between the size of an
instance, such as the number of objects in the Bin Packing Problem, and the resources
necessary to solve it, i.e. the amount of memory and the number of CPU cycles required.
A complexity of O(n^2), for example, signifies that the resources required evolve as the
square of the size of the instance, i.e. an instance two times larger than another one
needs four times more resources. An important category of problems consists of the
NP-hard ones, for which no polynomial time algorithm has been found so far. With
these problems, the CPU time increases exponentially with the size of an instance.
In other words, when the size of the problem increases, it becomes impossible to
compute all the valid solutions. For example, in a Bin Packing Problem involving
5 objects it is possible to compute all the different solutions to determine the best
one, whereas for 500 objects it is no longer possible. NP-hard problems can only
be solved by specific algorithms which try to reach an optimal solution, or at least a
solution as close as possible to an optimal one, in a reasonable time.
B.1.3 Heuristics
When dealing with NP-hard problems it is often necessary to use algorithms that do
not guarantee an optimal solution. This class of algorithms is known as heuristics.
A heuristic is an intuitive way to find a valid and often reasonably good solution
for a given problem in a reasonable lapse of time, i.e. a heuristic is based on rules-
of-thumb, ideas that seem to be helpful in typical instances, though without
providing any guarantee of the quality of the solution.
For example, in the Bin Packing Problem, the first-fit descending heuristic, which
consists in treating objects in descending order of size and putting each one into the
first group that can take it, is a well-known heuristic giving good results in such
cases (depending on the sizes of the bins), though not necessarily the optimal one.
The biggest problem with heuristics is that they are strongly instance- and problem-
dependent, and that the results may be very poor.
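The first-fit descending heuristic just described can be sketched in a few lines; the bin capacity and object sizes are invented for the illustration:

```python
# First-fit descending for Bin Packing: treat objects in descending order
# of size and put each one into the first bin that can still take it.

def first_fit_descending(sizes, capacity):
    bins = []  # each bin is the list of object sizes it contains
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])  # no bin fits: open a new one
    return bins

print(first_fit_descending([4, 8, 1, 4, 2, 1], capacity=10))
# -> [[8, 2], [4, 4, 1, 1]]
```

Here the heuristic happens to find an optimal packing (two bins for a total size of 20 with capacity 10), but on other instances it may open more bins than necessary.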
When the essential factor is the time of execution, heuristics are the best option.
For example, a first-fit heuristic is used in an operating system to find a memory
zone when a program allocates a given amount of data. In this case, the fact that the
solution proposed is not the best one is less important than the time needed. But,
because of this drawback, heuristics rarely reach the status of being able to provide
a solution when the quality of the solution is important, and cannot be considered
as a general approach.
B.1. ALGORITHMS, HEURISTICS AND META-HEURISTICS
[Figure: paradigm of the simulated annealing — improvements are accepted directly;
degradations are accepted with a probability that decreases with the temperature]
B.1.4 Meta-heuristics
The two disadvantages of heuristics are that the solutions proposed can often be of
very low quality and are strongly instance- and problem-dependent. Computer science
has developed several methods to work around these disadvantages. All these methods
use heuristics in some way or another, but enable the entire search space to be
searched; this is the reason why the term meta-heuristics is employed. Most meta-
heuristics need cost functions, which are, most of the time, a mathematical expression
that represents the quality of a solution for a given problem. A brief introduction
to simulated annealing and tabu search ends this section. Another meta-heuristic,
known as genetic algorithms, will be presented in the next sections.
[Figure B.2: paradigm of the tabu search — from an initial solution, build the list of
modifications, discard those in the tabu list, evaluate the remaining solutions, hold
the best one, and continue until the stop condition produces the final solution]
A second type of meta-heuristic is the tabu search [77, 76]. It is based on the idea
that making the same mistakes twice in searching for an optimal solution cannot
be acceptable. As in the case of the simulated annealing, this method makes local
improvements to an initial solution obtained by a heuristic. The best solution is
selected from the whole neighbourhood of the current one. The tabu list, a list of fixed
length, contains all the modifications carried out so far without producing any
improvement. When a specific modification is close to one on the list, it is excluded and
the next best one is adopted. Of course, the tabu list is updated at each iteration
by removing the oldest modification and inserting a more recent one. Figure B.2
illustrates the paradigm of the tabu search.
The efficiency of this meta-heuristic depends on the local improvements chosen
for a given problem and on the size of the tabu list.
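A minimal sketch of this loop, on an invented toy instance (bit vectors with bit-flip moves, the tabu list storing recently flipped positions), could look as follows:

```python
# Minimal tabu search sketch: minimise a cost over bit vectors.
# Local moves flip one bit; the tabu list forbids re-flipping a recently
# flipped position, so the search cannot immediately undo the same move.

from collections import deque

def tabu_search(cost, start, tabu_len=3, iterations=50):
    current = list(start)
    best, best_cost = list(current), cost(current)
    tabu = deque(maxlen=tabu_len)  # recently flipped bit positions
    for _ in range(iterations):
        # evaluate every non-tabu neighbour and adopt the best one
        moves = [i for i in range(len(current)) if i not in tabu]
        if not moves:
            break
        i = min(moves,
                key=lambda i: cost(current[:i] + [1 - current[i]] + current[i + 1:]))
        current[i] = 1 - current[i]
        tabu.append(i)  # oldest entry drops out automatically (maxlen)
        if cost(current) < best_cost:
            best, best_cost = list(current), cost(current)
    return best, best_cost

# toy cost: Hamming distance to a hidden target vector
target = [1, 0, 1, 1, 0]
cost = lambda x: sum(a != b for a, b in zip(x, target))
print(tabu_search(cost, [0, 0, 0, 0, 0]))
```

Note that the best move is adopted even when it degrades the current solution; it is the tabu list, not the acceptance rule, that prevents cycling.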
1. GAs use a parameter coding rather than the parameters themselves, i.e. GAs
do not have to know the meaning of the parameters.
2. GAs work on a population of solutions and not on a single one, and so explore
a larger portion of the search space.
3. GAs use a value of the function to be optimized and not another form of this
function. The function does not need to have any special analytical properties.
4. GAs use probabilistic transitions and not deterministic ones. For the same
instance of a problem (same initial conditions and data), the results may be
different after two different runs.
In fact, this can make debugging more difficult because it is impossible to
reproduce the bugs from one run to another. Because of this, there is generally a debug
mode in GAs, where under the same initial conditions two different runs will produce
the same results. This can be done by implementing a pseudo-random generator which
reproduces the same sequence of numbers if asked.
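Such a debug mode reduces to seeding the pseudo-random generator, as in this small sketch:

```python
import random

# Seeding the pseudo-random generator makes two runs reproduce the same
# sequence of "random" numbers, and hence the same GA behaviour (debug mode).

def random_chromosome(n_bits, seed=None):
    rng = random.Random(seed)  # a dedicated generator per run
    return [rng.randint(0, 1) for _ in range(n_bits)]

run1 = random_chromosome(5, seed=42)
run2 = random_chromosome(5, seed=42)
print(run1 == run2)  # -> True: same seed, same "random" chromosome
```

With no seed (the default), each run draws a different sequence and the probabilistic transitions of the GA behave as described above.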
[Figure: the loop of a basic GA — reproduction, crossover, mutation, inversion and
evaluation, repeated until the stop condition]
B.2.4 Paradigm
This section describes a basic Genetic Algorithm in order to illustrate its principal
features. A well-known example [80] will be developed progressively to help the
reader to master the concepts: we will try to optimize the function f(x) = x^2, where x
is a number coded using five bits (x ∈ [0, 31]). In this particular case the function
f(x) can, of course, be used as a cost function, which is maximum for the number
31, i.e. when all the bits coding the chromosome are set to 1. For the example, the
size of the population is fixed at 4.
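This f(x) = x^2 example can be sketched end to end as a minimal GA. The selection, crossover and mutation details below are invented illustrative choices, not the exact operators described in this appendix:

```python
import random

# Minimal GA for max f(x) = x^2, x coded on 5 bits (x in [0, 31]).
# Proportional selection, one-point crossover, rare bit-flip mutation.

def fitness(bits):
    return int("".join(map(str, bits)), 2) ** 2

def evolve(generations=30, pop_size=4, n_bits=5, mutation_rate=0.05, seed=3):
    rng = random.Random(seed)  # seeded: reproducible "debug mode" runs
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # proportional (roulette wheel) selection of the parents
        weights = [fitness(c) + 1 for c in pop]  # +1 avoids all-zero weights
        parents = rng.choices(pop, weights=weights, k=pop_size)
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, n_bits)  # one-point crossover site
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                # occasional bit-flip mutation
                child = [bit ^ (rng.random() < mutation_rate) for bit in child]
                nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the generator is seeded, two calls with the same parameters return the same chromosome, which is exactly the debug mode discussed earlier.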
7. GAs stop if and when a final condition is reached that indicates that an accept-
able solution exists; otherwise GAs make a new generation by repeating step
2.
All modern GAs hold a separate copy of the best chromosome ever computed so
as to avoid its accidental destruction when it forms part of the population. Both
the operators and the encoding scheme are equally important. The remainder of
this section is devoted to a brief introduction to each operator. We assume that the
chromosomes are represented by a string of five bits, with each bit {0, 1} representing
a gene of the chromosome.
Remark : The reader should note that the operators have to be chosen so that
they reach the optimal solution, i.e. the operators described in the next sections
cannot be used to solve every optimization problem.
B.2.4.3 Reproduction
Historically, the reproduction step is the re-creation of a simple copy of a set of
selected chromosomes that will replace a set of other selected chromosomes. In fact,
no modern GA does the reproduction step by really copying chromosomes. Rather,
the role of reproduction is to select the way in which the different chromosomes will
be treated by the crossover.
Different methods exist for the construction of these sets, but only two of them
will be dealt with here:
[Figure: roulette wheel with sections proportional to the fitness of the chromosomes]
A ball is run around the wheel and will stop at a random point, so determining
the section selected. When this method is applied t times, it selects t
individuals that will be used as parents in the crossover. The individuals with high
fitness values will naturally have more chance of being selected, and those with low
fitness values have more chance of disappearing in the next generation. The
chromosomes selected are then randomly regrouped to form t/2 pairs, which are used as
the parents for the crossovers. The number, t, of chromosomes to be selected for the
crossover can be obtained from the crossover probability. For example, in a population
of 10 chromosomes and with a crossover probability of 0.6, 6 chromosomes will be
selected as parents.
Let us apply proportional selection to our example (Table B.2). The length of
the arc for each chromosome is computed with f(x)/Σf(x). By multiplying this
length by the size of the population, it is possible to compute the number of copies
expected for each chromosome.
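This computation can be checked directly. The chromosome values below are the classic ones used in this type of x^2 illustration; they are an assumption, not necessarily those of Table B.2:

```python
# Expected copies of each chromosome under proportional selection:
# arc length f(x) / sum(f) times the population size.
# Chromosome values are assumed illustrative data, not the thesis's Table B.2.

pop = [0b01101, 0b11000, 0b01000, 0b10011]   # x = 13, 24, 8, 19
f = [x * x for x in pop]                     # 169, 576, 64, 361
total = sum(f)                               # 1170
arc = [fi / total for fi in f]               # roulette arc lengths
expected = [len(pop) * a for a in arc]       # expected number of copies
print([round(e, 2) for e in expected])       # -> [0.58, 1.97, 0.22, 1.23]
```

The chromosome with the highest fitness is expected to be copied about twice, and the weakest less than once, which is the behaviour described in the next paragraph.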
As expected, the result of the selection is that the chromosome with the highest
cost function value is copied twice while the chromosome with the lowest cost function
value is not copied at all. So, after the proportional selection, there are 2 copies
B.2. WHAT ARE GENETIC ALGORITHMS? 207
[Figure: ordered population — "good" chromosomes at one end, "bad" chromosomes
at the other]
of the chromosome stored in C2 and no copy of the chromosome which was stored in
C3. This means that a copy of the chromosome stored in C2 is stored in C3, i.e. at
this stage the chromosomes stored in C2 and C3 are identical. The two pairs forming the
parents for the potential random crossovers are (C1, C2) and (C3, C4). A disadvantage
of this method is the fact that there is no guarantee that the best individual will be
reproduced, because proportional selection uses a probabilistic approach. Also, if the
best chromosome ever computed is stored outside the population, this does not change
the problem for the best chromosome in the population, which may be different from
the best ever computed. The advantage is that the best solution is not always the
parent of a global optimum, but sometimes of a local one; this strategy enables
local optima to be overcome.
[Figure B.6: one-point crossover — (a) parents a b c d e and A B C D E with the
crossing site after the second gene; (b) children a b C D E and A B c d e]
1. The chromosomes are not copied but just ordered, i.e. there is always only one
copy of each chromosome. In fact, for well-designed operators the population
must be as diversified as possible.
2. The last inserted chromosome in the list is the best chromosome of the pop-
ulation, i.e. in comparison with proportional selection the best individual is
always reproduced.
B.2.4.4 Crossover
The role of a crossover operator is to create new chromosomes in the population
by using characteristics of parent chromosomes. If the crossover operator copies
interesting characteristics, the corresponding new chromosomes should enjoy a better
potential of adaptation, i.e. their fitness values should increase. The basic idea is
that information, known as genes, is exchanged between two selected chromosomes
to create two new chromosomes. An example of such an exchange is shown in Figure
B.6. A position called the crossing site is selected randomly from the string. The
new chromosomes are constructed by mixing the left- and the right-hand sides of
each parent’s crossing site.
Let us apply this crossover operator to our example by choosing as parents the
result of the tournament, where the randomly selected crossing site is 2 (Table B.4).
The result of the crossover shows that the newly created chromosome, C1, has a
better fitness value than the parents. In the example, chromosome C3 however has
a very low fitness value and will probably be destroyed during the next reproduction
step.
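The one-point crossover itself reduces to two slice operations; the parents and crossing site below are illustrative values:

```python
# One-point crossover: exchange the right-hand sides of two parents
# after the crossing site. Parents and site are illustrative values.

def one_point_crossover(parent_a, parent_b, site):
    return (parent_a[:site] + parent_b[site:],
            parent_b[:site] + parent_a[site:])

c1, c2 = one_point_crossover([0, 1, 1, 0, 1], [1, 1, 0, 0, 0], site=2)
print(c1, c2)  # -> [0, 1, 0, 0, 0] [1, 1, 1, 0, 1]
```

Each child keeps the left-hand genes of one parent and inherits the right-hand genes of the other, exactly as in Figure B.6.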
[Figure B.7: mutation — (a) chromosome A B C D e; (b) a randomly chosen gene is
modified, giving A B C D E]
B.2.4.5 Mutation
If the crossover was the only existing operator, it would be impossible for GAs to
scan the whole search space. Indeed, if we assume that, in the initial population
of our example, no chromosome has the third bit set to 1, the best solution for the
problem (corresponding to a chromosome with all the bits set to 1) would never be
found.
The role of the mutation operator is to randomly insert new characteristics in the
population to diversify it. An example of a mutation is shown in Figure B.7, where
a gene is randomly chosen and modified.
In our example, this modification could consist of transforming a bit from 1 to 0
or from 0 to 1. If chromosome C1 in our example is chosen and the randomly selected
bit is the third, C1 will be modified as 1 1 1 1 1, and the best solution to the problem
will have been found (Table B.5).
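A minimal sketch of this bit-flip mutation (the function name is illustrative; 1 1 0 1 1 is the chromosome C1 of the example):

```python
def mutate_bit(chromosome, position):
    """Flip the bit at the given position (1 -> 0 or 0 -> 1)."""
    mutated = list(chromosome)
    mutated[position] = 1 - mutated[position]
    return mutated

# Flipping the third bit of 1 1 0 1 1 yields 1 1 1 1 1, the optimum of the example.
best = mutate_bit([1, 1, 0, 1, 1], 2)
```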
Even if the interest of such an operator is clear in certain cases, the probability
that a mutation occurs is very low because too many mutations would destroy the
effects of the reproduction and the crossover. So it is necessary to adopt a method
to choose the chromosome and, possibly, the gene on which the mutation must be
carried out.
Section B.2.4.7 discusses some strategies that can be used.
B.2.4.6 Inversion
The crossover and the mutation operators affect the information contained in the
chromosomes. The inversion operator does not change the information, but will
change its presentation. Again, to understand this point, it is important to know
that the position of a given gene in a chromosome has an influence on whether this
gene will, or will not, be used by the different operators. For example, in the basic
crossover presented at Figure B.6, a gene which is in the middle of a chromosome has
more chance of being involved in a crossover than the first or last gene. It is therefore
sometimes interesting to change the way information is presented. Figure B.8 shows
210 APPENDIX B. ANNEXE - MULTI-CRITERIA GROUPING PROBLEMS
Figure B.8: An example of inversion: (a) A B C D E; (b) A B C E D.
a simple inversion operator if it is assumed that the value of the corresponding fitness
function is not changed: two genes have been randomly chosen, here D and E, and
their information exchanged.
The pertinence of this operator strongly depends on the problem studied, because
the operator must not affect the information contained in a chromosome. This
means that this operator is not always applied in real situations. In our example,
inverting two bits of the chromosome would automatically change the number repre-
sented whenever the two bits are set to different values. In the Grouping Genetic Algorithms
presented in section B.4, the inversion operator is applied.
• A given number of generations has been run through without the best chromo-
some in the population changing. When a given threshold is reached, the best
chromosome in the population is chosen, a mutation is effected on it and the
result replaces a chromosome in the population.
The following approaches can be used to choose the chromosome to be replaced:
• The chromosome with the lowest fitness value is used for the mutation.
• A chromosome is chosen using the roulette wheel strategy (see section B.2.4.3.1).
In some GAs, when a mutation must be carried out on the best chromosome ever
computed, a special operator known as the strong mutation is used. In general, this
type of operator modifies the chromosome profoundly while a normal mutation only
brings about minor changes.
The probability, $p_i$, of selecting a string $A_i$ is given by:

$$p_i = \frac{f_i}{\sum_i f_i}$$
The number of times each string, $A_i$, should be reproduced is given by $|A| \cdot p_i$.
So, after reproduction, the quantity of schema $H$ in generation $g + 1$ (the index $r$
specifies that only the reproduction is involved) is given by:

$$m_r(H, g+1) = \frac{m(H, g) \cdot |A| \cdot f(H)}{\sum_i f_i}$$
where f (H) represents the average fitness of the strings containing the schema H in
generation g.
But the average fitness, $\bar{f}$, of the population is given by:

$$\bar{f} = \frac{\sum_i f_i}{|A|}$$
So the quantity of schema $H$ can be written as:

$$m_r(H, g+1) = m(H, g) \cdot \frac{f(H)}{\bar{f}}$$
It will be seen from this last expression that schemata with a better level of
adaptation than the average will have more copies in the next generation. Let us
suppose that a given schema, $H$, has a fitness value higher than the average by a
quantity $c \cdot \bar{f}$, where $c$ is a constant, i.e. $f(H) = \bar{f} + c \cdot \bar{f}$. The previous equation can
be rewritten as:

$$m_r(H, g+1) = \frac{m(H, g) \cdot (\bar{f} + c \cdot \bar{f})}{\bar{f}} = (1 + c) \cdot m(H, g)$$
If we assume that $c$ has a constant value during the whole process and that we
begin with $g = 0$, we then have:

$$m_r(H, g) = m(H, 0) \cdot (1 + c)^g$$

i.e. above-average schemata receive an exponentially increasing number of copies. A
crossover, however, can destroy a schema, $H$, when the crossing site falls within its
defining length, $\delta(H)$. The probability, $p_s$, that the schema survives a crossover is
therefore:

$$p_s = 1 - \frac{\delta(H)}{L - 1}$$
If we assume that the crossover occurs with a probability $p_c$, the survival
probability will be given by:

$$p_s \geq 1 - p_c \cdot \frac{\delta(H)}{L - 1}$$
We know that for two independent events $A$ and $B$ it is possible to write:

$$P(A \cap B) = P(A) \cdot P(B)$$
So, if the assumption is made that the reproduction and crossover operators
are independent, it will be possible to compute the quantity of schema $H$ in the
population following these operators (the index $rc$ specifies that reproduction and
crossover are involved):

$$m_{rc}(H, g+1) \geq m(H, g) \cdot \frac{f(H)}{\bar{f}} \cdot \left[ 1 - p_c \cdot \frac{\delta(H)}{L - 1} \right]$$
With this latter expression it is possible to understand the effects of the two
operators. The survival of a schema depends on two factors; namely:

1. that the fitness value of the schema is above or below the average fitness value;

2. that the defining length, $\delta(H)$, of the schema is short or long.

Finally, a mutation destroys a schema when it modifies one of its $o(H)$ fixed positions,
where $o(H)$ is the order of the schema. If each position is mutated with a probability
$p_m$, the schema survives with probability $(1 - p_m)^{o(H)} \approx 1 - o(H) \cdot p_m$ for small
$p_m$. Combining the three operators gives:

$$m_{rcm}(H, g+1) \geq m(H, g) \cdot \frac{f(H)}{\bar{f}} \cdot \left[ 1 - p_c \cdot \frac{\delta(H)}{L - 1} - o(H) \cdot p_m \right]$$
The conclusion is that a small, low-order schema showing better performance
than the average will be sampled at an exponentially increasing rate in the following
generations. This is known as the fundamental theorem of Genetic Algorithms. Of
course, this demonstration depends closely on the strings and the operators used to
represent the schemata, but it illustrates the viability of this method.
The number of schemata usefully processed in each generation can be estimated as:

$$n_s = \frac{(L - L_s + 1) \cdot |A|^3}{4}$$

where $L_s$ is the length of the schema and, as previously, $L$ is the length of the
chromosomes and $|A|$ the size of the population.
It will be seen that the number of schemata is proportional to the cube of the size
of the population. It can be assumed that this number is of order $|A|^3$, i.e. $O(|A|^3)$.
In choosing the size of a population it is important to remember that large populations
process more schemata and cover more of the search space, but the resources needed
in terms of CPU and memory also increase. When the genetic operators are well
designed, a population of 16 or 32 chromosomes can be chosen.
Schemata with a short defining length and good performance are of such impor-
tance in Genetic Algorithms theory that they are called building blocks. GAs use
a juxtaposition of these building blocks to find a quasi-optimal solution. Many
empirical results have confirmed the importance of these building blocks
[15, 219, 86].
similarity criteria, the algorithm tries to change the assignments of the objects until
a stop condition is met. The main steps of this method involve:

• Choosing a center for each group; this can be carried out by randomly choosing
k objects in the set.

• Assigning each object to the group whose center is the nearest.

• Recomputing the center of each group, the last two steps being repeated until
the stop condition is met.
Figure B.9 illustrates the k-Means clustering algorithm. For the stop condition,
most k-Means implementations stop when the centers have not changed between two
iterations.
Many variants of the k-Means algorithm have been developed [7]. Some methods
have been proposed for choosing the initial group centers optimally, for example by
creating sub-sets from the initial set (subsampling) and by running a specific k-Means
on each set to construct the initial centers that will be used to group the whole set
[26].
As already explained, a main problem is to choose the correct number, k, of
groups. Some developments enable groups to be split or merged during the k-Means
by using conditions relative to the distances between the centers of the groups and be-
tween the centers and the objects in the same group, such as the ISODATA algorithm
[17].
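The k-Means loop described above can be sketched as follows; the two-dimensional points and the Euclidean distance are illustrative assumptions:

```python
import random

def k_means(points, k, max_iter=100):
    """Basic k-Means: random initial centers, then alternate assignment and recentering."""
    centers = random.sample(points, k)  # k random objects as initial centers
    for _ in range(max_iter):
        # Assign each point to the group of its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0])**2 + (p[1] - centers[c][1])**2)
            groups[i].append(p)
        # Recompute the centers; stop when they no longer change.
        new_centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return groups, centers
```

The stop condition is the one most implementations use: unchanged centers between two iterations.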
Figure B.10: An edge-weighted undirected graph (a) and its minimum spanning tree
(b).
graph can have a zero length. When a path starts and finishes with the same vertex
and has a length greater than zero, it is called a cycle. A path or cycle not containing
two or more occurrences of the same vertex is known as a simple path or cycle.
A tree is a connected graph without cycles. A spanning tree of a connected graph
is a subgraph, which is a tree and that contains all the vertices of the corresponding
graph. In the case of edge-weighted undirected graphs there is a spanning tree with
the least total edge weight, which in fact is the minimum spanning tree. Figure B.10
shows a graph and the corresponding minimum spanning tree.
A simple algorithm to solve the minimum spanning tree problem is
described by Prim [206]. The algorithm starts with an arbitrary vertex, which is
considered to be the initial tree. It is assumed that each vertex has the attributes
distance and via-edge. The former is used to register the shortest distance from a
vertex not yet in the tree to a vertex already selected. The latter represents the edge
through which this shortest connection is made. In each iteration, the vertex with the
minimum distance value is added to the tree together with the edge of its via-edge
attribute. The values of the distance and via-edge attributes are then updated for
the vertices not yet in the tree. The algorithm stops, of course, when all vertices are
in the tree.
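Prim's algorithm with the distance and via-edge attributes can be sketched as follows; the dictionary-based graph representation is an assumption:

```python
def prim_mst(vertices, weight):
    """Prim's algorithm: grow a tree from an arbitrary vertex, always adding the
    outside vertex with the minimum distance attribute through its via-edge."""
    def w(u, v):
        return weight.get((u, v), weight.get((v, u), float('inf')))
    start = vertices[0]
    # distance / via-edge attributes for the vertices not yet in the tree
    dist = {v: w(start, v) for v in vertices if v != start}
    via = {v: (start, v) for v in dist}
    edges = []
    while dist:
        u = min(dist, key=dist.get)   # vertex with the minimum distance value
        edges.append(via[u])          # add it together with its via-edge
        del dist[u], via[u]
        for v in dist:                # update the attributes of the others
            if w(u, v) < dist[v]:
                dist[v], via[v] = w(u, v), (u, v)
    return edges
```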
An application of the minimum spanning tree problem adapted to data clustering
has been proposed by Zahn [321]. The idea is to construct a minimum spanning tree
for the points representing the data, and then remove the edges with the largest lengths
so that each set of connected points forms a group. Figure B.11 shows an illustration:
after the minimum spanning tree has been computed, the edges corresponding to the
greatest lengths of 6 and 4 are removed so that the resulting clustering consists of 3
groups.
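Zahn's idea can be sketched independently of the tree construction used; for brevity this sketch builds the minimum spanning tree Kruskal-style from sorted pairwise distances (the points are illustrative):

```python
def mst_clusters(points, n_groups):
    """Cluster points by removing the (n_groups - 1) longest edges of their MST."""
    def d2(i, j):
        return (points[i][0] - points[j][0])**2 + (points[i][1] - points[j][1])**2
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    # Build the MST (Kruskal-style) over all pairwise distances.
    parent = list(range(len(points)))
    pairs = sorted(((i, j) for i in range(len(points)) for j in range(i + 1, len(points))),
                   key=lambda e: d2(*e))
    mst = []
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((i, j))
    # Remove the longest edges; the remaining connected components are the groups.
    mst.sort(key=lambda e: d2(*e))
    kept = mst[:len(mst) - (n_groups - 1)]
    parent = list(range(len(points)))
    for i, j in kept:
        parent[find(i)] = find(j)
    return [find(i) for i in range(len(points))]  # one label per point
```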
Figure B.12: Flowchart of the nearest-neighbor clustering algorithm: each object to
insert goes into the group of a nearest neighbor at minimum distance, or into a new
group if no neighbor is close enough, until all objects are inserted.
Figure B.13: Two grouping solutions: C1 with groups E = {3,7,4}, C = {2,5},
F = {1,6,8}; C2 with groups A = {3,5}, H = {2,7,8}, D = {1,4,6}.
An interactive clustering method has been proposed by Lu and Fu [165]. Each
unassigned object is inserted into the group of its nearest neighbor if the correspond-
ing distance is below a given threshold. Figure B.12 illustrates this algorithm.
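A sketch of this insertion rule; the threshold, the Euclidean distance and the data are illustrative:

```python
def threshold_clustering(points, threshold):
    """Insert each object into the group of its nearest already-assigned neighbor
    if that distance is below the threshold; otherwise open a new group."""
    groups = []  # each group is a list of points
    for p in points:
        best = None  # (distance, group index) of the nearest assigned neighbor
        for gi, g in enumerate(groups):
            for q in g:
                d = ((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5
                if best is None or d < best[0]:
                    best = (d, gi)
        if best is not None and best[0] < threshold:
            groups[best[1]].append(p)
        else:
            groups.append([p])
    return groups
```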
B.4.1 Encoding
There are other applications of Genetic Algorithms to solve grouping problems, but
what makes GGA a well-designed solution is the coding used by Falkenauer. To
illustrate the encoding scheme used in GGA, let us consider the example in Figure
B.13 where letters represent the groups and the numbers represent the objects.
Most GAs dealing with grouping problems choose the object assignments as
the information to store in the genes. Such chromosomes could be written:
C1 = F C E E C F E F
C2 = D H A D A D H H
For C1 , the first object is assigned to group F , the second object to group C,
the third and the fourth objects to group E, and so on. The two solutions have no
group in common, which explains why different letters are used for the two chromosomes.
Assuming that a conventional crossover with the crossing site taken at position 3 is
applied, the resulting chromosomes will be:
C1 | C2 = F C E | D A D H H
C2 | C1 = D H A | E C F E F
It is evident that these newly created chromosomes make no sense. Moreover, if
some constraints on the groups exist, the resulting chromosomes will certainly contain
many illegal groups. In GGA, the chromosomes are enhanced with a group element
containing the group composition:
C1 = F C E E C F E F : F C E
C2 = D H A D A D H H : D H A
All the operators work on the group element of the chromosomes. This coding
respects the spirit of the building blocks because GGA always manipulates groups. This
coding however has a technical consequence, namely that the different chromosomes
in the same population have different lengths.
To detect identical solutions, the object element can be renumbered in the order in
which the groups first appear:

F C E E C F E F → 1 2 3 3 2 1 3 1
The first object is in group 1 = F The second object has been put into group
2 = C because it is not grouped with the first object. The third object is put into
group 3 = E because it is not grouped with the first or the second object. The fourth
object is put into the same group 3 as the third one. The other objects are handled
identically.
When the same method is employed with the second chromosome, the new object
element becomes:
H A D D A H D H → 1 2 3 3 2 1 3 1
The two newly created objects elements are identical, i.e. the groups are identical.
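The renumbering used above to detect identical groupings can be sketched as:

```python
def canonical_form(object_element):
    """Renumber groups in order of first appearance, so that two chromosomes
    coding the same grouping under different labels get the same form."""
    numbering = {}
    canonical = []
    for group in object_element:
        if group not in numbering:
            numbering[group] = len(numbering) + 1
        canonical.append(numbering[group])
    return canonical

# The two chromosomes of the example code the same grouping:
c1 = canonical_form(list("FCEECFEF"))
c2 = canonical_form(list("HADDAHDH"))
```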
When GGA identifies the same solution on numerous occasions in the same popu-
lation, it keeps only one chromosome coding this solution; all the other duplicates
are eliminated.
B.4. GROUPING GENETIC ALGORITHMS 219
Figure B.15: The GGA crossover applied to bin packing: (1) select a crossing section;
(2) inject the selected bins, after which some objects may appear twice; (3) eliminate
empty bins and bins with doubled objects; (4) reinsert the missing objects.
B.4.3 Initialization
Once the coding has been defined, the Grouping Genetic Algorithms must initialize
the population. The method used depends on the particular problem because the
different solutions must satisfy the hard constraints. Because GGA is a meta-heuristic,
heuristics are used most of the time to initialize the population. For example,
Falkenauer suggests a first-fit heuristic for the bin packing problem.
An important point concerning GGA is the diversity of the population, i.e. the
heuristic must be adapted to produce a different solution each time it is called upon.
If the objects are treated in random order in the case of the first-fit heuristic, the
diversity of the population will be preserved. Some heuristics cannot therefore be used for
the initialization. In the bin packing problem the first-fit descending heuristic, where
the objects are processed in descending order of size, is not used for initialization
because if all the objects have different sizes, the heuristic will always produce the
same solution and construct a population with identical chromosomes. This is of no
particular interest.
B.4.4 Crossover
Crossover is one of the most important operators in genetic algorithms. The crossover
paradigm used for the Grouping Genetic Algorithms is shown in Figure B.15.
The crossover consists of four steps:

1. A crossing site is selected in each parent, defining a crossing section of bins.

2. The bins selected by the crossing site of one parent are inserted at the crossing
site of the second parent. At this stage, some objects may appear in more than
one bin.
Figure B.16: The GGA mutation: a few groups are chosen randomly, eliminated from
the solution, and the objects left aside are reinserted.
Figure B.17: The GGA inversion: two groups (here A and D) exchange their positions
in the group element.
3. The existing bins containing objects that are already in the inserted bins are
eliminated, i.e. some objects are no longer in a bin. If some bins are empty,
they are also removed from the solution.

4. The objects left aside are reinserted into the solution.
With two parents it is possible to create two children by inserting the selected
bins of the first parent into the second one, and by doing the reverse. Concerning the
last step, a method must be used for the reinsertion. This method in fact depends on
the problem to be solved because the operator must construct valid solutions.
Falkenauer suggests, for example, a first-fit descending heuristic for the bin packing
problem.
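For the bin packing case, the four steps can be sketched as follows; a greedy first-fit reinsertion stands in for the problem-specific insertion method, the injected bins are simply appended rather than inserted at a crossing site of the second parent, and the sizes and capacity are illustrative:

```python
import random

def gga_crossover(parent_a, parent_b, sizes, capacity):
    """One GGA child for bin packing: inject a crossing section of bins from
    parent_a into parent_b, drop bins with doubled objects, reinsert the rest."""
    # 1. Select a crossing section (a contiguous run of bins) in parent_a.
    i = random.randrange(len(parent_a))
    j = random.randrange(i, len(parent_a)) + 1
    injected = [set(b) for b in parent_a[i:j]]
    covered = set().union(*injected)
    # 2./3. Inject those bins; eliminate existing bins that would contain doubles.
    survivors = [set(b) for b in parent_b if not set(b) & covered]
    child = survivors + injected
    # 4. Reinsert the objects left aside with a first-fit heuristic.
    all_objects = set().union(*(set(b) for b in parent_b))
    for obj in sorted(all_objects - set().union(*child)):
        for b in child:
            if sum(sizes[o] for o in b) + sizes[obj] <= capacity:
                b.add(obj)
                break
        else:
            child.append({obj})  # no bin can accept it: open a new bin
    return child
```

The second child is obtained by swapping the roles of the two parents.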
B.4.5 Mutation
The role of a mutation operator is to insert new characteristics into a population
to widen the exploration of the search space of the Genetic Algorithm. In the case of the Grouping
Genetic Algorithms, the operator proposed is illustrated in Figure B.16.
The idea is to randomly choose a few bins and to remove them from the solution.
The objects attached to these bins are then reinserted into the solution. The method
used for the insertion of the objects is generally the same as the one used for the
crossover operator.
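A sketch of this mutation for the bin packing case; the first-fit reinsertion and the data are again illustrative:

```python
import random

def gga_mutation(solution, sizes, capacity, n_removed=1):
    """Remove a few random bins and reinsert their objects first-fit."""
    bins = [set(b) for b in solution]
    removed = random.sample(range(len(bins)), n_removed)
    left_aside = set().union(*(bins[i] for i in removed))
    bins = [b for i, b in enumerate(bins) if i not in removed]
    for obj in sorted(left_aside):            # reinsert the objects left aside
        for b in bins:
            if sum(sizes[o] for o in b) + sizes[obj] <= capacity:
                b.add(obj)
                break
        else:
            bins.append({obj})                # open a new bin if none can accept it
    return bins
```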
B.4.6 Inversion
The role of the inversion operator is to propose the same solution to the Grouping
Genetic Algorithm, but presented differently. As pointed out in B.3, a single solution may have
different presentations, and because crossovers work through crossing sites, the way
in which a solution is presented influences the crossover operator’s results. The first
group appearing in the group element of a chromosome is less likely to
be chosen than the other groups. It is therefore important to include this operator
in the Grouping Genetic Algorithms. Figure B.17 gives the philosophy behind the
inversion.
Two groups are randomly selected and their positions are switched in the solution.
In the example, when the crossing site selected is between the second and third
B.5. MULTIPLE OBJECTIVE PROBLEMS 221
positions, the bins inserted will be A and C in the first solution and D and C in the
second.
• non-Pareto approaches;

• Pareto-based approaches.

The few methods proposed here have been chosen to illustrate the different tech-
niques used to combine GAs and multiple objective optimization. But the widely
used concept of Pareto fronts will be described first of all.
Figure B.18: The Pareto optimal front (a) and dominance relations in objective
space (b).
of the criteria values. Once this cost function has been constructed, a method is
used to solve the problem by optimizing this single function. This type of solution
works when the objectives do not compete, i.e. when an improvement of one criterion
does not negatively influence any other criterion. Moreover, it is impossible to
apply this approach when the criteria do not have the same physical dimension, e.g.
when the price criterion is expressed in k€ and the power criterion in kW.
Another approach widely used in research into multiple objective optimization is
due to Pareto [199]: given a set of solutions, a multi-objective decision is used to
select a solution in this reduced search space. The concept of the Pareto front is
illustrated in Figure B.18, where five solutions A, B, C, D and O are represented for
a problem with two criteria, f1 and f2 . The solution, O, is not dominated by any
other solution, i.e. it is not possible to improve one criterion without downgrading
another. Such a solution is called a Pareto optimum. All the Pareto optima form
the so-called Pareto-optimal front.
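Assuming that both criteria are to be minimized, the Pareto-optimal set of a list of evaluated solutions can be computed as:

```python
def dominates(a, b):
    """a dominates b if a is at least as good on every criterion and strictly
    better on at least one (minimization of all criteria is assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the solutions not dominated by any other solution."""
    return [s for s in solutions if not any(dominates(t, s) for t in solutions)]
```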
(Figure: two schemes for integrating a multi-criteria decision aid in a GA: (a) the
decision aid is integrated into the evaluation, providing the cost function used at
each generation before the genetic operators are applied; (b) the GA produces a set
of solutions to which the decision aid is applied afterwards to select the final
solution.)
of the search space. Because the method is called upon each time an evaluation of a
population is needed, it is important for the method to be rapid. The Promethee
method [27, 293] is the multi-criteria decision-aid used by Rekiek.
A brief introduction to the Promethee multi-criteria decision aid system is given
in appendix A. At each generation all the chromosomes are considered to be a set of
solutions to be ranked with regard to the criteria, i.e. at each generation a multiple
objective problem must be solved. Each problem is defined by a set of solutions C1 ,
C2 , · · · , C|A| and CBest (the |A| chromosomes of the population plus the best
chromosome ever computed, CBest ) and a set of |O| objectives.
An evaluation criterion, fo , and a weight, wo , are associated with each objective,
o. An evaluation fo (Cc ) corresponds to each couple of chromosome Cc and evaluation
criterion fo . This evaluation fo (Cc ) is a real number representing the
quality of the chromosome for the given criterion. All these evaluations make up the
evaluation table used as input by the Promethee method (Table B.7).
Promethee computes a net flow φi for each chromosome. The net flow is the
result of comparisons with all the other chromosomes for all the criteria. Using the net
flows it is possible to establish a ranking of the chromosomes from the highest
to the lowest values of φi .
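As a sketch of this ranking step: the version below uses the simple "usual" preference function (a solution is preferred on a criterion as soon as it is strictly better), which is only one of the preference functions offered by Promethee; unnormalized weights and maximization of every criterion are assumptions:

```python
def net_flows(evaluations, weights):
    """Net flow of each solution: outgoing minus incoming weighted preferences,
    averaged over all pairwise comparisons (PROMETHEE II, 'usual' criterion)."""
    n = len(evaluations)
    flows = []
    for a in range(n):
        phi = 0.0
        for b in range(n):
            if a == b:
                continue
            for fa, fb, w in zip(evaluations[a], evaluations[b], weights):
                if fa > fb:
                    phi += w      # a is preferred to b on this criterion
                elif fb > fa:
                    phi -= w      # b is preferred to a
        flows.append(phi / (n - 1))
    return flows
```

Sorting the chromosomes by decreasing net flow yields the ranking used by the method.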
The method used in this thesis to determine the value of the cost function is the
following. Let us suppose that at generation g the chromosomes were
ranked as s1 , · · · , s|A|+1 , where s1 was the best ranked chromosome and s|A|+1 the
worst ranked one. Let us also assign to each chromosome a cost function, cg (Cc ). The
cost functions can then be computed using the ranking obtained with Promethee:

$$c_g(C_c) = \begin{cases} g + 1.1 & C_c = s_1 \text{ and } s_1 \neq C_{Best} \\ c_{g-1}(C_{Best}) & C_c = s_1 \text{ and } s_1 = C_{Best} \\ 1 & C_c = s_2 \text{ and } s_1 \neq C_{Best} \\ \frac{|A| + 1 - r}{|A|} & C_c = s_r \neq C_{Best} \text{ and } r > 2 \end{cases}$$
The value of the cost function of the highest ranked chromosome is set to g + 1.1 if
it is not the best chromosome ever computed; otherwise its cost function remains unchanged.
The value of the cost function for the second highest ranked chromosome is set to
1. The values of the cost function for the other chromosomes are proportional to the
corresponding ranking.
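A sketch of this assignment; chromosomes are identified by arbitrary ids, ranks are 1-based, and the cases left undefined by the formula (e.g. CBest ranked second) are folded into the proportional rule:

```python
def cost_from_ranking(ranking, g, c_best, prev_best_cost):
    """ranking: chromosome ids ordered best (s1) to worst (s_{|A|+1}).
    Returns the cost value assigned to each chromosome id."""
    n = len(ranking)                 # |A| + 1 ranked chromosomes
    size = n - 1                     # |A|
    costs = {}
    for r, c in enumerate(ranking, start=1):
        if r == 1:
            # g + 1.1 for a new best; unchanged if the best stays first.
            costs[c] = prev_best_cost if c == c_best else g + 1.1
        elif r == 2 and ranking[0] != c_best:
            costs[c] = 1.0           # the runner-up gets 1
        else:
            costs[c] = (n - r) / size  # (|A| + 1 - r) / |A|, proportional to rank
    return costs
```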
Let us illustrate the method on the example given in Table B.8 for a population of
4 chromosomes and the first 2 generations. The ranking is shown for each generation
and the cost function is computed.
After initialization, no best chromosome has yet been computed, and CBest is therefore
ranked as the worst one. Chromosome C1 is ranked as the best one, has
the greatest cost function value, and will be chosen as the best chromosome. After
B.6. MULTIPLE OBJECTIVE GROUPING GENETIC ALGORITHMS 227
the first generation, the ranking leaves the best chromosome in its position as the
best ranked one, i.e. its cost function has not changed and is still better than that
of the others. After the second generation, the best ranked chromosome is no longer
CBest but C3 The cost function of C3 becomes 2 + 1.1 and is then better than that
of the best chromosome. Chromosome C3 will be chosen as the best chromosome
ever computed. The fact that C1 is better ranked than CBest and has worse cost
function is not important because CBest will be replaced by C3 and disappear from
the Genetic Algorithm.
The advantage of the described method is that the GGA operators do not know
anything about the multi-criteria decision-aid method used for ranking the chromo-
somes. Integration into standard GGA is therefore facilitated.
pared. When dealing with approximate multi-criteria problems2 , the values of the
criteria for each solution are just an approximation of the quality of the solution. It
can be added that when comparing two solutions, S1 and S2 , where S1 is a better
solution than S2 , the values of the other solutions in the population may lead to S2
being ranked as the best, i.e. the actual best solution is displaced. This can be seen
as a problem where a local optimum hides the global optimum.
To avoid this problem, an idea could be to store in a separate set all the dif-
ferent solutions that the MO-GGA has at some point ranked as the best solution.
At the end of the Genetic Algorithm, these best solutions are checked by applying
Promethee to this set. The solution ranked first is then returned as the result of
the Genetic Algorithm. This solution has been successfully applied to a cell forma-
tion problem where an application of MO-GGA was used [286]. While this method
decreases the probability that a local optimum hides the global optimum, it does not
avoid it. In fact, the ranking done after the run of the Genetic Algorithm can also
be incorrect.
B.7 Conclusions
One field of Computer Science involves the development of algorithms to solve opti-
mization problems. An important category of these problems consists of the NP-hard
ones, for which no polynomial time algorithm has been found so far. When dealing
with NP-hard problems it is, most of the time, necessary to use meta-heuristics.
One set of meta-heuristics are the Genetic Algorithms (GAs), which are based on
the natural evolution of a population of individuals. Grouping problems must find an
optimal clustering of a given set of objects that respects constraints. Most
grouping problems are NP-hard; it is therefore necessary to use meta-heuristics.
The Grouping Genetic Algorithms (GGA) are a class of GAs suited for these group-
ing problems. Sometimes it is impossible to find a single cost function that represents the
quality of a solution, and several criteria are needed. In these cases, the problems are
multiple objective problems. Some grouping problems are multiple objective prob-
lems, and a set of algorithms known as the Multiple Objective Grouping Genetic
Algorithms (MO-GGA) were developed to solve them.
2. An approximate multi-criteria problem is a problem whose aim cannot be accurately char-
acterized in any way. A set of criteria is defined to approximate the characteristics of the target
without guaranteeing an exact match between the criteria and the characteristics. Each criterion
defines a function that must be either maximized or minimized.
Appendix C
This appendix details the coding of the heuristics and operators used in the algorithm
Simogga, while chapter 6 details the encoding and illustrates the method with
some examples.
This appendix follows the flowchart presented in Figure C.1. Firstly, the coding,
based on the GGA coding, is presented and illustrated, and the rates used for the
genetic operators, computed on the basis of the data, are given. Secondly, the
initialisation phase is described with the different heuristics used. The generation is
then completely detailed, with the selection process and the different genetic operators,
followed by the evaluation phase and the stopping condition. The appendix finishes
with a study of all the parameters used.
Figure C.1: Flowchart of the 2PMOGGA: problem encoding, computation of the
operator rates, creation of the initial population, population fitness evaluation,
generations, and the stop condition (200 generations or 100% intracellular flow),
followed by problem decoding into the final solution.
C.1 Initialisation
C.1.1 Rand Group Heuristic
Our Rand Group Heuristic is based on a first fit with an initial random step. In a first-
fit heuristic, the objects are treated in random order. Each object
is inserted in the first group able to accept it (respecting the capacity and
acceptability constraints of the group). The search for an acceptable group is made
among the groups already in use. If no group is found, a new group is inserted in the
system. By this process, the number of groups used is always minimal.
To authorize the creation of solutions with different numbers of groups, a proba-
bility p is used to create a new group before the search for an acceptable group for the
object.
229
230 APPENDIX C. ANNEXE - ALGORITHM SIMOGGA
Figure C.3: Flowchart of the random heuristic used for the RP and CF problem
C.1. INITIALISATION 231
Step H0 (initialisation): the probability of creating a new group is

$$p = \frac{MaxGroups - NbGroups}{MaxGroups}$$

where MaxGroups is the maximum number of groups allowed for the problem and
NbGroups is the actual number of groups in use.
The probability p of creating a new group before the assignment of the ob-
ject to a group decreases with the number of groups created. The number
of groups used will not necessarily be minimal: solutions will randomly
contain different numbers of groups, except where the capacity requires a given number.
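A sketch of the Rand Group Heuristic; the acceptance test is reduced to a capacity check, and the data are illustrative:

```python
import random

def rand_group_heuristic(objects, sizes, capacity, max_groups):
    """First fit with an initial random step: with probability p a new group is
    created before the search; p decreases as more groups are created."""
    objects = list(objects)
    random.shuffle(objects)              # treat the objects in random order
    groups = []
    for obj in objects:
        p = (max_groups - len(groups)) / max_groups
        if len(groups) < max_groups and random.random() < p:
            groups.append([])            # open a new group before the search
        for g in groups:                 # first group able to accept the object
            if sum(sizes[o] for o in g) + sizes[obj] <= capacity:
                g.append(obj)
                break
        else:                            # no acceptable group: insert a new one
            groups.append([obj])
    return [g for g in groups if g]      # groups left empty are dropped
```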
(Flowchart of the Flow Heuristic CF: an initialisation phase computes the flow
matrix, deletes the objects that are not allowed, and puts the allowed objects not
yet grouped in a random list; once the RP is done, each object is assigned by
finding either the cell with the maximum flow from and to the current
object/machine, or the unassigned machine with the maximum flow from and to the
current operation, a new cell being created for the latter when it carries the
larger flow.)
For the Flow Heuristic CF, the different steps of the Rand Group Heuristic are com-
pleted as follows:

1. Find the cell c̃ with which the current object/machine has the max-
imum flow $\phi_{maxCell}$. $\tilde{\phi}^1_{mc}$ represents the total flow between the ma-
chine m and all the machines belonging to the cell c:

$$\tilde{\phi}^1_{mc} = \sum_{n=1}^{n_M} z_{nc} \cdot \phi_{mn} \qquad \text{(C.1)}$$

$$\phi_{maxCell} = \tilde{\phi}^1_{m\tilde{c}}, \quad \tilde{c} = \arg\max_c \tilde{\phi}^1_{mc} \qquad \text{(C.2)}$$

2. Find the machine ñ not yet assigned ($\sum_{c=1}^{n_C} z_{nc} = 0$) with which the
current object/machine has the maximum flow $\phi_{maxMach}$:

$$\phi_{maxMach} = \tilde{\phi}^1_{m\tilde{n}}, \quad \tilde{n} = \arg\max_n \left\{ \tilde{\phi}^1_{mn} \,\middle|\, \sum_{c=1}^{n_C} z_{nc} = 0 \right\} \qquad \text{(C.3)}$$
(Flowchart of the RP Heuristic: an initialisation phase selects the processes,
deletes the objects that are not allowed, and puts the allowed objects not yet
grouped in a random list; each next object to insert is then assigned to a group
found by FindGroup until all objects are inserted.)
The selected process for part i will be the one with the maximum
value m̃ij . If there are several processes with the same value m̃ij , a
draw is made.
This heuristic is applied during the initialisation phase because the choice of pro-
cesses is based on the machine grouping. As explained in section C.2.2 below,
when the heuristic is called to reconstruct a chromosome, the process selection has
already been done.
C.2 Generation
This section details the different steps in a generation. Firstly, the selection is de-
scribed, followed by the operators (crossover, mutation and inversion). The section
closes with the reconstruction phase applied after the operators.
The important point is that the genetic operators will work with the group part
of the chromosomes. The standard object part of the chromosomes serves to identify
which objects actually form which group. Note in particular that this implies that
operators will have to handle chromosomes of variable length with genes representing
the groups.
The operators defined below are applied with different probabilities on the first
or second part of the chromosome. These probabilities are computed by the program
during the encoding process and described in section 6.2.2.
(Flowchart of a generation: the tournament selection builds an ordered list of
chromosomes by repeatedly comparing the fitness values of two randomly chosen
chromosomes until the set of chromosomes is empty; the crossover, mutation and
inversion operators are then applied, with a population analysis before the
crossover and the mutation.)
C.2.1 Selection
The tournament strategy is chosen to select the chromosomes for the application of the
operators. The idea is to create an ordered list of the individuals with the best solution
always at the top, and the others ordered according to a specific method described
below. The upper part of the list will further be used when "good" chromosomes
are needed, while the lower part is used for "bad" chromosomes. An initial set of all the
individual identifiers is established. Two identifiers of this set are chosen randomly,
and their fitness values are compared. The better of the two, the one corresponding to
the individual with the better fitness value, is reinserted into the set, while the other
is pushed into the list. This method is repeated until all the identifiers in the set
are in the list; this process leads to the construction of an ordered list of individuals,
with the best one at the top of the list. The operators can then assume that the
chromosomes ranked in the top half will be used as parents for the crossovers and
that the resulting children will replace the chromosomes in the bottom half.
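A sketch of this ordered-list construction (fitness values are assumed to be maximized):

```python
import random

def tournament_order(individuals, fitness):
    """Repeated two-way tournaments: the loser of each comparison is pushed into
    the list, the winner returns to the set; reversing the pushed order yields a
    list with the best individual at the top."""
    pool = list(individuals)
    ordered = []
    while len(pool) > 1:
        a, b = random.sample(range(len(pool)), 2)
        loser = min(a, b, key=lambda i: fitness[pool[i]])
        ordered.append(pool.pop(loser))   # the loser is pushed into the list
    ordered.append(pool.pop())            # the overall winner is pushed last
    return list(reversed(ordered))        # best individual at the top
```

Only the top position is guaranteed to hold the best individual; the rest of the list is ordered approximately, which is what the tournament strategy intends.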
C.2.2 Crossover
The crossover (illustrated in figure C.8) is applied at each generation. After the tour-
nament selection, half of the chromosomes are crossed and overwrite the worse half. The
process is the following: two parents are selected in the top half of the
list. Then, the basic circular crossover (explained below) is applied between both
parents to create two children, E1 and E2. The child E1 is created by injecting the
crossing section of parent P1 at the crossing site of parent P2. The child E2
is created by reversing the process: injecting the crossing section of parent P2
at the crossing site of parent P1. This process is applied until half of the chromosomes at
the top of the list have been crossed.
The basic circular crossover is applied to the RP problem and the CF problem with probabilities of CrossRP % and CrossCF %, respectively. In this way, if the sum of CrossRP and CrossCF is greater than 100 %, the crossover can be applied simultaneously to both parts. In practice, for each crossover, a random number is drawn (rand ∈ [0, 100]). If (rand < CrossRP), the basic circular crossover is applied between the RP-parts while the CF-parts are copied. Then, if (rand > (100 − CrossCF)), the basic crossover is also applied between the CF-parts.
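The single-draw decision of which parts to cross can be sketched as follows (hypothetical parameter names, mirroring CrossRP and CrossCF):

```python
import random

def parts_to_cross(cross_rp, cross_cf):
    """Return two booleans telling whether the basic circular crossover is
    applied to the RP-part and/or the CF-part. Because a single random
    number serves both tests, cross_rp + cross_cf > 100 makes a
    simultaneous crossover of both parts possible."""
    rand = random.uniform(0, 100)
    return rand < cross_rp, rand > (100 - cross_cf)
```

With cross_rp = cross_cf = 60, for example, draws in the interval (40, 60) cross both parts at once.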
238 APPENDIX C. ANNEXE - ALGORITHM SIMOGGA
(Figure C.8: example of the basic circular crossover, where the crossing sections of parents P1 and P2 are exchanged to produce the children E1 and E2.)
The basic circular crossover (schematized in figure C.10) follows this pattern:
Step 1 A crossing site is randomly selected in each parent. Each part of the chromosome is represented by a ring. In this case, the crossing site defined by the two-point crossover can include interior groups as well as groups located at the extremities of the chromosome.
Step 2 The groups selected by the crossing site of one parent are inserted at the crossing site of the second parent. At this stage, some objects may appear in more than one group.
Step 3 An object cannot appear twice in one solution, and the newly injected objects have priority. Therefore, the existing groups containing objects that already appear in the inserted groups are eliminated. Groups left empty are also removed from the solution.
Step 4 The validity of the solution is verified with respect to the hard constraints of the cell formation problem. The processes used can differ between the two parents, but two processes of the same product cannot coexist in the solution. Moreover, a specific machine cannot appear twice. Compatibility is therefore tested between the inserted groups and the existing groups. If existing groups contain operations belonging to another process, or correspond to an inserted machine, these groups are eliminated as well.
Step 5 The objects left aside are reinserted into the solution. This is the reconstruction phase (explained in section C.2.5).
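Steps 2 and 3 (injection and elimination of colliding groups) can be sketched as follows; the representation of groups as (identifier, objects) pairs is an assumption of this sketch, and the insertion position is taken relative to the surviving groups:

```python
def inject_groups(recipient, donor_groups, site):
    """Insert the donor's crossing-section groups at `site` in the recipient.
    Injected objects have priority: recipient groups sharing an object with
    them are eliminated, and their other objects become orphans that the
    reconstruction phase (step 5) must reinsert."""
    injected = {o for _, objs in donor_groups for o in objs}
    kept, orphans = [], []
    for gid, objs in recipient:
        if injected & set(objs):               # collision: drop the group
            orphans += [o for o in objs if o not in injected]
        else:
            kept.append((gid, objs))
    child = kept[:site] + list(donor_groups) + kept[site:]
    return child, orphans
```

In the worked example below, injecting group A = {0, 5} into a parent containing C = {5, 11} eliminates C and leaves object 11 for the reconstruction phase.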
Parent 1: A D B 0 0 C B E 0 0 E C : A D B C E :: p q q p q 0 : p q
P1 A D B C E : p q
0 1 2, 6 5, 11 7, 10 A, D B, C, E
Parent 2: 0 0 0 A F A B E 0 0 E C : A F B E C :: q p p 0 p q : q p
P2 A F B E C : q p
3, 5 4 6 7, 10 11 A, F B, C, E
The crossover is applied to the first part of the chromosome. The crossing sites
are defined.
1 2
P1 A D B C E : p q
0 1 2, 6 5, 11 7, 10 A, D B, C, E
2 1
P2 A F B E C : q p
3, 5 4 6 7, 10 11 A, F B, C, E
The groups A and D belonging to the crossing site of parent 1 are injected at crossing site 1 of parent 2. The objects/operations 0 and 1, belonging to the first process of part 1, must be inserted in parent 2 and have priority for the process selection. The process selection of the new child 1 (C1) is first defined by the processes used by the parts in the crossing site (1 0 ∗ ∗ ∗). This string is completed with the process selection from parent 2 for the missing parts (1 0 1 0 1). The inserted groups are shown in bold to distinguish them from the existing groups.
2 1
C1 A F B E A D C : q p
3, 5 4 6 7, 10 0 1 11 A, F B, C, E
Objects 3 and 4 must be eliminated from the solution because they belong to an unused process. The groups are adapted, and empty groups are eliminated from the RP-part of the chromosome. Group A appears twice: group A from parent 2 is deleted because it is already included in the inserted section, which has priority.
C1 B E A D C : q p 2,5
6 7, 10 0 1 11 A, F B, C, E
The remaining objects/operations 2 and 5, belonging to the used processes, must be reinserted by the RP heuristic, respecting the capability and capacity constraints.
C1 B E A D C : q p
6,2 7, 10 0, 5 1 11 A, F B, C, E
After the reconstruction phase of the RP-part, the CF-part of the chromosome must be adapted and reconstructed. The CF heuristic is applied to regroup the new groups/machines inserted by the crossover or created by the RP reconstruction heuristic. Object/machine F, no longer used in the RP-part of the chromosome, is eliminated from the solution. Object/machine D, used in the RP-part, must be reinserted.
C1 B E A D C : q p D
6,2 7, 10 0, 5 1 11 A B, C, E
The CF heuristic is applied to reinsert the remaining object/machine D.
C1 B E A D C : q p
6,2 7, 10 0, 5 1 11 A, D B, C, E
The same procedure is applied to the groups B and A included in the crossing site of parent 2, which are injected into parent 1 at crossing site 1.
C.2.3 Mutation
The role of a mutation operator is to insert new characteristics into a population to widen the search space of the Genetic Algorithm. However, the operator must be defined as the smallest possible modification with respect to the encoding, and it must be applied as rarely as possible to let the population evolve through the crossover ([67]).
To respect these considerations, the mutation is not applied as in a classic genetic algorithm, with a probability between 1 and 5 %. Instead, it is applied when the population no longer evolves and the best chromosome of the population has not changed for a specific number of generations (AgeBest; the age of a solution is the number of generations during which the solution is not modified). This number defines the age of the best chromosome in the population, and the mutation is applied when this age is reached. The best chromosome is then mutated and replaces the worst one. In a classic genetic algorithm with binary coding, a bit is flipped. With our grouping encoding, it makes no sense to remove a single group from the solution: to modify the grouping, at least two groups must be treated so that the removed objects can be reassigned differently.
As for the crossover, two parameters, MutRP % and MutCF %, define the probabilities of applying the mutation to the first and/or the second part of the chromosome. In our case, the mutation is based on two main ideas:
• The first idea is to randomly choose a few groups and remove them from the solution. The objects attached to these groups are then reinserted into the solution.
• The second idea is specialized for the cell formation problem: it forces the use of another process for a few products. All operations of the selected process are eliminated, and the reconstruction phase is applied to the objects belonging to the new process.
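The first idea can be sketched as follows (illustrative Python; the representation of groups as (identifier, objects) pairs is an assumption of this sketch):

```python
import random

def mutate_remove_groups(groups, n_remove=2):
    """Randomly remove at least two groups (removing a single group cannot
    change the grouping) and return the surviving groups together with the
    freed objects, which the reconstruction phase must reinsert."""
    n_remove = min(max(n_remove, 2), len(groups))
    victims = set(random.sample(range(len(groups)), n_remove))
    survivors = [g for i, g in enumerate(groups) if i not in victims]
    freed = [o for i in victims for o in groups[i][1]]
    return survivors, freed
```

The freed objects are then handed to the same reconstruction heuristics as after a crossover.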
P1: A D B 0 0 C B E 0 0 E C : A D B C E :: p q q p q 0 : p q
P1 A D B C E : p q
0 1 2, 6 5, 11 7, 10 A, D B, C, E
could be mutated into
P1: ∗ D ∗ 0 0 C ∗ E 0 0 E C : D C E :: p q q p q 0 : p q
P1 D C E : p q 0, 2, 6, A, B
1 5, 11 7, 10 A, D B, C, E
The process selection string is not changed because at least one operation belonging to each part remains in the current solution (1 0 1 0 1). Otherwise, the process selection would be changed.
The chromosome is reconstructed into:
P1: C D B 0 0 C B E 0 0 E C : D C E B :: p q q p q 0 : p q
P1 D C E B : p q A
1 0, 5, 11 7, 10 2, 6 A, D B, C, E
The method used in the reconstruction phase is the same as the one used for the crossover operator. Objects/machines not used in the RP-part are eliminated from the solution, and the remaining objects/machines are reinserted.
P1: C D B 0 0 C B E 0 0 E C : D C E B :: 0 q q p q 0 : p q
P1 D C E B : p q A
1 0, 5, 11 7, 10 2, 6 D B, C, E
C.2.4 Inversion
The role of the inversion operator is to propose the same solution to the Grouping Genetic Algorithm, but presented differently. As pointed out in 6.1, a single solution may have different representations, and because crossovers work through crossing sites, the way a solution is represented influences the result of the crossover operator. Groups appearing close to each other in the group part of a chromosome are more likely to be chosen and transmitted together to the children (offspring). It is therefore important to include this operator in a Grouping Genetic Algorithm. It changes the position of the groups on the chromosome without changing the solution. Thus, for instance, the chromosome
Parent 1: A D B 0 0 C B E 0 0 E C : A D B C E :: p q q p q 0 : p q
P1 A D B C E : p q
0 1 2, 6 5, 11 7, 10 A, D B, C, E
could be inverted into
Parent 1: A D B 0 0 C B E 0 0 E C : A B D C E :: p q q p q 0 : p q
P1 A B D C E : p q
0 2, 6 1 5, 11 7, 10 A, D B, C, E
The object part of the chromosome stays unchanged: the groups are still composed of the same objects; only the position of the groups in the representation changes. In this case, if groups D and C are well-performing groups, the probability of transmitting both of them during the next crossover is improved after the inversion, since they are closer together.
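The operator itself is a simple reordering; a sketch matching the example above:

```python
def invert(groups, i, j):
    """Swap the positions of two groups in the group part of the chromosome.
    The groups keep their objects, so the encoded solution is unchanged;
    only the representation differs, which changes what future crossing
    sites are likely to transmit together."""
    inverted = list(groups)
    inverted[i], inverted[j] = inverted[j], inverted[i]
    return inverted
```

For instance, invert(["A", "D", "B", "C", "E"], 1, 2) yields the representation A B D C E used in the example.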
C.2.5 Reconstruction
After the crossover and mutation operators, all unassigned objects must be reinserted. If only one part of the chromosome has been modified, the specialized heuristic (Flow Heuristic RP or CF) is applied to reconstruct the modified part. If both parts have been crossed and/or mutated, one part is randomly selected and reconstructed with the Rand Group Heuristic, and the specialized heuristic is applied to reconstruct the second part of the chromosome. The specialized heuristic for the RP-part is slightly different in this reconstruction phase: since the process selection of the chromosome depends on the crossover, the reconstruction of the RP-part is based on the computation of the maximum flow between the object to insert and the existing cells.
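The dispatch logic of the reconstruction phase can be sketched as follows (the heuristics are passed as callables; their names are illustrative):

```python
import random

def reconstruct(chromosome, rp_modified, cf_modified,
                flow_rp, flow_cf, rand_group):
    """Reinsert unassigned objects after crossover/mutation. A single
    modified part is rebuilt by its specialized Flow heuristic; if both
    parts were modified, one randomly chosen part is rebuilt with the
    Rand Group heuristic and the other with its specialized heuristic."""
    if rp_modified and cf_modified:
        if random.random() < 0.5:
            rand_group(chromosome, "RP")
            flow_cf(chromosome)
        else:
            rand_group(chromosome, "CF")
            flow_rp(chromosome)
    elif rp_modified:
        flow_rp(chromosome)
    elif cf_modified:
        flow_cf(chromosome)
    return chromosome
```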
C.3 Evaluation
After the initialisation, and at each generation after the operators have been applied, the population is evaluated and sorted (illustrated in the flowchart C.11). On the basis of all the criteria
presented in chapter 4.3.3, each chromosome is evaluated and the parameters (ΦIntra, Γ, S, OHL, UWL and ULL) are computed. Once these parameters are evaluated for each chromosome, Promethee ranks all the chromosomes of the population (C1 to CN) together with the best chromosome (CBest) ever found. At the gth generation, this ordered list of chromosomes (s1 for the best-ranked chromosome to sN+1 for the worst-ranked one) is used to define the fitness Cg(Cc) of each chromosome c as follows:
\[
C_g(C_c) =
\begin{cases}
g + 1.1, & C_c = s_1 \text{ and } s_1 \neq C_{Best} \\
C_{g-1}(C_{Best}), & C_c = s_1 \text{ and } s_1 = C_{Best} \\
1, & C_c = s_2 \text{ and } s_1 \neq C_{Best} \\
\dfrac{|N|+1-r}{|N|}, & C_c = s_r \neq C_{Best} \text{ and } r > 2
\end{cases}
\tag{C.7}
\]
threshold. The number of generations influences the quality of the solution. In fact, the number of generations has a double effect on a Genetic Algorithm (GA):
• When the number of generations increases, the quality of the solutions found by the GA should also increase.
• When the number of generations decreases, the computational time needed also decreases.
In this section, the Generalized Formula 4.51 is applied to the cases nR = 0 to nR = 4. In the following instances, the bracketed number placed as a subscript indicates the studied repartition of the nR elements and the level of the decomposition.
• if nR = 0
\[
NS(n_R=0) = G^{(n_C,m_{UB})}_{(n_M,0,0)}
= \underbrace{\frac{\prod_{i=0}^{n_C-1}\tilde{C}^{\,m_{UB}}_{(n_M-i\,m_{UB})}}{(\eta_0)^{n_C}}}_{(000\ldots),\;n_R=0}
= \frac{\prod_{i=0}^{n_C-1}\tilde{C}^{\,m_{UB}}_{(n_M-i\,m_{UB})}}{n_C!}
\tag{D.1}
\]
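If the coefficient C̃ is read as the plain binomial coefficient (an assumption of this sketch; in the thesis it may also account for machine multiplicities), formula D.1 can be evaluated directly:

```python
from math import comb, factorial

def ns_zero_residual(n_m, n_c, m_ub):
    """Search-space size for nR = 0 (formula D.1): distribute nM machines
    over nC indistinguishable cells of exactly mUB machines each."""
    prod = 1
    for i in range(n_c):
        prod *= comb(n_m - i * m_ub, m_ub)
    return prod // factorial(n_c)
```

For example, 4 machines in 2 cells of 2 machines give the 3 expected unordered pairings.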
• if nR = 1
\[
NS(n_R=1) = G^{(n_C,m_{UB})}_{(n_M,1,1)}
= \frac{\tilde{C}^{(m_{UB}-1)}_{n_M}}{\eta_1}\times
\underbrace{G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-1),0,1)}}_{n_R=0\;(1\ldots)}
\]
\[
= \frac{\tilde{C}^{(m_{UB}-1)}_{n_M}}{\eta_1}\times
\frac{\prod_{i=0}^{n_C-2}\tilde{C}^{\,m_{UB}}_{(n_M-(m_{UB}-1)-i\,m_{UB})}}{(\eta_0)^{n_C-1}}
\quad (100\ldots)
\]
\[
NS(n_R=1) = \tilde{C}^{(m_{UB}-1)}_{n_M}\times
\frac{\prod_{i=0}^{n_C-2}\tilde{C}^{\,m_{UB}}_{(n_M-(m_{UB}-1)-i\,m_{UB})}}{(n_C-1)!}
\tag{D.2}
\]
• if nR = 2
\[
NS(n_R=2) = G^{(n_C,m_{UB})}_{(n_M,2,2)}
= \sum_{i=1}^{2}\frac{\tilde{C}^{(m_{UB}-i)}_{n_M}}{\eta_i}\times G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-i),\,2-i,\,i)}
\]
\[
= \frac{\tilde{C}^{(m_{UB}-1)}_{n_M}}{\eta_1}\times G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-1),1,1)}
+ \frac{\tilde{C}^{(m_{UB}-2)}_{n_M}}{\eta_2}\times
\underbrace{G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-2),0,2)}}_{n_R=0}
\quad (1\ldots),\;(2\ldots)
\]
\[
= \frac{\tilde{C}^{(m_{UB}-1)}_{n_M}}{\eta_1}\times\frac{\tilde{C}^{(m_{UB}-1)}_{n_M-(m_{UB}-1)}}{\eta_1}\times
\frac{\prod_{i=0}^{n_C-3}\tilde{C}^{\,m_{UB}}_{(n_M-2(m_{UB}-1)-i\,m_{UB})}}{(\eta_0)^{n_C-2}}
+ \frac{\tilde{C}^{(m_{UB}-2)}_{n_M}}{\eta_2}\times
\frac{\prod_{i=0}^{n_C-2}\tilde{C}^{\,m_{UB}}_{((n_M-(m_{UB}-2))-i\,m_{UB})}}{(\eta_0)^{n_C-1}}
\quad (1100\ldots),\;(2000\ldots)
\]
\[
NS(n_R=2)
= \frac{\tilde{C}^{(m_{UB}-1)}_{n_M}\times\tilde{C}^{(m_{UB}-1)}_{n_M-(m_{UB}-1)}}{2!}\times
\frac{\prod_{i=0}^{n_C-3}\tilde{C}^{\,m_{UB}}_{(n_M-2(m_{UB}-1)-i\,m_{UB})}}{(n_C-2)!}
+ \tilde{C}^{(m_{UB}-2)}_{n_M}\times
\frac{\prod_{i=0}^{n_C-2}\tilde{C}^{\,m_{UB}}_{((n_M-(m_{UB}-2))-i\,m_{UB})}}{(n_C-1)!}
\tag{D.3}
\]
• if nR = 3
\[
NS(n_R=3) = G^{(n_C,m_{UB})}_{(n_M,3,3)}
= \sum_{i=1}^{3}\frac{\tilde{C}^{(m_{UB}-i)}_{n_M}}{\eta_i}\times G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-i),\,3-i,\,i)}
\]
\[
= \frac{\tilde{C}^{(m_{UB}-1)}_{n_M}}{\eta_1}\, G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-1),2,1)}
+ \frac{\tilde{C}^{(m_{UB}-2)}_{n_M}}{\eta_2}\, G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-2),1,2)}
+ \frac{\tilde{C}^{(m_{UB}-3)}_{n_M}}{\eta_3}\,\underbrace{G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-3),0,3)}}_{n_R=0}
\quad (1\ldots),\;(2\ldots),\;(3\ldots)
\]
Expanding each branch down to nR = 0 gives the three repartitions:
\[
= \frac{\tilde{C}^{(m_{UB}-1)}_{n_M}}{\eta_1}\,\frac{\tilde{C}^{(m_{UB}-1)}_{n_M-(m_{UB}-1)}}{\eta_1}\,\frac{\tilde{C}^{(m_{UB}-1)}_{n_M-2(m_{UB}-1)}}{\eta_1}\times
\frac{\prod_{i=0}^{n_C-4}\tilde{C}^{\,m_{UB}}_{((n_M-3(m_{UB}-1))-i\,m_{UB})}}{(\eta_0)^{n_C-3}}
\quad (11100\ldots)
\]
\[
+ \frac{\tilde{C}^{(m_{UB}-2)}_{n_M}}{\eta_2}\,\frac{\tilde{C}^{(m_{UB}-1)}_{n_M-(m_{UB}-2)}}{\eta_1}\times
\frac{\prod_{i=0}^{n_C-3}\tilde{C}^{\,m_{UB}}_{((n_M-(m_{UB}-2)-(m_{UB}-1))-i\,m_{UB})}}{(\eta_0)^{n_C-2}}
\quad (2100\ldots)
\]
\[
+ \frac{\tilde{C}^{(m_{UB}-3)}_{n_M}}{\eta_3}\times
\frac{\prod_{i=0}^{n_C-2}\tilde{C}^{\,m_{UB}}_{((n_M-(m_{UB}-3))-i\,m_{UB})}}{(\eta_0)^{n_C-1}}
\quad (3000\ldots)
\]
so that
\[
NS(n_R=3)
= \frac{\tilde{C}^{(m_{UB}-1)}_{n_M}\,\tilde{C}^{(m_{UB}-1)}_{n_M-(m_{UB}-1)}\,\tilde{C}^{(m_{UB}-1)}_{n_M-2(m_{UB}-1)}}{3!}\times
\frac{\prod_{i=0}^{n_C-4}\tilde{C}^{\,m_{UB}}_{((n_M-3(m_{UB}-1))-i\,m_{UB})}}{(n_C-3)!}
\quad (11100\ldots)
\]
\[
+ \tilde{C}^{(m_{UB}-2)}_{n_M}\times\tilde{C}^{(m_{UB}-1)}_{n_M-(m_{UB}-2)}\times
\frac{\prod_{i=0}^{n_C-3}\tilde{C}^{\,m_{UB}}_{((n_M-(m_{UB}-2)-(m_{UB}-1))-i\,m_{UB})}}{(n_C-2)!}
\quad (2100\ldots)
\]
\[
+ \tilde{C}^{(m_{UB}-3)}_{n_M}\times
\frac{\prod_{i=0}^{n_C-2}\tilde{C}^{\,m_{UB}}_{((n_M-(m_{UB}-3))-i\,m_{UB})}}{(n_C-1)!}
\quad (3000\ldots)
\]
• if nR = 4
\[
NS(n_R=4) = G^{(n_C,m_{UB})}_{(n_M,4,4)}
= \sum_{i=1}^{4}\frac{\tilde{C}^{(m_{UB}-i)}_{n_M}}{\eta_i}\times G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-i),\,4-i,\,i)}
\]
\[
= \frac{\tilde{C}^{(m_{UB}-1)}_{n_M}}{\eta_1}\, G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-1),3,1)}
+ \frac{\tilde{C}^{(m_{UB}-2)}_{n_M}}{\eta_2}\, G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-2),2,2)}
+ \frac{\tilde{C}^{(m_{UB}-3)}_{n_M}}{\eta_3}\, G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-3),1,3)}
+ \frac{\tilde{C}^{(m_{UB}-4)}_{n_M}}{\eta_4}\,\underbrace{G^{(n_C-1,m_{UB})}_{(n_M-(m_{UB}-4),0,4)}}_{n_R=0}
\]
with the four branches labelled (1...), (2...), (3...) and (4...). Expanding each branch down to nR = 0 yields the five repartitions (11110), (21100), (22000), (31000) and (40000):
\[
NS(n_R=4)
= \frac{\tilde{C}^{(m_{UB}-1)}_{n_M}\,\tilde{C}^{(m_{UB}-1)}_{n_M-(m_{UB}-1)}\,\tilde{C}^{(m_{UB}-1)}_{n_M-2(m_{UB}-1)}\,\tilde{C}^{(m_{UB}-1)}_{n_M-3(m_{UB}-1)}}{4!}\times
\frac{\prod_{i=0}^{n_C-5}\tilde{C}^{\,m_{UB}}_{((n_M-4(m_{UB}-1))-i\,m_{UB})}}{(n_C-4)!}
\quad (11110)
\]
\[
+ \frac{\tilde{C}^{(m_{UB}-2)}_{n_M}\times\tilde{C}^{(m_{UB}-1)}_{n_M-(m_{UB}-2)}\times\tilde{C}^{(m_{UB}-1)}_{n_M-(m_{UB}-2)-(m_{UB}-1)}}{2!}\times
\frac{\prod_{i=0}^{n_C-4}\tilde{C}^{\,m_{UB}}_{((n_M-(3m_{UB}-4))-i\,m_{UB})}}{(n_C-3)!}
\quad (21100)
\]
\[
+ \frac{\tilde{C}^{(m_{UB}-2)}_{n_M}\times\tilde{C}^{(m_{UB}-2)}_{n_M-(m_{UB}-2)}}{2!}\times
\frac{\prod_{i=0}^{n_C-3}\tilde{C}^{\,m_{UB}}_{((n_M-(2m_{UB}-4))-i\,m_{UB})}}{(n_C-2)!}
\quad (22000)
\]
\[
+ \tilde{C}^{(m_{UB}-3)}_{n_M}\times\tilde{C}^{(m_{UB}-1)}_{n_M-(m_{UB}-3)}\times
\frac{\prod_{i=0}^{n_C-3}\tilde{C}^{\,m_{UB}}_{((n_M-(2m_{UB}-4))-i\,m_{UB})}}{(n_C-2)!}
\quad (31000)
\]
\[
+ \tilde{C}^{(m_{UB}-4)}_{n_M}\times
\frac{\prod_{i=0}^{n_C-2}\tilde{C}^{\,m_{UB}}_{((n_M-(m_{UB}-4))-i\,m_{UB})}}{(n_C-1)!}
\quad (40000)
\]
Appendix E
This appendix contains all the complete tables used in chapter 7.
Table E.4 summarizes the best values for each case of the four sets of alternativities. For each set of alternativities, the first column contains the percentage of cases reaching 100% of intra-cellular flow (%), the second one gives the number of generations needed to reach this value (Gen), and the last one shows the maximum intra-cellular flow found after 100 generations. The four last columns present the averages for each case, while the last row gives the average values for each set of alternativities. The last column gives an idea of the resolution time, expressed in seconds.
Table E.5 presents the final results of the algorithm with the local optimisation after 100 generations for each of the 80 cases (20 cases for each set of alternativity). The values shown are the percentage of cases reaching 100% of intra-cellular flow, the best generation number to achieve this level, and the best flow found. The last row gives the average over the 20 cases, while the three last columns present the averages for each case over the four sets of alternativity.
Tables E.6 and E.7 present, for all 20 cases of the four sets of alternativity RA and P A ((0, 0), (0, 100), (100, 0) and (100, 100)), the best values after 1000 generations (with 32 and 64 chromosomes). All values are averages computed over 10 successive runs. The first column gives the percentage of cases reaching 100% of intra-cellular flow. The second one gives the average generation needed to reach this solution (100% of grouping), while the third one indicates the best intra-cellular flow found after 1000 generations.
Table E.8 presents the results for the RP-Mogga with an integrated CF module and the CF-Mogga with an integrated RP module. The results are organised as follows: for each case (CI5, CI10, CI15 and CI20), the three sets of alternativity RA and P A ((0, 100), (100, 0) and (100, 100)) are placed in columns. For each case, the results contain the percentage of cases reaching 100% of intra-cellular flow, the average (over 10 runs) of the best intra-cellular flow after 100 generations, as well as the resolution time. These values are given for the proposed algorithm (Simogga) in the first row. The second row gives the values for the Mogga with an integrated RP heuristic, and the third one the values for the Mogga with an integrated RPMogga. The fourth and fifth rows present the results for the integrated CF modules (heuristic and CFMogga). These tables permit comparing the proposed method with the successive resolutions.
252 APPENDIX E. COMPLETE TABLES OF RESULTS
Table E.1: Average values of the intra-cellular flow for ideal case studies (RA=0 and
PA=100).
Table E.2: Average values of the intra-cellular flow for ideal case studies (RA=100
and PA=0).
Table E.3: Average values of the intra-cellular flow for ideal case studies (RA=100
and PA=100).
(RA,P A) (0, 0) (0, 100) (100, 0) (100, 100) Average
% Gen Max % Gen Max % Gen Max % Gen Max % Gen Max Time
CI1 100 0 100 100 0 100 100 0 100 100 0 100 100 0 100 0.1
CI2 100 0 100 100 1.2 100 100 0.2 100 100 0 100 100 0.35 100 0.1
CI3 100 0 100 100 1 100 100 1.1 100 100 0 100 100 0.53 100 0.1
CI4 100 0 100 100 6.3 100 100 5.9 100 100 6.2 100 100 4.6 100 0.4
CI5 100 0 100 100 6.5 100 100 24.2 100 100 34.4 100 100 16.28 100 1.1
Table E.4: Comparative table with best values for a population of 32 chromosomes after 100 generations for four sets of alternativity.
(RA,P A) (0, 0) (0, 100) (100, 0) (100, 100) Average
% Gen Max % Gen Max % Gen Max % Gen Max % Gen Max Time
CI1 100 0 100 100 0 100 100 0 100 100 0 100 100 0 100 0.1
CI2 100 0 100 100 1.2 100 100 0.1 100 100 0 100 100 0.33 100 0.2
CI3 100 0 100 100 1.2 100 100 0.2 100 100 0 100 100 0.35 100 0.1
CI4 100 0 100 100 5.3 100 100 1 100 100 4 100 100 2.58 100 0.5
CI5 100 0 100 100 6.3 100 100 1 100 100 9.2 100 100 4.13 100 0.7
CI6 100 0 100 100 7.7 100 100 9.7 100 100 9.1 100 100 6.63 100 1.2
CI7 100 0 100 100 6.4 100 100 3.6 100 100 8.9 100 100 4.73 100 1
CI8 100 0 100 100 7.8 100 100 7 100 100 31.4 100 100 11.55 100 3
CI9 100 0 100 100 15.7 100 100 7.2 100 90 28.22 99.7 97.5 12.78 99.93 4.9
CI10 100 0 100 100 17.3 100 50 38.8 99.1 30 37 97.4 70 23.28 99.13 15.3
CI11 100 0 100 100 21.3 100 50 52 99.4 20 89 95.8 67.5 40.58 98.8 26.6
CI12 100 0 100 100 7.2 100 100 7.5 100 20 52.5 98 80 16.8 99.5 6.3
CI13 100 0 100 100 15.5 100 100 20.8 100 60 48 99.4 90 21.08 99.85 8.8
CI14 100 0 100 100 16.6 100 100 24.3 100 20 66.5 93.1 80 26.85 98.28 17.6
CI15 100 0.3 100 100 20 100 100 38 100 20 80 91.8 80 34.58 97.95 27.1
CI16 100 0 100 100 29.2 100 60 32.5 98.6 40 65.5 98.1 75 31.8 99.18 39.3
CI17 100 0 100 100 13.4 100 100 23 100 70 50.71 98.9 92.5 21.78 99.73 10
CI18 100 0.2 100 100 18.3 100 70 45.4 99.4 50 50.4 95.3 80 28.58 98.68 21.7
CI19 100 0 100 100 29.7 100 60 58.7 98.5 0 - 91.5 65 29.46 97.5 47.1
CI20 100 1.8 100 100 28.5 100 40 60.3 97.6 0 - 91.8 60 30.18 97.35 56.4
Aver. 100 0.1 100 100 13.4 100 86.5 21.6 99.6 61 35 97.5 86.88 17.53 99.28 14.4
Table E.5: Comparative table with best values for the algorithm with the local optimisation after 100 generations and 32 chromosomes
for four sets of alternativity.
(RA,P A) (0, 0) (0, 100) (100, 0) (100, 100) Average
% Gen Max % Gen Max % Gen Max % Gen Max % Gen Max
CI1 100 0 100 100 17.3 100 100 0 100 100 0 100 100 4.33 100
CI2 100 0 100 100 0 100 100 0 100 100 0.3 100 100 0.08 100
CI3 100 0 100 100 0.2 100 100 1 100 100 0 100 100 0.3 100
CI4 100 0 100 100 1.1 100 100 1.6 100 100 6.8 100 100 2.38 100
CI5 100 0 100 100 4.1 100 100 1.4 100 100 7.8 100 100 3.33 100
CI6 100 0 100 100 8.3 100 100 13.6 100 100 34.5 100 100 14.1 100
Table E.6: Comparative table with best values for the algorithm with the local optimisation after 1000 generations and 32 chromosomes
for four sets of alternativity.
(RA,P A) (0, 0) (0, 100) (100, 0) (100, 100) Average
% Gen Max % Gen Max % Gen Max % Gen Max % Gen Max
CI1 100 0 100 100 10.4 100 100 0 100 100 0 100 100 2.6 100
CI2 100 0 100 100 0 100 100 0 100 100 0 100 100 0 100
CI3 100 0 100 100 0 100 100 0.2 100 100 0 100 100 0.05 100
CI4 100 0 100 100 0.1 100 100 0.8 100 100 2.6 100 100 0.88 100
CI5 100 0 100 100 3.3 100 100 1.6 100 100 6.2 100 100 2.78 100
CI6 100 0 100 100 5.9 100 100 9.7 100 100 12.4 100 100 7 100
CI7 100 0 100 100 4.9 100 100 2.5 100 100 6.6 100 100 3.5 100
CI8 100 0 100 100 6 100 100 8.6 100 100 11.1 100 100 6.43 100
CI9 100 0 100 100 6.8 100 100 8.5 100 100 23.9 100 100 9.8 100
CI10 100 0 100 100 13.8 100 90 237.1 99.9 90 43.6 99.9 95 73.63 99.95
CI11 100 0 100 100 13.4 100 70 185 99.4 80 78.1 99.6 87.5 69.13 99.75
CI12 100 0 100 100 17.4 100 100 6.3 100 90 21.1 99.8 97.5 11.2 99.95
CI13 100 0 100 100 6.3 100 100 32.4 100 100 39.3 100 100 19.5 100
CI14 100 0 100 100 15.8 100 100 19.1 100 80 76.1 98.9 95 27.75 99.73
CI15 100 0 100 100 23.2 100 100 39.5 100 100 145.2 100 100 51.98 100
CI16 100 0 100 100 16.3 100 100 46.8 100 70 143.6 99.6 92.5 51.68 99.9
CI17 100 0 100 100 28.1 100 100 18.2 100 80 51.8 99.7 95 24.53 99.93
CI18 100 0 100 100 10.9 100 100 105.9 100 100 100.5 100 100 54.33 100
CI19 100 0 100 100 21.3 100 100 48.6 100 90 214.2 99.6 97.5 71.03 99.9
CI20 100 7.4 100 100 26.3 100 90 112.8 99.7 70 187.9 98.7 90 83.6 99.6
Aver. 100 0.4 100 100 11.5 100 97.5 44.2 100 92.5 58.2 99.8 97.5 28.58 99.95
Table E.7: Values of the generation and the percentage of cases reaching 100% of intra-cellular flow and the best intra-cellular flows after
1000 generations of the Simogga and a population of 64 chromosomes.
(RA,P A) (0, 100) (100, 0) (100, 100) Average
% flow Time % flow Time % flow Time % flow Time
CI5 2P 100 100 0.9 100 100 1 100 100 2.5 100 100 1.5
HeurRP 100 100 2.2 100 100 0.1 100 100 1.1 100 100 1.1
RPGGA 100 100 112 100 100 35.5 100 100 70.2 100 100 72.6
HeurCF 100 100 1.6 100 100 0.2 100 100 0.9 100 100 0.9
CFGGA 100 100 198.3 100 100 19.8 100 100 113.3 100 100 110.5
CI10 2P 100 100 5.3 10 87.7 12.1 10 92.4 25.9 40 93.4 14.4
HeurRP 100 100 10.8 60 98.8 7.6 10 99.3 20 56.7 99.4 12.8
Table E.8: Best values found with the Simogga and the Mogga with an integrated module (RP and CF).
(RA,P A) (0, 0) (0, 100) (100, 0) (100, 100) Average
% Gen Max % Gen Max % Gen Max % Gen Max % Gen Max Time
CI1 100 0 100 100 3 100 100 1 100 100 2 100 100 1.5 100 0.2
CI2 100 0 100 100 2.1 100 100 1 100 100 2.9 100 100 1.5 100 0.3
CI3 100 0 100 100 6.1 100 100 1.5 100 100 1.7 100 100 2.33 100 0.3
CI4 100 0 100 100 8.5 100 100 1.1 100 100 10.9 100 100 5.13 100 0.5
CI5 100 0 100 100 11 100 100 2.3 100 100 6.5 100 100 4.95 100 0.7
CI6 100 0 100 100 11.3 100 100 6.9 100 100 16.5 100 100 8.68 100 1.5
CI7 100 0 100 100 12.2 100 100 1.2 100 100 18.7 100 100 8.03 100 1.4
CI8 100 0 100 100 11.4 100 100 6.3 100 100 32.9 100 100 12.65 100 2.7
CI9 100 0 100 100 24.6 100 100 8 100 90 33.11 99.9 97.5 16.43 99.98 4.6
CI10 100 0.2 100 100 15.9 100 50 66.6 99.6 90 24 100 85 26.68 99.9 16.9
CI11 100 0 100 100 16.7 100 60 48 99.7 60 37.67 99.6 80 25.59 99.83 31.2
CI12 100 0 100 100 15.4 100 100 7.1 100 70 20.43 99.2 92.5 10.73 99.8 3.3
CI13 100 0 100 100 22.8 100 90 23.8 99.8 100 21.3 100 97.5 16.98 99.95 5.9
CI14 100 2 100 100 20.4 100 100 17.2 100 80 43.38 99.3 95 20.75 99.83 19.7
CI15 100 1.8 100 100 18.1 100 100 21.6 100 100 36.9 100 100 19.6 100 20.6
CI16 100 2 100 100 31.5 100 80 24.6 99.8 80 48.63 99.9 90 26.68 99.93 45.4
CI17 100 0.1 100 100 12 100 100 10.7 100 90 44.78 99.9 97.5 16.9 99.98 13
CI18 100 0 100 100 19.9 100 70 32.4 99.6 80 32.38 99.6 87.5 21.17 99.8 25.1
CI19 100 0 100 100 24.8 100 100 42 100 30 78.67 98.2 82.5 36.37 99.55 65.3
CI20 100 2.4 100 100 28.5 100 50 55 98.5 50 81.8 99 75 41.93 99.38 84.5
Aver. 100 0.4 100 100 15.8 100 90 18.9 99.9 86 29.8 99.7 94 16.23 99.9 17.2
1000-32 100 0.5 100 100 13.8 100 98 42.3 100 84.5 139.6 99.6 95.63 49.05 99.9 54.78
1000-64 100 0.4 100 100 11.5 100 97.5 44.2 100 92.5 58.2 99.8 97.5 28.58 99.95 66.63
Table E.9: Comparative table with best values for the final algorithm after all adaptations after 100 generations for four sets of
alternativity.
(RA,P A) (0, 100) (100, 0) (100, 100) Average
% flow Time % flow Time % flow Time % flow Time
CI5 2P 100 100 1.2 100 100 0.3 100 100 1.2 100 100 0.9
HeurRP 100 100 2 100 100 0.3 100 100 1.7 100 100 1.3
RPGGA 100 100 112 100 100 35.5 100 100 70.2 100 100 72.6
HeurCF 100 100 2 100 100 0.2 100 100 1.1 100 100 1.1
CFGGA 100 100 198.3 100 100 19.8 100 100 113.3 100 100 110.5
CI10 2P 100 100 9.3 50 99.6 32.7 90 100 25.3 80 99.9 22.4
HeurRP 100 100 12.7 20 99.2 31.6 40 100 37.5 53.3 99.7 27.3
Table E.10: Best values found with the final Simogga and the Mogga with an integrated module (RP and CF).
Appendix F
P 0 0 0 0 0 0 0
M 1 2 3 4 5 6 7
1 1 1 1 1
2 1 1
3 1 1 1
4 1 1 1
5 1 1
P 0 0 0 0 0 0 0
M 1 2 3 4 5 6 7
1 1 1 1 1
2 1 1 1 1
3 1 1 1 1
4 1 1 1 1
5 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
M 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8
1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1
4 1 1 1 1 1 1 1 1 1 1 1 1
5 1 1 1 1 1
262 APPENDIX F. ANNEXE - CASE STUDIES
P 0 0 0 0 0 0 0 0
M 1 2 3 4 5 6 7 8
1 1 1 1
2 1 1 1 1 1 1 1
3 1 1 1
4 1 1
5 1 1 1 1 1
6 1 1
P 0 0 0 0 0 0 0 0
M 1 2 3 4 5 6 7 8
1 1 1 1
2 1 1
3 1 1 1 1
4 1 1
5 1 1 1 1
6 1 1 1
P 0 0 0 0 0 0 0 0
M 1 2 3 4 5 6 7 8
1 1 1 1
2 1 1
3 1 1 1 1
4 1 1 1
5 1 1 1
6 1 1
7 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1
M 1 2 3 4 5 6 7 8 9 0 1
1 1 1 1
2 1 1 1
3 1 1
4 1 1 1
5 1 1
6 1 1 1 1 1
7 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1
M 1 2 3 4 5 6 7 8 9 0 1
1 1 1 1
2 1 1 1
3 1 1 1 1
4 1 1
5 1 1 1
6 1 1 1
7 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1
M 1 2 3 4 5 6 7 8 9 0 1 2
1 1 1 1 1
2 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1
4 1 1 1 1 1
5 1 1 1 1
6 1 1 1 1
7 1 1
8 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2
M 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0
1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1
7 1 1 1 1 1 1 1 1
8 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2
M 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0
1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1 1 1 1 1 1 1
7 1 1 1 1 1 1 1 1 1 1 1
8 1 1 1 1 1 1 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1
M 1 2 3 4 5 6 7 8 9 0
1 1 1
2 1 1 1
3 1 1
4 1
5 1 1
6 1 1 1
7 1 1
8 1 1
9 1 1 1 1
10 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1
M 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
1 1 1 1 1
2 1 1 1 1 1
3 1 1 1 1
4 1 1 1 1
5 1 1 1 1 1
6 1 1 1 1 1
7 1 1 1 1 1
8 1 1 1 1 1
9 1 1 1 1
10 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
M 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9
1 1 1 1 1 1
2 1 1 1
3 1 1 1
4 1 1 1 1 1 1 1 1
5 1 1 1
6 1 1 1 1 1 1 1 1 1
7 1 1 1 1 1 1 1 1 1 1 1
8 1 1 1 1 1 1 1 1
9 1 1 1 1 1 1 1 1
10 1 1 1 1 1 1 1
11 1 1 1 1 1 1
12 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2
M 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4
1 1 1
2 1 1
3 1 1 1
4 1 1 1 1 1 1
5 1 1 1 1 1
6 1 1 1 1 1 1 1
7 1 1 1 1 1
8 1 1 1 1 1 1 1
9 1 1 1 1 1
10 1 1
11 1 1 1
12 1 1
13 1 1 1 1 1
14 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2
M 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4
1 1 1
2 1 1
3 1 1 1
4 1 1 1 1 1 1
5 1 1 1 1 1
6 1 1 1 1 1 1 1 1
7 1 1 1 1 1
8 1 1 1 1 1 1 1 1
9 1 1 1 1 1
10 1 1
11 1 1 1 1
12 1 1
13 1 1 1 1 1
14 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2
M 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4
1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1
3 1 1 1 1 1 1 1
4 1 1
5 1
6 1 1 1 1 1 1 1
7 1 1 1
8 1 1 1 1 1 1 1
9 1 1
10 1 1 1 1 1 1
11 1 1 1 1 1 1 1 1
12 1 1 1 1 1 1
13 1 1 1 1 1
14 1 1 1 1 1 1
15 1 1 1 1
16 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3
M 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0
1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1
3 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1 1 1
6 1 1 1 1
7 1 1 1 1 1 1 1 1
8 1 1 1 1 1 1 1
9 1 1 1 1 1 1
10 1 1 1 1 1 1 1 1 1
11 1 1 1 1 1 1 1 1
12 1 1 1 1 1 1 1 1
13 1 1 1 1 1 1
14 1 1 1 1 1 1 1 1
15 1 1 1 1 1 1 1
16 1 1 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0
r 1 1 1 2 2 3 3 4 4 5 5
M a b c a b a b a b a b
1 1 1 1 1 1 1
2 1 1 1 1
3 1 1 1 1 1
4 1 1 1 1 1 1
P 1 1 2 2 3 3 4 4
M a b a b a b a b
1 1 1 1 1
2 1 1 1 1
3 1 1 1 1
4 1 1 1 1
P 1 1 2 3 3 4 4 5 5 6 7
M a b a a b a b a b a a
1 1 1 1 1 1 1
2 1 1 1 1 1 1
3 1 1 1 1
4 1 1 1 1 1
5 1 1 1 1 1 1
P 1 1 1 2 3 4 4 5 5 6 6 7 7 7 8
M a b c a a a b a b a b a b c a
1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1
4 1 1 1 1
5 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1 1 1 1
P 1 1 1 2 2 3 3 4 4 5 5 6 6
M a b c a b a b a b a b a b
1 1 1
2 1 1 1 1
3 1 1 1 1 1 1 1 1
4 1 1 1 1 1
5 1 1 1 1 1
6 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
r 1 1 2 2 3 4 4 5 5 5 5 6 6 7 8 8 9 9 0 0
M a b a b a a b a b c c a b a a b a b a b
1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
r 1 1 2 2 3 3 4 4 5 5 5 5 6 6 7 7 8 8 8 9 9 0 0
M a b a b a b a b a b c c a b a b a b c a b a b
1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1 1
7 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1
r 1 1 2 2 2 3 3 4 4
5 5 5 6 6 7 7 8 8 9 9 9 0 0 1 1 1 2 2 2 3 4 4
M a b a b c a b a b
a b c a b a b a b a b c a b a b c a b c a a b
1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1 1 1 1
7 1 1 1 1 1 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
r 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 0 0 1 1 2 2 3 3
M a b a b a b a b a b a b a b a b a b a b a b a b a b
1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1
7 1 1 1 1 1 1 1
8 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
r 1 1 1 2 2 3 3 3 4 4 5 5 6 6 6 7 7 7 8 8 9 9 0 0
M a b c a b a b c a b a b a b c a b c a b a b a b
1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1
5 1 1 1 1 1 1
6 1 1 1
7 1 1 1 1 1 1 1 1 1 1 1 1
8 1 1 1 1 1 1 1 1
9 1 1 1 1 1 1
10 1 1 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1
r 1 1 2 2 3 3 4 4 5 5 6 6 6 7 7 8 8 9 9 0 0 0
M a b a b a b a b a b a b c a b a b a b a b c
1 1 1 1 1 1 1
2 1 1 1 1 1 1
3 1 1 1 1 1
4 1 1 1 1 1
5 1 1 1 1 1 1 1
6 1 1 1 1
7 1 1 1 1 1 1
8 1 1 1 1 1
9 1 1 1 1 1
10 1 1 1 1 1 1
11 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2
r 1 2 2 3 4 5 5 6 7 8 9 0 1 2 2 3 4 4 5 6 7 7 7 8 9 0
M a a b a a a b a a a a a a a b a a b a a a b c a a a
1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1 1 1
7 1 1 1 1 1 1 1 1 1 1 1
8 1 1 1 1 1 1 1 1
9 1 1 1 1 1 1 1 1
10 1 1 1 1 1 1 1 1 1
11 1 1 1 1 1 1 1 1 1
12 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1
r 1 2 2 3 3 4 4 5 5 6 7 7 8 8 9 9 0 0 1 2 2 3 3 4 4 5 5
M a a b a b a b a b a a b a b a b a b a a b a b a b a b
1 1 1
2 1 1 1 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
7 1 1 1 1 1 1 1 1 1 1 1 1 1
8 1 1 1 1 1 1 1 1 1 1 1
9 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
10 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
11 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
12 1 1 1 1 1 1 1 1 1 1
13 1 1 1 1 1 1 1 1 1 1 1
14 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
15 1 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
r 1 1 1 2 2 3 3 3 4 4 5 5 6 6 7 7 7 8 8 8 9 9 9 0 0 0 0 1 1 2 3 3 4 4 5 5 5 6 6
M a b c a b a b c a b a b a b a b c a b c a b c a b c c a b a a b a b a b c a b
1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1
4 1 1 1 1
5 1 1 1 1 1 1
6 1 1 1 1 1 1
7 1 1 1 1 1 1 1 1
8 1 1 1 1 1 1 1 1
9 1 1 1 1
10 1 1 1 1 1 1
11 1 1 1
12 1 1 1 1 1
13 1 1 1 1 1 1 1
14 1 1 1 1 1 1
15 1 1 1 1 1 1 1
16 1 1 1 1
17 1 1 1 1 1 1 1 1
18 1 1 1 1 1 1
19 1 1 1 1 1 1 1 1
20 1 1 1 1 1 1
21 1 1 1 1 1 1 1
22 1 1 1 1 1 1
23 1 1 1 1 1 1 1 1 1
24 1 1 1 1 1 1 1 1
25 1 1 1 1 1 1 1
26 1 1 1 1 1 1
P 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
r 7 7 7 8 8 8 9 9 9 9 0 0 0 1 1 1 2 2 3 3 4 4 5 5 6 6 6 7 7 7 8 8
M a b c a b c a b c c a b c a b c a b a b a b a b a b c a b c a b
1 1 1 1 1 1
2 1 1 1 1
3 1 1 1
4 1 1 1
5 1 1 1
6 1 1 1 1 1
7 1 1 1 1 1 1 1
8 1 1 1
9 1 1 1
10 1 1 1 1 1 1
11 1 1 1 1
12 1 1 1 1
13 1 1 1 1 1 1
14 1 1 1 1 1
15 1 1 1 1 1
16 1 1 1 1 1
17 1 1 1
18 1 1 1 1 1 1
19 1 1 1 1 1 1 1 1
20 1 1 1 1 1
21 1 1 1 1
22 1 1 1
23 1 1 1 1 1 1
24 1 1 1 1 1 1
25 1 1 1 1 1 1
26 1 1 1 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
r 1 1 2 2 3 3 3 4 4 4 5 5 5 6 6 7 8 8 8 9 9 9 0 0 0 1 1 1 2 2 3 4 4 5 5 6 6 7 7 7
M a b a b a b c a b c a b c a b a a b c a b c a b c a b c a b a a b a b a b a b c
1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1
3 1 1
4 1 1 1 1
5 1 1 1 1 1 1
6 1 1 1 1
7 1 1 1 1 1 1 1 1
8 1 1 1 1 1 1 1
9 1 1 1 1 1 1 1 1 1 1
10 1 1 1
11 1 1 1 1 1 1 1
12 1 1 1 1 1 1 1 1 1 1
13 1 1 1 1 1 1 1 1 1 1 1
14 1 1
15 1 1 1
16 1 1 1 1 1 1
17 1 1 1 1 1 1 1 1
18 1 1
19 1 1 1 1 1 1
20 1 1 1 1
21
22 1 1 1 1 1 1
23 1 1 1 1 1 1 1 1 1 1 1 1 1 1
24 1 1
25
26 1
27
28
29
30 1
275
276 APPENDIX F. ANNEXE - CASE STUDIES
P 0 0 0 0 0 0
r 1 2 3 4 5 6
Nb T a a a a a a
1 1 1 1 1 1
2 2 1 1 1
1 4 1 1 1 1
1 5 1 1 1 1
2 6 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1
r 1 1 2 3 4 4 5 6 7 8 9 0 0 1 2 3 4 5 6 7 8 9
Nb T a b a a a b a a a a a a b a a a a a a a a a
1 1 1 1 1 1 1 1 1 1 1
1 2 1 1
1 3 1 1
1 4 1 1 1 1 1 1 1 1 1 1
1 5 1 1
1 6 1 1 1 1 1 1 1
1 7 1 1 1 1 1 1 1 1 1 1 1 1 1
1 8 1 1 1 1 1 1 1 1 1 1 1
1 9 1 1 1 1 1 1 1 1 1
1 10 1 1 1 1 1 1 1
1 11 1 1 1 1 1 1
1 12 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
r 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9
Nb T a a a a a a a a a a a a a a a a a a a
2 1 1 1 1 1 1 1 1 1 1
2 2 1 1 1 11 1 1 1 1 1 1
1 3 1 1
2 4 1 1 1 1 1 1 1 1 1
1 5 1 1 1 1 1 1 1 1 1
2 6 1 1 1 1 1 1 1 1
2 7 1 1 1
1 8 1 1 1 1 1 1 1
2 9 1 1 1 1 1 1 1
2 10 1 1 1 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2
r 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0
Nb T a a a a a a a a a a a a a a a a a a a a
1 1 1 1 1
2 2 1 1 1 1 1 1 1 1 1 1 1 1
1 3 1 1 1 1 1 1 1 1
1 4 1 1 1 1 1 1 1
2 5 1 1 1 1 1
1 1 1 1
1 6 1 1 1 1 1 1 1 1
1 7 1 1 1 1 1 1 1
1 8 1 1 1 1 1 1
1 9 1 1 1
1 10 1 1 1 1 1 1 1
1 11 1 1 1 1 1 1 1 1 1
1 12 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 3
r 1 1 1 2 3 3 4 5 6 6 7 8 8 8 9 0 1
1 2 2 3 4 5 6 6 7 8 9 0 1 2 3 4 5 6 7 8 9 9 0
Nb T a b c a a b a a a b a a b c a a a
b a b a a a a b a a a a a a a a a a a a a b a
1 1 1 1 1 1 1
1 2 1 1 1 1 1 1 1 1 1 1 1 1
2 3 1 1 1 1 1 1
1 4 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 5 1 1 1 1 1 1 1 1
1 6 1 1 1 1 1 1 1 1 1 1 1
2 7 1 1 1 1 1 1 1 1 1 1 1
1 8 1 1 1 1 1 1 1 1
1 9 1 1 1 1 1 1 1 1 1 1
1 10 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 11 1 1 1 1 1 1 1 1 1
1 12 1 1 1 1 1 1 1 1 1 1 1 1 1
1 13 1 1 1 1 1 1 1
1 14 1 1 1 1 1 1 1 1 1
1 15 1 1 1 1 1 1 1 1 1 1
1 16 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2
r 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0
Nb T a a a a a a a a a a a a a a a a a a a a
1 1 1 1 1 1
1 2 1 1 1 1
1 3 1 1 1
1 4 1 1 1
1 5 1 1 1
2 6 1 1 1 1 1 1 1 1 1
1 7 1 1 1
1 8 1 1 1
1 9 1 1 1 1
1 10 1 1
1 11 1 1 1 1 1
1 12 1 1
1 13 1 1 1
1 14 1 1 1
1 15 1 1 1 1
1 16 1 1 1
3 17 1 1 1 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1
r 1 2 3 4 5 6 7 8 9 0
Nb T a a a a a a a a a a
1 1 1
2 2 1
1 3 1
2 4 1
5 5 1
1 6 1
1 7 1
1 8 1
1 9 1
1 10 1
1 11 1
2 12 1
1 13 1
1 14 1
1 15 1
1 16 1
2 17 1
1 18 1 1
1 19 1 1 1
1 20 1 1
2 21 1 1 1 1
2 22 1 1
1 23 1 1 1
2 24 1 1
3 25 1 1 1
2 26 1
2 27 1 1
3 28 1 1 1
2 29 1 1
3 30 1 1 1 1
P 0 0 0 0 0 0 0 0 0 1 1 1
r 1 2 3 4 5 6 7 8 9 0 1 2
Nb T a a a a a a a a a a a a
1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1
6 1 1
P 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1
r 1 2 2 3 3 4 4 4 5 6 6 7 8 8 9 0 1 1 1 2 2 3 4 4 5 5 5
Nb T a a b a b a b c a a b a a b a a a b c a b a a b a b c
2 1 1 1 1 1 1 1 1 1 1 1
1 2 1 1 1 1 1 1 1 1
1 3 1 1 1 1 1 1 1 1
1 4 1 1 1 1
1 5 1 1 1 1 1 1 1 1 1 1 1 1 1
1 6 1 1 1 1 1 1 1
1 7 1 1 1 1 1 1 1 1 1
1 8 1 1 1 1 1 1 1
2 9 1 1 1 1 1 1 1 1 1 1 1 1
2 10 1 1 1 1 1 1 1 1
P 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1
r 1 1 2 2 3 3 3 4 4 5 5 6 6 7 7
8 8 9 9 0 0 1 1 2 2 3 3 4 4 5 5 6 6
Nb T a b a b a b c a b a b a b a b
a b a b a b a b a b a b a b a b a b
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1
2 2 1 1 1 1 1 1 1 1 1
1 1 1
2 3 1 1 1 1 1 1 1 1 1
1 1
1 4 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 5 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 6 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 7 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 8 1 1 1 1 1 1 1 1 1 1 1 1
2 9 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 10 1 1 1 1 1 1 1 1 1 1 1
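Each case study above is a binary machine-part incidence matrix whose columns may be duplicated once per alternative routing. A minimal sketch of how such an instance can be held and queried (the machine, part, and routing names here are purely illustrative and not taken from the case studies):

```python
import numpy as np

# Illustrative miniature instance: each column of the incidence matrix
# corresponds to one routing of one part (part label, routing label).
routings = [("P1", "a"), ("P1", "b"), ("P2", "a"), ("P3", "a")]
incidence = np.array([
    [1, 0, 1, 0],  # machine 1
    [0, 1, 0, 1],  # machine 2
    [1, 1, 0, 0],  # machine 3
])

# Machines (0-indexed) visited by routing b of part P1:
col = routings.index(("P1", "b"))
visited = [m for m in range(incidence.shape[0]) if incidence[m, col]]
```

Selecting a subset of columns (one routing per part) reduces such a matrix to an ordinary machine-part incidence matrix for cell formation.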
Acronyms
AI Artificial intelligence
Artificial intelligence (AI) is the intelligence of machines and the branch
of computer science that aims to create it.
CF Cell Formation
Cell formation addresses the grouping of machines into cells and of parts
into associated families, i.e. the cell formation problem.
CM Cellular Manufacturing
A style of manufacturing by which parts and products are manufactured
in cell units by a single operator who performs much of the work on that
part or product. This manufacturing style is especially effective when
carrying out high-mix, low-volume production.
EE Exceptional Element
An exceptional element is a 1 lying outside the diagonal blocks of an
incidence matrix.
281
282 APPENDIX G. ACRONYMS
EA Evolutionary Algorithm
Evolutionary Algorithms include a variety of related algorithms that are
based on the processes of evolution in nature.
GA Genetic Algorithm
A meta-heuristic for solving optimization problems, based on the concept
of natural selection as first described by Darwin [51].
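As a concrete illustration of the concept only (not of the operators or encoding used in this thesis), a minimal genetic algorithm with tournament selection, one-point crossover, and bit-flip mutation can be sketched on a toy objective:

```python
import random

random.seed(0)  # reproducibility of the sketch

def fitness(ind):
    # Toy objective: maximise the number of 1-bits in a binary string.
    return sum(ind)

def evolve(pop_size=20, length=16, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            # Binary tournament selection: keep the fitter of two candidates.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, length)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

With these toy parameters the population typically converges close to the all-ones string; for cell formation the same loop is used with a grouping encoding and a cell-quality objective.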
GE Group Efficacy
Group efficacy is a measure of the quality of the block diagonalisation
of an incidence matrix, introduced in [142].
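Assuming the grouping efficacy of [142], Γ = (e − e_e)/(e + e_v), where e is the number of 1s in the incidence matrix, e_e the number of exceptional elements outside the diagonal blocks, and e_v the number of voids (0s) inside them, its computation can be sketched as follows (function and variable names are illustrative):

```python
import numpy as np

def grouping_efficacy(A, machine_cell, part_cell):
    # Gamma = (e - e_out) / (e + e_void): e = total 1s in A, e_out = 1s
    # outside the diagonal blocks (exceptional elements), e_void = 0s
    # inside the diagonal blocks (voids).
    inside = np.equal.outer(machine_cell, part_cell)  # True inside the blocks
    e = A.sum()
    e_out = A[~inside].sum()
    e_void = np.count_nonzero(inside) - A[inside].sum()
    return (e - e_out) / (e + e_void)

# Perfectly block-diagonal matrix: two cells, no exceptional elements, no voids.
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
eff = grouping_efficacy(A, [0, 0, 1, 1], [0, 0, 1, 1])  # 1.0
```

Efficacy decreases both when 1s fall outside the blocks and when the blocks are sparse, which is why it is preferred over measures that reward large, loosely filled cells.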
GT Group Technology
Group Technology is a manufacturing technique in which machines producing
parts or products with similar characteristics are grouped into cells to
achieve high levels of repeatability.
[2] G.K. Adil, D. Rajamani, and D. Strong. A mathematical model for cell for-
mation considering investment and operational cost. European Journal of
Operational Research, 69:330–341, 1993.
[3] G.K. Adil, D. Rajamani, and D. Strong. Cell formation considering alternate
routings. International Journal of Production Research, 34:1361–1380, 1996.
[5] A. Ahraria, M. Shariat-Panahia, and A.A. Atai. Gem: A novel evolutionary op-
timization method with improved neighborhood search. Applied Mathematics
and Computation, 210(2):376–386, 2009.
[6] M.S. Akturk and A. Turkcan. Cellular manufacturing system design using a
holonistic approach. International Journal of Production Research, 38(1):2327–
2347, 2000.
[7] M.R. Anderberg. Cluster Analysis for Applications. Academic Press, Inc., New
York, NY., 1973.
[8] R. Askin and S.B. Subramaniam. A cost-based heuristic for group technology
configuration. International Journal of Production Research, 25(1):101–113,
1987.
[9] R.G. Askin and K.S. Chiu. A graph partitioning procedure for machine as-
signment and cell formation in group technology. International Journal of
Production Economics, 28:1555–1572, 1990.
[10] R.G. Askin, J.B. Creswell, J.B. Goldberg, and A.J. Vakharia. A hamiltonian
path approach to reordering the part-machine matrix for cellular manufactur-
ing. International Journal of Production Research, 29:1081–1100, 1991.
[11] R.G. Askin, H.M. Selim, and A.J. Vakharia. A methodology for designing
flexible cellular manufacturing systems. IIE Transactions, 29(7):599–610, 1997.
285
286 BIBLIOGRAPHY
[12] R.G. Askin and C.R. Standridge. Modeling and Analysis of Manufacturing
Systems. Wiley and Sons, New York, 1993.
[13] R.J. Askin and A.J. Vakharia. Group technology-cell formation and operation.
In Automated Factory Handbook: Technology and Management, pages 317–
366. Cleland, D.I., Bidanda, B.(Eds.), TAB Books, Inc., New York, 1990.
[14] H. Aytug, M. Knouja, and F.E. Vergara. Use of genetic algorithms to solve
production and operations management problems: A review. International
Journal of Production Research, 41(17):3955–4009, 2003.
[15] J.D. Bagley and H. Steudel. The behavior of adaptive systems which em-
ploy genetic and correlation algorithms. Abstracts International, University of
Michigan, 28(12):5106, 1987.
[17] G.H. Ball and D.J. Hall. A Novel Method of Data Analysis and Classification.
PhD thesis, Stanford University, Stanford, CA, 1965.
[19] A. Ballakur and H. Steudel. A within-cell utilization based heuristic for de-
signing cellular manufacturing systems. International Journal of Production
Research, 25(5):639–665, 1987.
[20] A. Basu, N. Hyer, and A. Shtub. An expert system based approach to manufac-
turing cell design. International Journal of Production Research, 33(10):2739–
2755, 1995.
[21] A. Baykasoglu and N.N.A. Gindy. Mocacef 1.0: Multiple objective capability
based approach to form part-machine groups for cellular manufacturing appli-
cation. International Journal of Production Research, 38(5):1133–1161, 2000.
[22] A. Baykasoglu, N.N.Z. Gindy, and R.C. Cobb. Capability based formulation
and solution of multiple objective cell formation problems using simulated an-
nealing. Integrated Manufacturing System, 12:258–274, 2001.
[23] J.E. Beasley and P.C. Chu. A genetic algorithm for the set covering problem.
European Journal of Operational Research, 94(2):392–404, 1996.
[25] T.O. Boucher, A. Yalcin, and T. Tai. A linear formulation of the machine-
part cell formation problem. International Journal of Production Research,
29(2):343–356, 1991.
[26] P.S. Bradley and U.M. Fayyad. Refining initial points for k-means clustering.
In Proceeding 15th International Conf. on Machine Learning, page 9. Morgan
Kaufmann, San Francisco, CA, 1999.
[27] J.P. Brans and B. Mareschal. The promcalc & gaia decision support system
for multicriteria decision aid. Decision Support Systems, 12:297–310, 2004.
[28] E.C. Brown and R.T. Sumichrast. Cf-gga: a grouping genetic algorithm for the
cell formation problem. International Journal of Production Research, 39(16):3651–
3669, 2001.
[29] E.C. Brown and R.T. Sumichrast. Evaluating performance advantages of group-
ing genetic algorithms. Engineering Applications of Artificial Intelligence, 18:1–
12, 2005.
[30] J.L. Burbidge. Production flow analysis. Production Engineer, 42:742–752,
1963.
[31] J.L. Burbidge. Production flow analysis. Production Engineer, 50(4/5):139–
152, 1971.
[32] J.L. Burbidge. The Introduction of Group Technology. Wiley and Sons, New
York, 1975.
[33] J.L. Burbidge. Group Technology in Engineering Industry. London: Mechanical
Engineering Publications, 1979.
[34] J.L. Burbidge. Change to group technology: Process organization is obsolete.
International Journal of Production Research, 30:1209–1219, 1989.
[35] J.L. Burbidge. Production Flow Analysis for Planning Group Technology.
Clarendon Press, Oxford, England, 1989.
[36] G.A. Carpenter and S. Grossberg. A massively parallel architecture for a self-
organizing neural pattern recognition machine. Computer Vision, Graphics and
Image Processing, 37:54–115, 1987.
[37] A.S. Carrie. Numerical taxonomy applied to group technology and plant layout.
International Journal of Production Research, 11(4):399–416, 1973.
[38] C. Caux, R. Bruniaux, and H. Pierreval. Cell formation with alternative
process plans and machine capacity constraints: a new combined approach.
International Journal of Production Economics, 64(1-3):279–284, 2000.
[39] C.Y. Chan and D.A. Milner. Direct clustering algorithm for group formation
in cellular manufacture. Journal of Manufacturing Systems, 1(1):65–75, 1982.
[40] F.T.S. Chan, K.W. Lau, P.L.Y. Chan, and K.L. Choy. Two-stage ap-
proach for machine-part grouping and cell layout problems. Robotics and
Computer-Integrated Manufacturing, 22:217–238, 2006.
[41] M.P. Chandrasekharan and R. Rajagopalan. An ideal seed non-hierarchical
clustering algorithm for cellular manufacturing. International Journal of
Production Research, 24(2):451–464, 1986.
[45] J.S. Chen and S.S Heragu. Stepwise decomposition approaches for large scale
cell formation problems. European Journal of Operational Research, 113:64–79,
1999.
[46] C.H. Cheng, Y.P. Gupta, W.H. Lee, and K.F. Wong. A tsp-based heuristic for
forming machine groups and part families. International Journal of Production
Research, 36(5):1325–1338, 1998.
[49] A.G. Cunha, P. Oliveira, and J.A. Covas. Use of genetic algorithms in multicri-
teria optimization to solve industrial problems. In Proceedings of the Seventh
International Conference on Genetic Algorithms, pages 682–688, 1997.
[50] N.E. Dahel and S.B. Smith. Designing flexibility into cellular manufacturing
systems. International Journal of Production Research, 31:933–945, 1993.
[52] C. de Beer and J. de Witte. Production flow synthesis. CIRP Annals, 27:389–
341, 1978.
[55] F.M. Defersha and M. Chen. A comprehensive mathematical model for the de-
sign of cellular manufacturing systems. International Journal of Production
Economics, 103:767–783, 2006.
[57] M. Diallo, H. Pierreval, and A. Quilliot. Manufacturing cells design with flexible
routing capability in presence of unreliable machines. International Journal of
Production Economics, 74(1):175–182, 2001.
[61] D. Dobado, S. Lozano, J.M. Bueno, and J. Larraneta. Cell formation using a
fuzzy min-max neural network. International Journal of Production Research,
40(1):93–108, 2002.
[62] R.C. Dubes. How many clusters are best? An experiment. Pattern
Recognition, 20(6):645–663, 1987.
[65] H.A. Elmaraghy and P. Gu. Feature based expert parts assignment in cellular
manufacturing. Journal of Manufacturing Systems, 8:139–152, 1988.
[66] E. Falkenauer. A hybrid grouping genetic algorithm for bin packing. Journal
of Heuristics, 2(1):5–30, 1996.
[67] E. Falkenauer. Genetic Algorithms for Grouping Problem. Wiley: New York,
1998.
[72] C.C. Gallagher and W.A. Knight. Group Technology. E. Horwood, Chichester
and New York, 1993.
[75] F. Glover. Future paths for integer programming and links to artificial intelli-
gence. Decision Sciences, 8:156–166, 1977.
[76] F. Glover and M. Laguna. Tabu Search. Kluwer Academic Publishers, 1997.
[78] D.E. Goldberg. Optimal Initial Population Size for Binary-Coded Genetic
Algorithms. PhD thesis, University of Alabama, 1985.
[81] D.E. Goldberg, B. Kork, and K. Deb. Messy genetic algorithms: Motivation,
analysis, and first results. Complex Systems, 3:493–530, 1989.
[82] J.F. Goncalves and M. Resende. A hybrid genetic algorithm for manufacturing
cell formation. Technical report, 2002.
[83] J.F. Goncalves and M.G.C. Resende. An evolutionary algorithm for manufac-
turing cell formation. Computers & Industrial Engineering, 47:247–273, 2004.
[84] M. Gravel, A.L. Nsakanda, and W. Price. Efficient solutions to the cell-
formation problem with multiple routings via a double-loop genetic algorithm.
European Journal of Operational Research, 109:286–298, 1998.
[87] K. Gunasingh and R.S. Lashkari. Part routing and machine grouping in fms - an
integrated approach. Computer Applications in Production and Engineering,
pages 651–658, 1989.
[89] T. Gupta and H. Seifoddini. Production data based similarity coefficient for
machine-component grouping decisions in the design of a cellular manufacturing
system. International Journal of Production Research, 28:1247–1269, 1990.
[90] Y.P. Gupta, M.C. Gupta, A. Kumar, and C. Sundram. Minimizing total in-
tercell and intracell moves in cellular manufacturing: a genetic algorithm ap-
proach. International Journal of Computer Integrated Manufacturing, 8:92–
101, 1995.
[94] S.S. Heragu. Group technology and cellular manufacturing. IEEE Transactions
on Systems, Man and Cybernetics, 24(2):203–215, 1994.
[95] S.S. Heragu and J.S. Chen. Optimal solution of cellular manufacturing system
design: Benders’ decomposition approach. European Journal of Operational
Research, 107:175–192, 1998.
[96] S.S. Heragu and Y.P. Gupta. A heuristic for designing cellular manufacturing
facilities. International Journal of Production Research, 32(1):125–140, 1997.
[97] S.S. Heragu and S.R. Kakuturi. Grouping and placement of machines cells. IIE
Transactions, 29(2):561–571, 1997.
[98] A. Hertz, B. Jaumard, C.C. Ribeiro, and W.P.F. Filho. A multi-criteria tabu
search approach to cell formation problems in group technology with multiple
objectives. RAIRO/Operations Research, 28(3):303–328, 1994.
[99] C. Hicks. A genetic algorithm tool for optimizing cellular or functional layouts
in the capital goods industry. International Journal of Production Economics,
104(2):598–614, 2006.
[100] J.H. Holland. Adaptation in Natural and Artificial Systems. Ann Arbor, Michi-
gan: The University of Michigan Press, 1975.
[102] J. Horn and N. Nafpliotis. Multiobjective optimisation using the niched pareto
genetic algorithm. Technical report, IlliGAL Report 93005, University of Illinois,
USA, 1993.
[103] J. Horn, N. Nafpliotis, and D.E. Goldberg. A niched pareto genetic algorithm
for multiobjective optimization. Proceedings of the First IEEE Conference on
Evolutionary Computation, pages 82–87, 1994.
[104] B. Hsu. Hybrid Metaheuristics for Generalized Network Design Problems. PhD
thesis, Faculty of Informatics, Vienna University of Technology, Vienna, Austria, 2008.
[107] S.A. Hussain and V.U.K. Sastry. Application of genetic algorithm for bin pack-
ing. International Journal of Computer Mathematics, 63(3/4):203–214, 1997.
[108] H. Hwang and P. Ree. Routes selection for the cell formation problem with
alternative part process plans. Computers & Industrial Engineering, 30(3):423–
431, 1996.
[109] A.A. Islier. Genetic algorithm approach for multiple criteria facility layout
design. International Journal of Production Research, 36(6):1549–1569, 1998.
[111] M.S. Jabalameli, J. Arkat, and M.S. Sakri. Applying metaheuristics in the
generalized cell formation problem considering machine reliability. Journal of
the Chinese Institute of Industrial Engineers, 25:261–274, 2008.
[112] F.R. Jacobs and D.J. Bragg. Repetitive lots: Flow time reduction through
sequencing and dynamic batch sizing. Decision Sciences, 19:281–294, 1996.
[113] A.K. Jain and R.C. Dubes. Algorithms for Clustering Data. Prentice-Hall,
Inc., Prentice-Hall advanced reference series, Upper Saddle River, NJ, 1988.
[114] A.K. Jain, M.N. Murty, and P.J. Flynn. Data clustering: A review.
ACM-Computer Surveys, 31(3):264–323, 1999.
[115] T. James, E.C. Brown, and J.B. Keeling. A hybrid grouping genetic algorithm
for the cell formation problem. Computers & Operations Research, 34:2059–
2079, 2007.
[116] R.A. Jarvis and E.A. Patrick. Clustering using a similarity measure based on
shared nearest neighbors. IEEE Transactions on Computers, C-22(11):1025–
1034, 1973.
[117] S. Jayaswal and G.K. Adil. An efficient algorithm for cell formation
with sequence data, machine replications and alternative process routings.
International Journal of Production Research, 42(12):2419–2433, 2004.
[118] G. Jeon and H.R. Leep. Forming part families by using genetic algorithm
and designing machine cells under demand changes. Computers & Operations
Research, 33(1):263–283, 2006.
[119] J.A. Joines. Manufacturing Cell Design Using Genetic Algorithms. PhD thesis,
North Carolina State University, Raleigh, NC, 1993.
[120] J.A. Joines, C.T. Culbreth, and R.E. King. Manufacturing cell design: an
integer programming model employing genetic algorithms. IEE Transactions,
28:69–85, 1996.
[121] J.A. Joines and C.R. Houck. On the use of non stationary penalty functions to
solve non linear constrained optimisation problems with ga’s. In Proceedings of
the First IEEE International Conference on Evolutionary Computation, IEEE
Press,, pages 579–584, 1994.
[122] J.A. Joines, M.G. Kay, and R.E. King. A hybrid genetic algorithm for manufac-
turing cell design. Technical report, Technical Report, Department of Industrial
Engineering, North Carolina State University, 1997.
[123] J.A. Joines, R.E. King, and C.T. Culbreth. A comprehensive review
of production-oriented manufacturing cell formation techniques. International
Journal of Factory Automation and Information Management, 3(3-4):225–265,
1996.
[124] S. V. Kamarthi, S.R.T. Kumara, F.T.S. Yu, and I. Ham. Neural networks
and their application in component design data retrieval. Journal of Intelligent
Manufacturing, 1(2):125–140, 1990.
[127] Y. Kao and Y.B. Moon. A unified group technology implementation using the
back-propagation learning rule of neural networks. Computers & Industrial
Engineering, 20(4):425–437, 1991.
[129] D. Karaboga and B. Basturk. A powerful and efficient algorithm for numerical
function optimization: Artificial bee colony (abc) algorithm. Journal Global
Optim., 39:459–471, 2007.
[130] D. Karaboga and B. Basturk. On the performance of artificial bee colony (abc)
algorithm. Appl. Soft Comput, 8:687–697, 2008.
[131] J.A.K. Kasilingam and S.D. Bhole. Cell formation in flexible manufactur-
ing systems under resource constraints. Computers & Industrial Engineering,
19:437–441, 1990.
[132] R.G. Kasilingam and R.S. Lashkari. Cell formation in the presence of alter-
nate process plans in flexible manufacturing systems. Production Planning &
Control, 2:135–141, 1991.
[133] M. Kazerooni, H.S. Luong, and K. Abhary. A genetic algorithm based cell
design considering alternative routing. Computer-Integrated Manufacturing
Systems, 10(2):93–108, 1997.
[134] M. Kazerooni, L.H. Luong, and K. Abhary. Cell formation using ge-
netic algorithms. International Journal Flexible, Automation and Integrated
Manufacturing, 3(3-4):219–235, 1996.
[135] B.W. Kernighan and S. Lin. An efficient heuristic procedure for partitioning
graph’s. Bell System Technical Journal, 49(2):291–307, 1970.
[136] S.K. Khator and S.K. Irani. Cell formation in group technology: A new ap-
proach. Computers & Industrial Engineering, 12(2):131–142, 1987.
[137] Y.D. Kim. A study on surrogate objectives for loading a certain type of flexible
manufacturing system. International Journal of Production Research, 31:381–
392, 1993.
[138] Y.K. Kim, K. Park, and J. Ko. A symbiotic evolutionary algorithm for the inte-
gration of process planning and job shop scheduling. Computers & Operations
Research, 30:1151–1171, 2003.
[141] S. Kirkpatrick, C.D. Gellat, and M.P. Vecchi. Optimization by simulated an-
nealing. Science, 220:671–680, 1983.
[142] C.S. Kumar and M.P. Chandrasekharan. Grouping efficacy: A quantitative cri-
terion for goodness of block diagonal forms of binary matrices in group tech-
nology. International Journal of Production Research, 28:233–243, 1990.
[143] K.R. Kumar, A. Kusiak, and A. Vannelli. Grouping of parts and components
in flexible manufacturing systems. European Journal of Operational Research,
24:387–397, 1986.
[144] K.R. Kumar and A. Vannelli. Strategic subcontracting for efficient dis-
aggregated manufacturing. International Journal of Production Research,
25(12):1715–1728, 1987.
[149] A. Kusiak and C.H. Cheng. A branch-and-bound algorithm for solving the
group technology problem. Annals of Operations Research, 26:415–431, 1990.
[150] A. Kusiak and M. Cho. Similarity coefficient algorithm for solving the
group technology problem. International Journal of Production Research,
30(11):2633–2646, 1992.
[151] A. Kusiak and W. S. Chow. Efficient solving of the group technology problem.
Journal of Manufacturing Systems, 6(2):117–124, 1987.
[153] A. Kusiak and S.S. Heragu. Group technology. Computers in Industry, 9:83–91,
1987.
[154] P.J.M. Van Laarhoven and E.H.L. Aarts. Simulated Annealing: Theory and
Applications. D. Reidel Publishing Company, Dordrecht, Holland, 1987.
[155] C.Y. Lee, L. Lei, and M. Pinedo. Current trends in deterministic scheduling.
Annals of Operations Research, 70:1–41, 1997.
[156] S.D. Lee and C.P. Chiang. Cell formations in the uni-directional loop material
handling environment. European Journal of Operational Research, 137(2):401–
420, 2002.
[157] D. Lei and Z. Wu. Tabu search approach based on a similarity coefficient for cell
formation in generalized group technology. International Journal of Production
Research, 43:4035–4047, 2005.
[158] T.W. Liao and L.J. Chen. An evaluation of art1 neural models for gt part family
and machine cell forming. Journal of Manufacturing Systems, 12(4):282–290,
1993.
[159] Y.J. Lin and J.J. Solberg. Effectiveness of flexible routing control. The
International Journal of Flexible Manufacturing Systems, 3:189–211, 1991.
[161] R. Logendran. Effect of the identification of key machines in the cell for-
mation problem of cellular manufacturing systems. Computers & Industrial
Engineering, 20:439–449, 1990.
[162] R. Logendran. A workload based model for minimizing total intercell and
intracell moves in cellular manufacturing. International Journal of Production
Research, 28(5):913–925, 1990.
[163] R. Logendran, P. Ramakrishna, and C. Srikandarajah. Tabu search-based
heuristics for cellular manufacturing systems in the presence of alternative pro-
cess plans. European Journal of Operational Research, 32(2):273–297, 1994.
[164] S. Lozano, F. Guerrero, I. Eguia, and L. Onieva. Cell design and loading in the
presence of alternative routing. International Journal of Operational Research,
37(14):3289–3304, 1999.
[165] S.Y. Lu and K.S. Fu. A sentence-to-sentence clustering procedure for pat-
tern analysis. IEEE Transactions on Systems, Man, and Cybernetics-Part A:
Systems and Humans, 8(5):381–389, 1978.
[166] J. MacQueen. Some methods for classification and analysis of multivariate
observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical
Statistics and Probability, pages 281–297, 1967.
[167] I. Mahdavi, M.M. Paydar, M. Solimanpur, and A. Heidarzade. Genetic algo-
rithm approach for solving a cell formation problem in cellular manufacturing.
Expert Systems with Applications, 36(3):6598–6604, 2009.
[168] I. Mahdavi, J. Rezaeian, K. Shanker, and Z.R. Amari. A set partitioning
based heuristic procedure for incremental cell formation with routing flexibility.
International Journal of Production Research, 44(24):5343–5361, 2006.
[169] O. Mahesh and G. Srinivasan. Incremental cell formation considering alter-
native machines. International Journal of Operational Research, 40(14):3291–
3310, 2002.
[170] S.A. Mansouri, S.M. Moattar-Husseini, and S.T. Newman. A review of the mod-
ern approaches to multi-criteria cell design. International Journal of Production
Research, 38(5):1201–1218, 2000.
[171] S.A. Mansouri, S.M. Moattar-Husseini, and S.H. Zegordi. A genetic algorithm
for multiple objective dealing with exceptional elements in cellular manufac-
turing. Production Planning & Control, 14(5):437–446, 2003.
[172] J. McAuley. Machine grouping for efficient production. The Production
Engineer, 51:53–57, 1972.
[173] W.T. McCormick Jr., P.J. Schweitzer, and T.W. White. Problem decompo-
sition and data reorganization by a cluster technique. Operations Research,
20(5):993–1009, 1972.
[174] J. McHugh. Algorithmic Graph Theory. Prentice Hall International, London,
1990.
[189] M.N. Murty, A.K. Jain, P. Asokan, and V. Baskaran. Knowledge-based cluster-
ing scheme for collection management and retrieval of library books. Pattern
Recognition, 28:949–964, 1995.
[191] R. Muther. Systematic Layout Planning (SLP). Cahners Brooks, Boston, 1973.
[192] R. Nagi, G. Harhalakis, and J. Proth. Multiple routings and capacity consid-
erations in group technology applications. European Journal of Operational
Research, 28(12):2243–2257, 1990.
[193] A. Nsakanda, M. Diaby, and W.L. Price. Hybrid genetic approach for solv-
ing large-scale capacitated cell formation problems with multiple routings.
European Journal of Operational Research, 171(3):1051–1070, 2006.
[195] E. Olivia-Lopez and G.F.K. Purcheck. Load balancing for group technology
planning and control. International Journal of MTDR, 19:259–274, 1979.
[197] G. Ozturk, K.Z. Ozturk, and A.A. Islier. A comparison of competitive neural
network with other ai techniques in manufacturing cell formation. In ICNC
2006, L Jiao et al (Eds), Part I, LNCS 4221, pages 575–585, 2006.
[199] V. Pareto. Cours d'économie politique. Vol. I & II, F. Rouge, Lausanne,
Switzerland, 1988.
[200] G.T. Parks and I. Miller. Selective breeding in a multi-objective genetic al-
gorithm. Proceedings of the Parallel Problem Solving from Nature, V, pages
250–259, 1998.
[202] D.T. Pham, A. Ghanbarzade, E. Koc, S. Otri, S. Rahim, and M. Zaidi. The
bees algorithm: a novel tool for complex optimization problems. In IPROMS
2006, Cardiff-England, pages 454–461, 2006.
[204] M.F. Plaquin and H. Pierreval. Cell formation using evolutionary algorithms
with certain constraints. International Journal of Production Economics,
64:267–278, 2000.
[205] P. Pongcharoen, D.J. Stewardson, C. Hicks, and P.M. Braiden. Applying de-
signed experiments to optimise the performance of genetic algorithms used for
scheduling complex products in the capital goods industry. Journal of Applied
Statistics, 28(3):441–455, 2001.
[206] R.C. Prim. Shortest connection networks and some generalizations. Bell System
Technical Journal, 36:1389–1401, 1957.
[209] R. Rai, S. Kameshwaran, and M.K. Tiwari. Machine-tool selection and operation allocation in FMS: Solving a fuzzy goal programming model using a genetic algorithm. International Journal of Production Research, 40(3):61–65, 2002.
[210] R. Rajagopalan and J.L. Batra. Design of cellular production system: a graph-
theoretic approach. International Journal of Production Research, 13:567–579,
1975.
[211] D. Rajamani, N. Singh, and Y.P. Aneja. Integrated design of cellular manu-
facturing systems in the presence of alternative process plans. International
Journal of Production Research, 28:1541–1554, 1990.
[212] D. Rajamani, N. Singh, and Y.P. Aneja. Selection of parts and machines for
cellularization: a mathematical programming approach. European Journal of
Operational Research, 62(1):47–54, 1992.
[215] H.A. Rao and P. Gu. Design of cellular manufacturing systems: A neural
network approach. International Journal of Systems Automation: Research
and Applications, 2(4):407–424, 1993.
[216] C.R. Reeves. A genetic algorithm for flowshop sequencing. Computers &
Operations Research, 22:5–13, 1995.
[218] B.J. Ritzel, J.W. Eheart, and S. Ranjithan. Using genetic algorithms to solve a multiple objective groundwater pollution containment problem. Water Resources Research, 30:1589–1603, 1994.
[219] R.S. Rosenberg. Simulation of Genetic Populations with Biochemical
Properties. PhD thesis, University of Michigan, 1967.
[220] B.J. Ross. Searching for search algorithms: Experiments in meta-search.
Technical report, Brock University, Department of Computer Science, St.
Catharines, ON, Canada, L2S 3A1, 2002.
[221] B. Roy. Méthodologie d'Aide à la Décision. Economica, Paris, France, 1985.
[222] N. Safaei, M. Saidi-Mehrabad, and M.S. Jabal-Ameli. A hybrid simulated
annealing for solving an extended model of dynamic cellular manufacturing
system. European Journal of Operational Research, 185:563–592, 2008.
[223] S. Sankaran. Multiple objective decision making approach to cell formation: a
goal programming model. Mathematical and Computer Modelling, 13:71–82, 1990.
[224] S. Sankaran and G. Kasilingam. An integrated approach to cell formation
and part routing in group technology manufacturing systems. Engineering
Optimization, 16:235–245, 1990.
[225] S. Sarin and E.M. Dar-El. Scheduling parts in an FMS. Large Scale Systems,
11:83–94, 1986.
[226] B. Sarker and Y. Xu. Operation sequences-based cell formation methods: a
critical survey. Production Planning & Control, 9(8):771–783, 1998.
[227] B.R. Sarker. Measures of grouping efficiency in cellular manufacturing systems.
European Journal of Operational Research, 130(3):588–611, 2001.
[228] B.R. Sarker and K.M.S. Islam. Relative performances of similarity and dissim-
ilarity measures. Computers & Industrial Engineering, 37(4):769–807, 1999.
[229] B.R. Sarker and Z. Li. Measuring matrix-based cell formation with alternative
routings. Journal of the Operational Research Society, 49:953–965, 1998.
[230] D. Sathiaraj and B.R. Sarker. Common parts grouping heuristic: an iterative
procedure to cell formation. Production Planning & Control, 13(5):481–489,
2002.
[231] J.D. Schaffer. Some Experiments in Machine Learning Using Vector Evaluated Genetic Algorithms. PhD thesis, Vanderbilt University, Nashville, TN, 1984.
[232] J.D. Schaffer. Multiple objective optimization with vector evaluated genetic
algorithms. In Proceedings of the First International Conference on Genetic
Algorithms, pages 93–100, 1985.
[233] J.D. Schaffer, R.A. Caruana, L.J. Eshelman, and R. Das. A study of control
parameters affecting online performance of genetic algorithms for function op-
timization. In Proceedings of the First International Conference on Genetic
Algorithms, George Mason University, United States, pages 51–60, 1989.
BIBLIOGRAPHY 301
[234] J. Schaller. Tabu search procedures for the cell formation problem with intra-
cell transfer costs as function of cell size. Computers & Industrial Engineering,
49:449–462, 2005.
[235] A. Sedqui, P. Baptiste, J. Favrel, and M. Martinez. Manufacturing sequence family grouping for FMS design - a new approach. In Emerging Technologies and Factory Automation, ETFA95, INRIA/IEEE Symposium, volume 1, pages 429–437, 1995.
[236] H. Seifoddini. Single linkage versus average linkage clustering in machine cells
formation application. Computers & Industrial Engineering, 16(3):416–426,
1989.
[237] H. Seifoddini and P.M. Wolfe. Application of the similarity coefficient method
in group technology. IIE Transactions, 18(3):271–277, 1986.
[238] H.M. Selim, R.G. Askin, and A.J. Vakharia. Cell formation in group technology: review, evaluation and directions of future research. Technical report, Working Paper, University of Arizona, Tucson, 1994.
[239] H.M. Selim, R.G. Askin, and A.J. Vakharia. Cell formation in group technology:
Review, evaluation and directions for future research. Computers & Industrial
Engineering, 34(1):3–20, 1998.
[240] H.M. Selim, A.J. Vakharia, and R.G. Askin. Flexibility in cellular manufactur-
ing: A framework and measures. Technical report, DIS Department, University
of Florida, Gainesville, FL, 1995.
[241] S. Shafer, G. Kern, and J.C. Wei. A mathematical programming approach
for dealing with exceptional elements in cellular manufacturing. International
Journal of Production Research, 30(5):1029–1036, 1992.
[242] S.M. Shafer. Part-machine labour grouping: The problem and solution meth-
ods. In N.C. Suresh and J.M. Kay, editors, Group Technology and Cellular
Manufacturing: A State of the Art Synthesis of Research and Practice, pages
131–152. Kluwer Academic Publishers, Dordrecht, 1998.
[243] S.M. Shafer and D.F. Rogers. A goal programming approach to cell formation
problems. Journal of Operations Management, 10:28–43, 1991.
[244] S.M. Shafer and D.F. Rogers. Similarity and distance measures for cellular
manufacturing. Part I. A survey. International Journal of Production Research,
31:1133–1142, 1993.
[245] S.M. Shafer and D.F. Rogers. Similarity and distance measures for cellular
manufacturing. Part II. An extension and comparison. International Journal of
Production Research, 31:1315–1326, 1993.
[246] A. Shtub. Modeling group technology cell formation as a generalized assignment
problem. International Journal of Production Research, 27:775–782, 1989.
[247] N. Singh. Design of cellular manufacturing systems - an invited review.
European Journal of Operational Research, 69:284–291, 1993.
[248] D. Sinriech and A. Meir. Process selection and tool assignment in auto-
mated cellular manufacturing using genetic algorithms. Annals of Operations
Research, 77:51–78, 1998.
[249] S. Sofianopoulou. Application of simulated annealing to a linear model for the formation of machine cells in group technology. International Journal of Production Research, 35(2):501–511, 1997.
[250] S. Sofianopoulou. Manufacturing cells design with alternative process plans
and/or replicate machines. International Journal of Production Research,
37(3):707–720, 1999.
[251] M. Solimanpur, P. Vrat, and R. Shankar. Ant colony optimization algorithm
to the inter-cell layout problem in cellular manufacturing. European Journal
of Operational Research, 157(3):592–606, 2004.
[252] M. Solimanpur, P. Vrat, and R. Shankar. A multi-objective genetic algorithm
approach to the design of cellular manufacturing systems. International Journal
of Production Research, 42(7):1419–1441, 2004.
[253] K. Spiliopoulos and S. Sofianopoulou. Manufacturing cell design with alterna-
tive routings in generalized group technology: Reducing the complexity of the
solution space. International Journal of Production Research, 45(6):1355–1367,
2007.
[254] J. Sridhar and C. Rajendran. Scheduling in flowshop and cellular manufacturing
systems with multiple objectives - a genetic algorithmic approach. Production
Planning & Control, 7(4):374–382, 1996.
[255] N. Srinivas and K. Deb. Multi-objective function optimization using non-
dominated sorting genetic algorithms. Evolutionary Computation, 2(3):221–
248, 1994.
[256] G. Srinivasan. A clustering algorithm for machine cell formation in group
technology using minimum spanning trees. International Journal of Production
Research, 32(9):2149–2158, 1994.
[257] G. Srinivasan and T.T. Narendran. GRAFICS: a nonhierarchical clustering algorithm for group technology. International Journal of Production Research,
29(3):463–478, 1991.
[258] G. Srinivasan, T.T. Narendran, and B. Mahadevan. An assignment model
for the part families problem in group technology. International Journal of
Production Research, 28(1):145–152, 1990.
[259] L.E. Stanfel. Machine clustering for economic production. Engineering Costs
and Production Economics, 9:73–81, 1985.
[260] A. Stawowy. Evolutionary strategy for manufacturing cell design. Omega: The International Journal of Management Science, 34(1):1–18, 2006.
[261] L. Steinberg and K. Rasheed. Optimizing by searching a tree of populations.
In Proceedings of GECCO'99, Orlando, USA, pages 1723–1730, 1999.
[263] N.C. Suresh and J. Slomp. A multi-objective procedure for labor assignments
and grouping in capacitated cell formation problems. International Journal of
Production Research, 39(18):4103–4131, 2001.
[264] N.C. Suresh, J. Slomp, and S. Kaparthi. The capacitated cell formation problem:
a new hierarchical methodology. International Journal of Production Research,
33(6):1761–1784, 1995.
[265] S. Suresh, P.B. Sujit, and A.K. Rao. Particle swarm optimization approach for
multi-objective composite-beam design. Composite Structures, 81:598–605, 2007.
[267] K.Y. Tam. Genetic algorithms, function optimization, and facility layout de-
sign. European Journal of Operational Research, 63:322–346, 1992.
[268] A. Tariq, I. Hussain, and A. Ghafoor. A hybrid genetic algorithm for machine-
part grouping. Computers & Industrial Engineering, 56(1):347–356, 2009.
[271] I.C. Trelea. The particle swarm optimization algorithm: Convergence analysis
and parameter selection. Information Processing Letters, 85:317–325, 2003.
[272] T. Tunnukij and C. Hicks. An enhanced grouping genetic algorithm for solv-
ing the cell formation problem. International Journal of Production Research,
47(7):1989–2007, 2009.
[274] M.K. Uddin and K. Shanker. Grouping of parts and machines in presence
of alternative process routes by genetic algorithm. International Journal of
Production Economics, 76(3):219–228, 2002.
[275] B. Ulutas and A.A. Islier. The performance of clonal selection algorithm for cell
formation problem compared to other nature based methods. In International
Symposium on Group Technology and Cellular Manufacturing (GT/CM2009),
Kitakyushu, Japan 16-18 February, 2009.
[276] B. Ulutas and T. Saraç. A clonal selection algorithm for cell formation
problem with alternative routings. In International Symposium on Group
Technology and Cellular Manufacturing (GT/CM2009), Kitakyushu, Japan
16-18 February, 2009.
[277] A.J. Vakharia and Y.L. Chang. Cell formation in group technology: a com-
binatorial search approach. International Journal of Production Research,
35(7):2025–2043, 1997.
[287] E. Vin, P.G. DeLit, and A. Delchambre. Une approche intégrée pour résoudre le
problème de formation des cellules de production avec des routages alternatifs.
In MOSIM03 world symposium. April 23 - 25, Toulouse, France, 2003.
[288] E. Vin, P.G. DeLit, and A. Delchambre. A multiple objective grouping genetic
algorithm for the cell formation problem with alternative routings. Journal of
Intelligent Manufacturing, 16(2):189–206, 2005.
[293] P. Vincke. Multicriteria Decision Aid. John Wiley & Sons, 1989.
[294] P. Vivekanand and T.T. Narendran. Logical cell formation in FMS, using flexibility-based criteria. International Journal of Flexible Manufacturing Systems, 10:163–181, 1998.
[295] S. Voss, S. Martello, I. Osman, and C. Roucairol. Meta-Heuristics 98: Theory and Applications. Kluwer Academic Publishers, Boston, MA, 1999.
[297] B.J. Wagner and G.L. Ragatz. The impact of lot splitting on due date perfor-
mance. Journal of Operations Management, 12(1):13–25, 1994.
[298] S.J. Wang and C. Roze. Formation of machine cells and part families: A
modified p-median model and a comparative study. International Journal of
Production Research, 35(5):1259–1286, 1997.
[299] S.J. Wang and B.R. Sarker. Locating cells with bottleneck machines in cel-
lular manufacturing systems. International Journal of Production Research,
40(2):403–424, 2002.
[300] J.C. Wei and N. Gaither. A capacity constrained multiobjective cell formation
method. Journal of Manufacturing Systems, 9(3):222–232, 1990.
[301] D.S. Weile, E. Michielssen, and D.E. Goldberg. Genetic algorithm design
of pareto-optimal broad band microwave absorbers. IEEE Transactions on
Electromagnetic Compatibility, 38, 1996.
[304] Y. Kao, W.H. Liao, and C.M. Fan. A genetic algorithm for a cell formation problem with multiple objectives. Journal of Network and Computer Applications, 31:387–401, 2008.
[305] D.H. Wolpert and W.G. Macready. No free lunch theorems for optimization.
IEEE Transactions on Evolutionary Computation, 1:67–82, 1997.
[306] Y. Won. New p-median approach to cell formation with alternative process
plans. International Journal of Production Research, 38(1):229–240, 2000.
[307] Y.K. Won and S.H. Kim. An assignment method for the part-machine cell
formation problem in the presence of multiple process routes. Engineering
Optimization, 22:231–240, 1994.
[308] Y.K. Won and S.H. Kim. Multiple criteria clustering algorithm for solving
the group technology problem with multiple process routings. Computers &
Industrial Engineering, 32(1):207–220, 1997.
[310] T.H. Wu, J.F. Chen, and J.Y. Yeh. A decomposition approach to the cell
formation problem with alternative process plans. The International Journal
of Advanced Manufacturing Technology, 24(11/12):834–840, 2004.
[311] T.H. Wu, S.H. Chung, and C.C. Chang. Hybrid simulated annealing algorithm
with mutation operator to the cell formation problem with alternative process
routing. Expert Systems with Applications, 36:3652–3661, 2009.
[312] H. Xu and H.P. Wang. Part family formation for gt applications based on
fuzzy mathematics. International Journal of Production Research, 27:1637–
1651, 1989.
[313] J. Yang and R.H. Deane. Flexible parts routing in manufacturing system. IIE Transactions, 5(4-5):87–96, 1994.
[314] M.S. Yang, W.L. Hung, and F.C. Cheng. A comparative investigation of hi-
erarchical clustering techniques and dissimilarity measures applied to the cell
formation problem. Journal of Operations Management, 13:117–138, 1995.
[315] K. Yasuda, L. Hu, and Y. Yin. A grouping genetic algorithm for the multi-
objective cell formation problem. International Journal of Production Research,
43(4):829–853, 2005.
[316] K. Yasuda and Y. Yin. A dissimilarity measure for solving the cell formation
problem in cell manufacturing. Computers & Industrial Engineering, 39:1–17,
2001.
[318] C.T. Yu and V.V. Raghavan. A single pass method for determining the relationship between terms. Journal of the American Society for Information Science, 28:345–354, 1977.
[319] L.A. Zadeh. Fuzzy sets. Information and Control, 8:338–353, 1965.
[320] R.G. Özdemir, G. Gençyilmaz, and T. Aktin. The modified fuzzy art and a two-
stage clustering approach to cell design. Information Sciences, 177(23):5219–
5236, 2007.
[321] C.T. Zahn. Graph-theoretical methods for detecting and describing gestalt clusters. IEEE Transactions on Computers, C-20(1):68–86, 1971.
[322] C. Zhao and Z. Wu. A genetic algorithm for manufacturing cell formation with
multiple routes and multiple objectives. International Journal of Production
Research, 38(2):385–395, 2000.
[324] S. Zolfaghari and M. Liang. A new genetic algorithm for the machine/part
grouping problem involving processing times and lot sizes. Computers &
Industrial Engineering, 45:713–731, 2003.
[326] B.W. Zulawinski, W.F. Punch, and E.D. Goodman. The grouping genetic
algorithm (GGA) applied to the bin balancing problem. Technical report, Genetic
Algorithm Research and Applications Group, Michigan State University, 1995.