
Swarm Intelligence and
Bio-Inspired Computation
Theory and Applications
Edited by
Xin-She Yang
Department of Design Engineering and Mathematics,
Middlesex University, UK
Zhihua Cui
Complex System and Computational Intelligence Laboratory, Taiyuan University of Science and Technology, China
Renbin Xiao
Institute of Systems Engineering, Huazhong University of Science and Technology, China
Amir Hossein Gandomi
Department of Civil Engineering, University of Akron, OH, USA
Mehmet Karamanoglu
Department of Design Engineering and Mathematics, Middlesex University, UK
AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD
PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO

Elsevier
32 Jamestown Road, London NW1 7BY
225 Wyman Street, Waltham, MA 02451, USA
First edition 2013
Copyright © 2013 Elsevier Inc. All rights reserved
No part of this publication may be reproduced or transmitted in any form or by any means, electronic
or mechanical, including photocopying, recording, or any information storage and retrieval system,
without permission in writing from the publisher. Details on how to seek permission, further
information about the Publisher’s permissions policies and our arrangement with organizations such as
the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website:
www.elsevier.com/permissions.
This book and the individual contributions contained in it are protected under copyright by the
Publisher (other than as may be noted herein).
Notices
Knowledge and best practice in this field are constantly changing. As new research and experience
broaden our understanding, changes in research methods, professional practices, or medical treatment
may become necessary.
Practitioners and researchers must always rely on their own experience and knowledge in evaluating
and using any information, methods, compounds, or experiments described herein.
In using such information or methods they should be mindful of their own safety and the safety of
others, including parties for whom they have a professional responsibility. To the fullest extent of the
law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury
and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from
any use or operation of any methods, products, instructions, or ideas contained in the material herein.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress
ISBN: 978-0-12-405163-8
For information on all Elsevier publications
visit our website at
store.elsevier.com
This book has been manufactured using Print On Demand technology. Each copy is produced to order
and is limited to black ink. The online version of this book will show color figures where appropriate.

Contents
List of Contributors xv
Preface xix
Part One Theoretical Aspects of Swarm Intelligence and
Bio-Inspired Computing 1
1 Swarm Intelligence and Bio-Inspired Computation: An Overview 3
Xin-She Yang and Mehmet Karamanoglu
1.1 Introduction 3
1.2 Current Issues in Bio-Inspired Computing 5
1.2.1 Gaps Between Theory and Practice 5
1.2.2 Classifications and Terminology 6
1.2.3 Tuning of Algorithm-Dependent Parameters 7
1.2.4 Necessity for Large-Scale and Real-World Applications 7
1.2.5 Choice of Algorithms 8
1.3 Search for the Magic Formulas for Optimization 8
1.3.1 Essence of an Algorithm 8
1.3.2 What Is an Ideal Algorithm? 8
1.3.3 Algorithms and Self-Organization 9
1.3.4 Links Between Algorithms and Self-Organization 10
1.3.5 The Magic Formulas 11
1.4 Characteristics of Metaheuristics 12
1.4.1 Intensification and Diversification 12
1.4.2 Randomization Techniques 12
1.5 Swarm-Intelligence-Based Algorithms 13
1.5.1 Ant Algorithms 13
1.5.2 Bee Algorithms 14
1.5.3 Bat Algorithm 14
1.5.4 Particle Swarm Optimization 15
1.5.5 Firefly Algorithm 16
1.5.6 Cuckoo Search 17
1.5.7 Flower Pollination Algorithm 19
1.5.8 Other Algorithms 20
1.6 Open Problems and Further Research Topics 20
References 21

2 Analysis of Swarm Intelligence–Based Algorithms for Constrained
Optimization 25
M.P. Saka, E. Doğan and Ibrahim Aydogdu
2.1 Introduction 25
2.2 Optimization Problems 27
2.3 Swarm Intelligence–Based Optimization Algorithms 28
2.3.1 Ant Colony Optimization 28
2.3.2 Particle Swarm Optimizer 30
2.3.3 ABC Algorithm 32
2.3.4 Glowworm Swarm Algorithm 33
2.3.5 Firefly Algorithm 35
2.3.6 Cuckoo Search Algorithm 36
2.3.7 Bat Algorithm 38
2.3.8 Hunting Search Algorithm 39
2.4 Numerical Examples 41
2.4.1 Example 1 41
2.4.2 Example 2 42
2.5 Summary and Conclusions 44
References 47
3 Lévy Flights and Global Optimization 49
Momin Jamil and Hans-Jürgen Zepernick
3.1 Introduction 49
3.2 Metaheuristic Algorithms 50
3.3 Lévy Flights in Global Optimization 52
3.3.1 The Lévy Probability Distribution 53
3.3.2 Simulation of Lévy Random Numbers 54
3.3.3 Diversification and Intensification 55
3.4 Metaheuristic Algorithms Based on Lévy Probability
Distribution: Is It a Good Idea? 59
3.4.1 Evolutionary Programming Using Mutations Based on the
Lévy Probability Distribution 59
3.4.2 Lévy Particle Swarm 60
3.4.3 Cuckoo Search 61
3.4.4 Modified Cuckoo Search 63
3.4.5 Firefly Algorithm 63
3.4.6 Eagle Strategy 66
3.5 Discussion 67
3.6 Conclusions 68
References 69
4 Memetic Self-Adaptive Firefly Algorithm 73
Iztok Fister, Xin-She Yang, Janez Brest and Iztok Jr. Fister
4.1 Introduction 73
4.2 Optimization Problems and Their Complexity 76

4.3 Memetic Self-Adaptive Firefly Algorithm 79
4.3.1 Self-Adaptation of Control Parameters 81
4.3.2 Population Model 82
4.3.3 Balancing Between Exploration and Exploitation 83
4.3.4 The Local Search 86
4.3.5 Scheme of the MSA-FFA 87
4.4 Case Study: Graph 3-Coloring 87
4.4.1 Graph 3-Coloring 89
4.4.2 MSA-FFA for Graph 3-Coloring 90
4.4.3 Experiments and Results 92
4.5 Conclusions 98
References 99
5 Modeling and Simulation of Ant Colony’s Labor Division:
A Problem-Oriented Approach 103
Renbin Xiao
5.1 Introduction 103
5.2 Ant Colony’s Labor Division Behavior and its Modeling Description 105
5.2.1 Ant Colony’s Labor Division 105
5.2.2 Ant Colony’s Labor Division Model 105
5.2.3 Some Analysis 108
5.3 Modeling and Simulation of Ant Colony’s Labor Division
with Multitask 109
5.3.1 Background Analysis 109
5.3.2 Design and Implementation of Ant Colony’s Labor
Division Model with Multitask 110
5.3.3 Supply Chain Virtual Enterprise Simulation 113
5.3.4 Virtual Organization Enterprise Simulation 115
5.3.5 Discussion 119
5.4 Modeling and Simulation of Ant Colony’s Labor Division
with Multistate 119
5.4.1 Background Analysis 119
5.4.2 Design and Implementation of Ant Colony’s Labor
Division Model with Multistate 121
5.4.3 Simulation Example of Ant Colony’s Labor Division
Model with Multistate 123
5.5 Modeling and Simulation of Ant Colony’s Labor Division with
Multiconstraint 127
5.5.1 Background Analysis 127
5.5.2 Design and Implementation of Ant Colony’s
Labor Division Model with Multiconstraint 128
5.5.3 Simulation Results and Analysis 132
5.6 Concluding Remarks 132
Acknowledgment 134
References 134

6 Particle Swarm Algorithm: Convergence and Applications 137
Shichang Sun and Hongbo Liu
6.1 Introduction 137
6.2 Convergence Analysis 139
6.2.1 Individual Trajectory 139
6.2.2 Probabilistic Analysis 142
6.3 Performance Illustration 146
6.3.1 Dataflow Application 146
6.4 Application in Hidden Markov Models 157
6.4.1 Parameters Weighted HMM 159
6.4.2 PSO–Viterbi for Parameters Weighted HMMs 160
6.4.3 POS Tagging Problem and Solution 160
6.4.4 Experiment 162
6.5 Conclusions 163
References 165
7 A Survey of Swarm Algorithms Applied to Discrete
Optimization Problems 169
Jonas Krause, Jelson Cordeiro, Rafael Stubs Parpinelli and Heitor Silvério Lopes
7.1 Introduction 169
7.2 Swarm Algorithms 170
7.2.1 Particle Swarm Optimization 170
7.2.2 Roach Infestation Optimization 171
7.2.3 Cuckoo Search Algorithm 171
7.2.4 Firefly Algorithm 171
7.2.5 Gravitational Search Algorithm 171
7.2.6 Bat Algorithm 172
7.2.7 Glowworm Swarm Optimization Algorithm 172
7.2.8 Artificial Fish School Algorithm 172
7.2.9 Bacterial Evolutionary Algorithm 173
7.2.10 Bee Algorithm 173
7.2.11 Artificial Bee Colony Algorithm 173
7.2.12 Bee Colony Optimization 174
7.2.13 Marriage in Honey-Bees Optimization Algorithm 174
7.3 Main Concerns to Handle Discrete Problems 174
7.3.1 Discretization Methods 175
7.4 Applications to Discrete Problems 177
7.4.1 Particle Swarm Optimization 177
7.4.2 Roach Infestation Optimization 178
7.4.3 Cuckoo Search Algorithm 178
7.4.4 Firefly Algorithm 178
7.4.5 Bee Algorithm 179
7.4.6 Artificial Bee Colony 179
7.4.7 Bee Colony Optimization 180

7.4.8 Marriage in Honey-Bees Optimization Algorithm 181
7.4.9 Other Swarm Intelligence Algorithms 181
7.5 Discussion 182
7.6 Concluding Remarks and Future Research 184
References 186
8 Test Functions for Global Optimization: A Comprehensive Survey 193
Momin Jamil, Xin-She Yang and Hans-Jürgen Zepernick
8.1 Introduction 193
8.2 A Collection of Test Functions for GO 194
8.2.1 Unimodal Test Functions 196
8.2.2 Multimodal Function 199
8.3 Conclusions 221
References 221
Part Two Applications and Case Studies 223
9 Binary Bat Algorithm for Feature Selection 225
Rodrigo Yuji Mizobe Nakamura, Luís Augusto Martins Pereira,
Douglas Rodrigues, Kelton Augusto Pontara Costa,
João Paulo Papa and Xin-She Yang
9.1 Introduction 225
9.2 Bat Algorithm 226
9.3 Binary Bat Algorithm 228
9.4 Optimum-Path Forest Classifier 228
9.4.1 Background Theory 229
9.5 Binary Bat Algorithm 231
9.6 Experimental Results 233
9.7 Conclusions 236
References 237
10 Intelligent Music Composition 239
Maximos A. Kaliakatsos-Papakostas, Andreas Floros and Michael N. Vrahatis
10.1 Introduction 239
10.2 Unsupervised Intelligent Composition 241
10.2.1 Unsupervised Composition with Cellular Automata 241
10.2.2 Unsupervised Composition with L-Systems 243
10.3 Supervised Intelligent Composition 245
10.3.1 Supervised Composition with Genetic Algorithms 246
10.3.2 Supervised Composition with Genetic Programming 247
10.4 Interactive Intelligent Composition 248
10.4.1 Composing with Swarms 250
10.4.2 Interactive Composition with GA and GP 251
10.5 Conclusions 253
References 254

11 A Review of the Development and Applications of the
Cuckoo Search Algorithm 257
Sean Walton, Oubay Hassan, Kenneth Morgan and M. Rowan Brown
11.1 Introduction 257
11.2 Cuckoo Search Algorithm 258
11.2.1 The Analogy 258
11.2.2 Cuckoo Breeding Behavior 258
11.2.3 Lévy Flights 259
11.2.4 The Algorithm 259
11.2.5 Validation 261
11.3 Modifications and Developments 261
11.3.1 Algorithmic Modifications 262
11.3.2 Hybridization 264
11.4 Applications 265
11.4.1 Applications in Machine Learning 265
11.4.2 Applications in Design 266
11.5 Conclusion 269
References 270
12 Bio-Inspired Models for Semantic Web 273
Priti Srinivas Sajja and Rajendra Akerkar
12.1 Introduction 273
12.2 Semantic Web 274
12.3 Constituent Models 276
12.3.1 Artificial Neural Network 276
12.3.2 Genetic Algorithms 281
12.3.3 Swarm Intelligence 284
12.3.4 Application in Different Aspects of Semantic Web 286
12.4 Neuro-Fuzzy System for the Web Content Filtering:
Application 287
12.5 Conclusions 291
References 291
13 Discrete Firefly Algorithm for Traveling Salesman Problem:
A New Movement Scheme 295
Gilang Kusuma Jati, Ruli Manurung and Suyanto
13.1 Introduction 295
13.2 Evolutionary Discrete Firefly Algorithm 297
13.2.1 The Representation of the Firefly 297
13.2.2 Light Intensity 298
13.2.3 Distance 298
13.2.4 Attractiveness 299
13.2.5 Light Absorption 299

13.2.6 Movement 300
13.2.7 Inversion Mutation 301
13.2.8 EDFA Scheme 301
13.3 A New DFA for the TSP 302
13.3.1 Edge-Based Movement 302
13.3.2 New DFA Scheme 304
13.4 Result and Discussion 305
13.4.1 Firefly Population 306
13.4.2 Effect of Light Absorption 306
13.4.3 Number of Updating Index 307
13.4.4 Performance of New DFA 309
13.5 Conclusion 311
Acknowledgment 311
References 311
14 Modeling to Generate Alternatives Using Biologically
Inspired Algorithms 313
Raha Imanirad and Julian Scott Yeomans
14.1 Introduction 313
14.2 Modeling to Generate Alternatives 314
14.3 FA for Function Optimization 316
14.4 FA-Based Concurrent Coevolutionary Computational
Algorithm for MGA 318
14.5 Computational Testing of the FA Used for MGA 321
14.6 An SO Approach for Stochastic MGA 324
14.7 Case Study of Stochastic MGA for the Expansion of
Waste Management Facilities 326
14.8 Conclusions 331
References 332
15 Structural Optimization Using Krill Herd Algorithm 335
Amir Hossein Gandomi, Amir Hossein Alavi and Siamak Talatahari
15.1 Introduction 335
15.2 Krill Herd Algorithm 336
15.2.1 Lagrangian Model of Krill Herding 336
15.3 Implementation and Numerical Experiments 339
15.3.1 Case I: Structural Design of a Pin-Jointed Plane Frame 340
15.3.2 Case II: A Reinforced Concrete Beam Design 342
15.3.3 Case III: 25-Bar Space Truss Design 344
15.4 Conclusions and Future Research 346
References 348

16 Artificial Plant Optimization Algorithm 351
Zhihua Cui and Xingjuan Cai
16.1 Introduction 351
16.2 Primary APOA 352
16.2.1 Main Method 352
16.2.2 Photosynthesis Operator 352
16.2.3 Phototropism Operator 353
16.2.4 Applications to Artificial Neural Network Training 354
16.3 Standard APOA 357
16.3.1 Drawbacks of PAPOA 357
16.3.2 Phototropism Operator 358
16.3.3 Apical Dominance Operator 359
16.3.4 Application to Toy Model of Protein Folding 360
16.4 Conclusion 363
Acknowledgment 364
References 364
17 Genetic Algorithm for the Dynamic Berth Allocation Problem
in Real Time 367
Carlos Arango, Pablo Cortés, Alejandro Escudero and Luis Onieva
17.1 Introduction 367
17.2 Literature Review 369
17.3 Optimization Model 370
17.3.1 Sets 372
17.3.2 Parameters 372
17.3.3 Decision Variables 372
17.4 Solution Procedure by Genetic Algorithm 375
17.4.1 Representation 375
17.4.2 Fitness 375
17.4.3 Selection of Parents and Genetic Operators 376
17.4.4 Mutation 376
17.4.5 Crossover 377
17.5 Results and Analysis 377
17.6 Conclusion 382
References 382
18 Opportunities and Challenges of Integrating Bio-Inspired
Optimization and Data Mining Algorithms 385
Simon Fong
18.1 Introduction 385
18.2 Challenges in Data Mining 387
18.2.1 Curse of Dimensionality 387
18.2.2 Data Streaming 389

18.3 Bio-Inspired Optimization Metaheuristics 390
18.4 The Convergence 391
18.4.1 Integrating BiCam Algorithms into Clustering 392
18.4.2 Integrating BiCam Algorithms into Feature Selection 394
18.5 Conclusion 400
References 401
19 Improvement of PSO Algorithm by Memory-Based Gradient
Search—Application in Inventory Management 403
Tamás Varga, András Király and János Abonyi
19.1 Introduction 403
19.2 The Improved PSO Algorithm 405
19.2.1 Classical PSO Algorithm 405
19.2.2 Improved PSO Algorithm 407
19.2.3 Results 410
19.3 Stochastic Optimization of Multiechelon Supply Chain Model 414
19.3.1 Inventory Model of a Single Warehouse 415
19.3.2 Inventory Model of a Supply Chain 417
19.3.3 Optimization Results 419
19.4 Conclusion 419
Acknowledgment 420
References 420


List of Contributors
János Abonyi Department of Process Engineering, University of Pannonia,
Veszprém, Hungary
Rajendra Akerkar Western Norway Research Institute, Sogndal, Norway
Amir Hossein Alavi Department of Civil and Environmental Engineering,
Michigan State University, East Lansing, MI, USA
Carlos Arango Ingeniería de Organización, Engineering School of Seville,
University of Seville, Camino de los Descubrimientos s/n 41092, Seville, Spain
Ibrahim Aydogdu Civil Engineering Department, Akdeniz University, Antalya,
Turkey
Janez Brest Faculty of Electrical Engineering and Computer Science, University
of Maribor, Maribor, Slovenia
Xingjuan Cai Complex System and Computational Intelligence Laboratory,
Taiyuan University of Science and Technology, Shanxi, China
Jelson Cordeiro Bioinformatics Laboratory, Federal University of Technology
Paraná, Curitiba, Brazil
Pablo Cortés Ingeniería de Organización, Engineering School of Seville,
University of Seville, Camino de los Descubrimientos s/n 41092, Seville, Spain
Kelton Augusto Pontara Costa Department of Computing, São Paulo State
University, Bauru, Brazil
Zhihua Cui Complex System and Computational Intelligence Laboratory, Taiyuan
University of Science and Technology, Shanxi, China
E. Doğan Civil Engineering Department, Celal Bayar University, Manisa, Turkey
Alejandro Escudero Ingeniería de Organización, Engineering School of Seville,
University of Seville, Camino de los Descubrimientos s/n 41092, Seville, Spain

Iztok Fister Faculty of Electrical Engineering and Computer Science, University
of Maribor, Maribor, Slovenia
Iztok Jr. Fister Faculty of Electrical Engineering and Computer Science,
University of Maribor, Maribor, Slovenia
Andreas Floros Department of Audio and Visual Arts, Ionian University, Corfu,
Greece
Simon Fong Department of Computer and Information Science, University of
Macau, Macau SAR, China
Amir Hossein Gandomi Department of Civil Engineering, University of Akron,
Akron, OH, USA
Oubay Hassan College of Engineering, Swansea University, Swansea, Wales, UK
Raha Imanirad OMIS Area, Schulich School of Business, York University,
Toronto, ON, Canada
Momin Jamil Blekinge Institute of Technology, Karlskrona, Sweden; Harman
International, Harman/Becker Automotive Systems GmbH, Karlsbad, Germany
Gilang Kusuma Jati Faculty of Computer Science, Universitas Indonesia, Kampus
UI, Depok, Jawa Barat, Indonesia
Maximos A. Kaliakatsos-Papakostas Department of Mathematics, University of
Patras, Patras, Greece
Mehmet Karamanoglu Department of Design Engineering and Mathematics,
School of Science and Technology, Middlesex University, The Burroughs, London, UK
András Király Department of Process Engineering, University of Pannonia,
Veszprém, Hungary
Jonas Krause Bioinformatics Laboratory, Federal University of Technology
Paraná, Curitiba, Brazil
Hongbo Liu Department of Computer, Dalian University of Technology, Dalian,
China; School of Information Science and Technology, Dalian Maritime
University, Dalian, China
Heitor Silvério Lopes Bioinformatics Laboratory, Federal University of
Technology Paraná, Curitiba, Brazil

Ruli Manurung Faculty of Informatics, Telkom School of Technology, Jl.
Telekomunikasi No. 1, Terusan Buah Batu, Bandung, Jawa Barat, Indonesia
Kenneth Morgan College of Engineering, Swansea University, Swansea,
Wales, UK
Rodrigo Yuji Mizobe Nakamura Department of Computing, São Paulo State
University, Bauru, Brazil
Luis Onieva Ingeniería de Organización, Engineering School of Seville,
University of Seville, Camino de los Descubrimientos s/n 41092, Seville, Spain
Rafael Stubs Parpinelli Applied Cognitive Computing Group, Santa Catarina
State University, Joinville, Brazil; Bioinformatics Laboratory, Federal University
of Technology Paraná, Curitiba, Brazil
João Paulo Papa Department of Computing, São Paulo State University, Bauru,
Brazil
Luís Augusto Martins Pereira Department of Computing, São Paulo State
University, Bauru, Brazil
Douglas Rodrigues Department of Computing, São Paulo State University, Bauru,
Brazil
M. Rowan Brown College of Engineering, Swansea University, Swansea,
Wales, UK
Priti Srinivas Sajja Department of Computer Science, Sardar Patel University,
India
M.P. Saka Department of Civil Engineering, University of Bahrain, Isa Town,
Bahrain
Shichang Sun Department of Computer, Dalian University of Technology, Dalian,
China; School of Computer Science and Engineering, Dalian Nationalities
University, Dalian, China
Suyanto Faculty of Informatics, Telkom School of Technology, Jl. Telekomunikasi
No. 1, Terusan Buah Batu, Bandung, Jawa Barat, Indonesia
Siamak Talatahari Marand Faculty of Engineering, University of Tabriz, Tabriz,
Iran

Tamás Varga Department of Process Engineering, University of Pannonia,
Veszprém, Hungary
Michael N. Vrahatis Department of Mathematics, University of Patras, Patras,
Greece
Sean Walton College of Engineering, Swansea University, Swansea, Wales, UK
Renbin Xiao Institute of Systems Engineering, Huazhong University of Science
and Technology, Wuhan, China
Xin-She Yang Department of Design Engineering and Mathematics, School of
Science and Technology, Middlesex University, The Burroughs, London, UK
Julian Scott Yeomans OMIS Area, Schulich School of Business, York University,
Toronto, ON, Canada
Hans-Jürgen Zepernick Blekinge Institute of Technology, Karlskrona, Sweden

Preface
Swarm intelligence and bio-inspired computation have become increasingly popular in the last two decades. Bio-inspired algorithms such as the ant colony algorithm, bat algorithm (BA), cuckoo search (CS), firefly algorithm (FA), and particle swarm optimization have been applied in almost every area of science and engineering, with a dramatic increase in the number of relevant publications. Metaheuristic algorithms form an important part of contemporary global optimization algorithms, computational intelligence, and soft computing.
New researchers often ask "why metaheuristics?", and this is indeed a profound question, which can be linked to many aspects of algorithms and optimization, including which algorithms to choose and why certain algorithms perform better than others for a given problem. It is believed that the word "metaheuristic" was coined by Fred Glover in 1986. Generally, "heuristic" means "to find or to discover by trial and error," while "meta-" means "beyond or higher level." Therefore, a metaheuristic can be considered a higher-level strategy that guides and modifies other heuristic procedures to produce solutions or innovations beyond those that are normally achievable in a quest for local optimality. In reality, we are often puzzled, and may even be surprised, by the excellent efficiency of bio-inspired metaheuristic algorithms, because these seemingly simple algorithms can sometimes work like "magic," even for highly nonlinear, challenging problems. For example, for multimodal optimization problems, many traditional algorithms usually do not work well, while new algorithms such as differential evolution (DE) and FA can work extremely well in practice, even though we may not fully understand the underlying mechanisms of these algorithms.
The increasing popularity of bio-inspired metaheuristics and swarm intelligence (SI) has attracted a great deal of attention in engineering and industry. There are many reasons for such popularity, and here we discuss three factors: simplicity, flexibility, and ergodicity. First, most bio-inspired algorithms are simple in the sense that they are easy to implement and their algorithmic complexity is relatively low; in most programming languages, the core algorithm can be coded within a hundred lines. Second, these algorithms, though simple, are flexible enough to deal with a wide range of optimization problems, including those that are not solvable by conventional algorithms. Third, bio-inspired algorithms such as FA and CS often have a high degree of ergodicity in the sense that they can search a multimodal landscape with sufficient diversity and the ability to escape any local optimum. This ergodicity is often due to some exotic randomization techniques, derived from natural systems in terms of crossover and mutation, or based on statistical models such as random walks and Lévy flights.

As most real-world problems are nonlinear and multimodal with uncertainty, such complexity and multimodality may imply that it is not possible to find the true global optimality with 100% certainty for a given problem. We often have to balance solution accuracy and computational cost, leading to a (possibly aggressive) local search method. Consequently, we may have to sacrifice the possibility of finding the true global optimality in exchange for some suboptimal, robust solutions. However, in practice, for the vast majority of cases, many bio-inspired algorithms can achieve the true global optimality within a practically acceptable fixed number of iterations, though there is no guarantee that this will be the case all the time.
The history of bio-inspired computation and SI has spanned over half a century, though developments have sped up in the last 20 years. Since the emergence of evolutionary strategies in the 1960s and the development of genetic algorithms (GA) in the 1970s, a golden age with major progress in modern bio-inspired computing was the 1990s. First, in 1992, Marco Dorigo described his innovative work on ant colony optimization (ACO) in his PhD thesis, and in the same year, J.R. Koza published a treatise on genetic programming. Then, in 1995, J. Kennedy and R. Eberhart developed particle swarm optimization (PSO), which essentially opened up a new field, now loosely named SI. Following this, in 1996 and 1997, R. Storn and K. Price published their work on DE. At the turn of the twenty-first century, Zong Woo Geem et al. developed harmony search in 2001. Around 2004 to 2005, bee algorithms emerged: S. Nakrani and C. Tovey proposed the honey bee algorithm in 2004, Xin-She Yang proposed the virtual bee algorithm in 2005, and D.T. Pham et al. developed their bee algorithms and D. Karaboga formulated the artificial bee colony, both in 2005. In 2008, Xin-She Yang developed the FA for multimodal optimization, and in 2009, Xin-She Yang and Suash Deb developed CS. In 2010, Xin-She Yang first developed the BA, and then Xin-She Yang and S. Deb developed the eagle strategy. More bio-inspired algorithms started to appear in 2012, including the krill herd algorithm (KHA) by A.H. Gandomi and A.H. Alavi, the flower pollination algorithm by Xin-She Yang, and the wolf search algorithm by Rui et al. As we can see, the literature has expanded dramatically in the last decade.
Accompanying the rapid developments in bio-inspired computing, another important question arises naturally: Can an algorithm be intelligent? The answer may depend on the definition of "intelligence" itself, which is also a matter of debate. Unless a true Turing test can be passed without any doubt, truly intelligent algorithms may still be a long way off. However, if we lower our expectations and define intelligence as "the ability to mimic some aspects of human intelligence," such as memory, automation, and the sharing of information, then many algorithms can have low-level intelligence to a certain degree. First, many bio-inspired algorithms use elitism and memory to select the best solution, or "survival of the fittest," and then share this information with other agents in a multiple-agent system. Algorithms such as artificial neural networks use connectionism, interactions, memory, and learning. Most SI-based algorithms use rule-based updates, and they can adjust their behavior according to the landscape (such as the best values and gradients) in the search space during iterations. To some extent, they can be called

"smart" algorithms. Obviously, truly intelligent algorithms are yet to appear in the future. Whatever forms such intelligent algorithms may take, they would be the holy grail of artificial intelligence and bio-inspired computation.
Despite the above recent advances, many challenging issues remain unresolved. First, there are some significant gaps between theory and practice concerning bio-inspired computing and optimization. From numerical experiments and applications, we know bio-inspired algorithms often work surprisingly well; however, we do not quite understand why they are so efficient. In fact, solid theoretical proofs of convergence are lacking for many bio-inspired algorithms, though the good news is that limited results have started to appear in the literature.
In addition, for most algorithms, we do not know exactly how parameters control or influence the performance of an algorithm. Consequently, a major challenge is the tuning of algorithm-dependent parameters so as to produce the optimal performance of an algorithm. In essence, parameter tuning is itself an optimization problem. At present, it is mainly carried out by trial and error and is thus very time consuming. In fact, parameter tuning is a very active research area which requires more research emphasis on both theory and extensive simulations.
On the other hand, even though we have seen a vast range of successful applications, most are still limited to small-scale problems with fewer than a few dozen, or at most a few hundred, design variables. It is very rare to see larger-scale applications. In reality, many optimization problems may be very large scale, but we are not sure how bio-inspired algorithms can deal with such problems. As most problems are nonlinear, scalability may also be an issue, and computational time can be a huge barrier for large-scale problems.
Obviously, there are other challenging issues such as performance measures, uncertainty, and comparison statistics. These challenges also provide golden opportunities for researchers to pursue further research in these exciting areas in the years to come.
This book strives to provide a timely snapshot of the state-of-the-art developments in bio-inspired computation and SI, capturing the fundamentals and applications of algorithms based on SI and other biological systems. In addition to reviewing and documenting the recent advances, this book analyzes and discusses the latest and future trends in research directions, so as to help new researchers carry out timely research and to inspire readers to develop new algorithms.
As the literature is vast and the research area is very broad, it is not possible to include even a good fraction of the current research. However, the contributions by leading experts still cover the latest developments in many active areas and applications. Topics include an overview and analysis of SI and bio-inspired algorithms, PSO, FA, memetic FA, discrete FA, BA, binary BA, GA, CS and modified CS, KHA, artificial plant optimization, a review of commonly used test functions, and labor division in ACO. Application topics include traveling salesman problems, feature selection, graph coloring, combinatorial optimization, music composition, mesh generation, semantic web services, generation of optimization alternatives, protein folding, berth allocation, data mining, structural optimization, inventory management, and others.

It can be expected that this edited book will serve as a source of inspiration for novel research and new applications. Perhaps, in the not very distant future, some truly intelligent, self-evolving algorithm may appear to solve a wide range of tough optimization problems more efficiently and more accurately.
Last but not least, we would like to thank our Editors, Dr Erin Hill-Parks, Sarah E. Lay, and Tracey Miller, and the staff at Elsevier for their help and professionalism.
Xin-She Yang, Zhihua Cui, Renbin Xiao,
Amir Hossein Gandomi and Mehmet Karamanoglu
February 2013

Part One
Theoretical Aspects of Swarm
Intelligence and Bio-Inspired
Computing


1 Swarm Intelligence and Bio-Inspired Computation: An Overview
Xin-She Yang and Mehmet Karamanoglu
Department of Design Engineering and Mathematics, School of Science and
Technology, Middlesex University, The Burroughs, London, UK
1.1 Introduction
Swarm intelligence (SI), and bio-inspired computation in general, has attracted great interest in the last two decades, and many SI-based optimization algorithms have gained huge popularity. There are many reasons for such popularity and attention; two main reasons are probably that these SI-based algorithms are flexible and versatile, and that they are very efficient in solving nonlinear design problems with real-world applications. Bio-inspired computation has permeated almost all areas of science, engineering, and industry, from data mining to optimization, from computational intelligence to business planning, and from bioinformatics to industrial applications. In fact, it is perhaps one of the most active and popular research subjects with wide multidisciplinary connections.
Even when considered from the narrow point of view of optimization, this is still a very broad area of research. Optimization is everywhere and is thus an important paradigm by itself with a wide range of applications. In almost all applications in engineering and industry, we are always trying to optimize something, whether to minimize cost and energy consumption or to maximize profit, output, performance, and efficiency. In reality, resources, time, and money are always limited; consequently, optimization is far more important in practice (Yang, 2010b; Yang and Koziel, 2011). The optimal use of available resources of any sort requires a paradigm shift in scientific thinking; this is because most real-world applications have far more complicated factors and parameters affecting how the system behaves and, thus, how the optimal design needs are to be met.
A proper formulation of optimization is an art, which requires detailed knowledge of the problem and extensive experience. Optimization problems can be formulated in many ways. For example, the commonly used method of least squares
is a special case of maximum-likelihood formulations. By far, the most widely used formulation is to write a nonlinear optimization problem as

    minimize f_i(x), (i = 1, 2, ..., M)    (1.1)

subject to the following nonlinear constraints:

    h_j(x) = 0, (j = 1, 2, ..., J)    (1.2)

    g_k(x) ≤ 0, (k = 1, 2, ..., K)    (1.3)

where f_i, h_j, and g_k are in general nonlinear functions, or even integrals and/or differential equations. Here, the design vector x = (x_1, x_2, ..., x_d) can be continuous, discrete, or mixed in d-dimensional space. The functions f_i are called objectives, or cost functions, and when M > 1, the optimization is multiobjective or multicriteria (Yang, 2008, 2010b). It is worth pointing out that here we write the problem as a minimization problem; it can also be written as a maximization problem by simply replacing f_i with −f_i. When all functions are nonlinear, we are dealing with nonlinear constrained problems. In some special cases when all functions are linear, the problem becomes linear, and we can use the widely used linear programming techniques such as the simplex method. When some design variables can only take discrete values (often integers), while other variables are real and continuous, the problem is of mixed type, which is often difficult to solve, especially for large-scale optimization problems. A very special class of optimization is convex optimization, which has guaranteed global optimality: any optimal solution to a convex problem is also its global optimum, and, most importantly, there are efficient polynomial-time algorithms to solve such problems. These efficient algorithms, such as interior-point methods, are widely used and have been implemented in many software packages.
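To make the generic formulation concrete, the following sketch (not from the original text) encodes a single-objective instance of Eqs. (1.1)-(1.3) using SciPy's minimize; the toy objective, the constraints, and the starting point are all our own illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def f(x):                          # objective f(x), Eq. (1.1) with M = 1
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def h(x):                          # equality constraint h(x) = 0, Eq. (1.2)
    return x[0] + x[1] - 2.0

def g(x):                          # inequality constraint g(x) <= 0, Eq. (1.3)
    return x[0] ** 2 - x[1]

constraints = [
    {"type": "eq", "fun": h},
    {"type": "ineq", "fun": lambda x: -g(x)},  # SciPy expects fun(x) >= 0
]

res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP",
               constraints=constraints)
print(res.x, res.fun)              # roughly [0.5, 1.5] and 0.5

Note that SciPy's convention for inequality constraints is fun(x) ≥ 0, so g_k(x) ≤ 0 must be passed as −g_k(x) ≥ 0.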
Despite the fact that the above optimization problem looks seemingly simple, it is usually very challenging to solve. There are many challenging issues; two major ones are nonlinearity and complex constraints. Nonlinearity in objective functions makes the cost landscape highly multimodal and potentially nonsmooth, while nonlinearity in constraints complicates the search boundaries and search domains, which may be disconnected. Therefore, the evaluation of objectives and the handling of constraints can be time consuming. In addition, not all problems can be written in terms of explicit objective functions. Sometimes, objectives such as energy efficiency can have a complex, implicit dependence on the design variables. In this case, we may have to deal with optimization problems with black-box-type objectives whose evaluations are carried out by some external, finite-element simulators. Simulations are often the most time-consuming part. In many applications, an optimization process often involves the evaluation of objective functions many times, often for thousands and even millions of configurations. Such evaluations often involve the use of extensive computational tools such as a computational fluid dynamics simulator or a finite element solver. Therefore, an efficient optimization algorithm in combination with an efficient solver is extremely important.
Furthermore, even when variables take only integer values such as 0 and 1, such combinatorial optimization problems can still be nondeterministic polynomial-time (NP) hard, and no efficient (polynomial-time) algorithm is known for such problems. Specific knowledge about the problem of interest can provide good insights, but in many cases, heuristic approaches have to be used by trial and error. That is probably another reason for the popularity of heuristic and metaheuristic algorithms.
1.2 Current Issues in Bio-Inspired Computing
Despite the popularity and success of SI and bio-inspired computing, there remain many challenging issues. Here we highlight five: gaps between theory and practice, classifications, parameter tuning, the lack of truly large-scale real-world applications, and the choice of algorithms.
1.2.1 Gaps Between Theory and Practice
There is a significant gap between theory and practice in bio-inspired computing. Nature-inspired metaheuristic algorithms work almost magically in practice, but it is not well understood why these algorithms work. For example, except for a few cases such as genetic algorithms, simulated annealing, and particle swarm optimization, there are not many good results concerning the convergence analysis and stability of metaheuristic algorithms. The lack of theoretical understanding may lead to slow progress or even resistance to the wider application of metaheuristics.
There are three main methods for the theoretical analysis of algorithms: complexity theory, dynamical systems, and Markov chains. On the one hand, metaheuristic algorithms tend to have low algorithmic complexity, but they can solve highly complex problems. Complexity analysis is an active research area and requires more in-depth analysis (Hopper and Turton, 2000; Yang, 2011c). On the other hand, convergence analysis typically uses dynamical systems and statistical methods based on Markov chains. For example, particle swarm optimization was analyzed by Clerc and Kennedy (2002) using simple dynamical systems, whereas genetic algorithms were analyzed intensively in a few theoretical studies (Aytug et al., 1996; Greenhalgh and Marshal, 2000; Gutjahr, 2010; Villalobos-Arias et al., 2005). For example, for a given mutation rate (μ), string length (L), and population size (n), the number of iterations in genetic algorithms can be estimated by

    t ≤ ⌈ ln(1 − p) / ln(1 − min[(1 − μ)^{Ln}, μ^{Ln}]) ⌉    (1.4)

where ⌈u⌉ means taking the smallest integer not less than u, and p is a function of μ, L, and n (Gutjahr, 2010; Villalobos-Arias et al., 2005).
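As a quick numerical illustration (ours, not the book's), the bound (1.4) can be evaluated directly; the parameter values below are arbitrary assumptions, and the second case shows how conservative the bound becomes for realistic mutation rates.

import math

def ga_iteration_bound(mu, L, n, p):
    # Upper bound (1.4) on the number of GA iterations needed to reach the
    # optimum with probability p. log1p keeps the denominator accurate when
    # min[(1 - mu)**(L*n), mu**(L*n)] underflows toward zero.
    q = min((1.0 - mu) ** (L * n), mu ** (L * n))
    return math.ceil(math.log(1.0 - p) / math.log1p(-q))

print(ga_iteration_bound(mu=0.5, L=4, n=2, p=0.95))    # small toy case: 766
print(ga_iteration_bound(mu=0.05, L=8, n=10, p=0.95))  # astronomically large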
Theoretical understanding lags behind, and thus there is a strong need for further studies in this area. There is no doubt that any new understanding will provide greater insight into the working mechanisms of metaheuristic algorithms.
1.2.2 Classifications and Terminology
There are many ways to classify optimization algorithms; one of the most widely used is based on the number of agents, and another is based on the iteration procedure. The former leads to two categories: single agent and multiple agents. Simulated annealing is a single-agent algorithm with a zigzag piecewise trajectory, whereas genetic algorithms, particle swarm optimization, and the firefly algorithm are population-based algorithms. These algorithms often have multiple agents interacting in a nonlinear manner, and a subset of them is called SI-based algorithms. For example, particle swarm optimization and the firefly algorithm are swarm based, inspired by the swarming behavior of birds, fish, and fireflies, or by SI in general.
Another way of classifying algorithms is based on the core procedure of an algorithm. If the procedure is fixed without any randomness, an algorithm that starts from a given initial value will reach the same final value, no matter when you run it. We call such an algorithm deterministic. For example, the classic Newton–Raphson method is a deterministic algorithm, and so is the hill-climbing method. On the other hand, if an algorithm contains some randomness in its procedure, it is called stochastic, evolutionary, heuristic, or even metaheuristic. For example, genetic algorithms with mutation and crossover components can be called evolutionary algorithms, stochastic algorithms, or metaheuristic algorithms.
These different names for algorithms with stochastic components reflect the fact that there is still some confusion in the terminology used in the current literature. Algorithms such as genetic algorithms that were developed before the 1980s were called evolutionary algorithms; now, they can be called both evolutionary-based and metaheuristic. Depending on the sources of inspiration, there are bio-inspired algorithms, nature-inspired algorithms, and metaheuristics in general. However, recent trends call such algorithms "metaheuristic."
Briefly speaking, heuristic means "by trial and error," and metaheuristic can be considered a higher-level method that uses certain selection mechanisms and information sharing. It is believed that the word "metaheuristic" was coined by Glover (1986). The multiple names and inconsistent terminology in the literature require efforts from research communities to agree on some common terminology and to systematically classify and analyze algorithms. The current dramatic expansion of the literature makes this an even more challenging task.
In this book, we have tried to use metaheuristic, SI, and bio-inspired computation in the right context, with a focus on metaheuristics.
It is worth pointing out that, from the mobility point of view, algorithms can be classified as local or global. Local search algorithms typically converge toward a local optimum, not necessarily (and often not) the global optimum, and such algorithms are often deterministic and have no ability to escape local optima. The simple hill-climbing algorithm is an example. On the other hand, we always try to find the global optimum for a given problem, and if this global optimality is robust, it is often the best, though it is not always possible to find it. For global optimization, local search algorithms are not suitable, and we have to use a global search algorithm. In most cases, modern metaheuristic algorithms are intended for global optimization, though they are not always successful or efficient.
1.2.3 Tuning of Algorithm-Dependent Parameters
All metaheuristic algorithms have algorithm-dependent parameters, and the appropriate setting of these parameter values will largely affect the performance of an algorithm. One very challenging issue is deciding what values of parameters to use in an algorithm. How can these parameters be tuned so that they maximize the performance of the algorithm of interest?
Parameter tuning itself is a tough optimization problem. In the literature, there are two main approaches. One approach is to run an algorithm with some trial values of key parameters, where the aim of the test runs is to find good settings of these parameters; these parameters are then fixed for more extensive runs involving the same type of problems or larger problems. The other approach is to use one algorithm (which may be well tested and well established) to tune the parameters of another, relatively new algorithm. Then an important issue arises: if we use algorithm A (or tool A) to tune algorithm B, what tool or algorithm should be used to tune algorithm A? If we use, say, algorithm C to tune algorithm A, then what tool should be used to tune algorithm C? In fact, these key issues are still under active research.
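The first approach can be sketched as follows; this is our own minimal illustration, with run_algorithm a hypothetical stand-in for any stochastic optimizer and the trial values chosen arbitrarily.

import random

def run_algorithm(param, seed):
    # Hypothetical optimizer; smaller return value is better. The toy
    # stand-in below makes performance depend on the parameter plus noise.
    rng = random.Random(seed)
    return (param - 0.7) ** 2 + rng.gauss(0.0, 0.05)

def tune(trial_values, runs=20):
    # Average repeated runs for each trial setting, then keep the best.
    scores = {}
    for p in trial_values:
        results = [run_algorithm(p, seed) for seed in range(runs)]
        scores[p] = sum(results) / runs
    return min(scores, key=scores.get)

best = tune([0.1, 0.3, 0.5, 0.7, 0.9])
print("selected parameter:", best)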
1.2.4 Necessity for Large-Scale and Real-World Applications
SI and bio-inspired computation have been very successful in solving many practical problems. However, the size of these problems, in terms of the number of design variables, is relatively small or moderate. In the current literature, studies have focused on design problems with about a dozen variables, or at most about a hundred; it is rare to see studies with several hundred variables. In contrast, in linear programming, it is routine to solve design problems with half a million to several million design variables. Therefore, it remains a huge challenge for SI-based algorithms to be applied to real-world, large-scale problems.
Accompanying this challenge is a methodology issue: nobody is sure whether we can directly apply the same methods that work well for small, toy problems to large-scale problems. Apart from the difference in size, there may be other issues, such as memory capacity, computational efficiency, and computing resources, needing special care. If we cannot extend existing methods to deal with large-scale problems effectively, and often we cannot, then what are the options? After all, real-world problems are typically nonlinear and often very large scale. Further and detailed studies are highly needed in this area.

1.2.5 Choice of Algorithms
Even with all the knowledge and all the books written on optimization and algorithms, most readers are still not sure what algorithms to choose. It is similar to visiting a shopping mall to choose a certain product: there are often so many different choices, and making the right choice is again an optimization problem. In the literature, there is no agreed guideline for choosing algorithms, though there are specific instructions on how to use a specific algorithm and what types of problems it can solve. Therefore, the issue of choice remains: partly experience based and partly by trial and error.
Sometimes, even with the best possible intentions, the availability of an algorithm and the expertise of the decision makers are the ultimate defining factors for choosing an algorithm. Even though some algorithm may be better for the given problem at hand, we may not have that algorithm implemented in our system, or we may not have access to it, which limits our choices. For example, Newton's method, hill climbing, the Nelder–Mead downhill simplex, trust-region methods (Conn et al., 2000), and interior-point methods are implemented in many software packages, which may also increase their popularity in applications. In practice, even with the best possible algorithms and well-crafted implementations, we may still not get the desired solutions. This is the nature of nonlinear global optimization, as most such problems are NP-hard, and no efficient algorithm (in the polynomial-time sense) exists for a given problem. Thus, the challenge of research in computational optimization and applications is to find the right algorithms most suitable for a given problem so as to obtain good solutions, hopefully also the globally best solutions, in a reasonable timescale with a limited amount of resources.
1.3 Search for the Magic Formulas for Optimization
1.3.1 Essence of an Algorithm
Mathematically speaking, an algorithm is a procedure to generate outputs for given inputs. From the optimization point of view, an optimization algorithm generates a new solution x_{t+1} to a given problem from a known solution x_t at iteration or time t. That is,

    x_{t+1} = A(x_t, p(t))    (1.5)

where A is a nonlinear mapping from a given solution, a d-dimensional vector x_t, to a new solution vector x_{t+1}. The algorithm A has k algorithm-dependent parameters p(t) = (p_1, ..., p_k), which can be time dependent and can thus be tuned if necessary.
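As a concrete (and purely illustrative) instance of Eq. (1.5), the sketch below takes A to be a gradient-descent step on f(x) = x^2, with a time-dependent parameter p(t); the step-size schedule is an arbitrary assumption, not from the book.

def A(x, p):
    # One update step of x_{t+1} = A(x_t, p(t)); here A is a gradient-descent
    # step on f(x) = x**2, so grad = f'(x) = 2x.
    grad = 2.0 * x
    return x - p * grad

x = 5.0                           # initial guess x_0
for t in range(50):
    p_t = 0.1 / (1.0 + 0.01 * t)  # assumed time-dependent parameter p(t)
    x = A(x, p_t)
print(x)                          # approaches the minimizer x* = 0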
1.3.2 What Is an Ideal Algorithm?
In an ideal world, we hope to start from any initial guess solution and to get the best solution in a single step; that is, to use minimal computational effort. In other words, this is essentially saying that the algorithm simply has to tell us what the best answer is to any given problem in a single step! You may wonder whether such an algorithm exists. In fact, the answer is yes, but only for a very specific type of problem: quadratic convex problems.
We know the Newton–Raphson method is a root-finding algorithm: it can find the roots of f(x) = 0. As the minimum or maximum of a function f(x) has to satisfy the critical condition f′(x) = 0, this optimization problem becomes a problem of finding the roots of f′(x). The Newton–Raphson method provides the following iteration formula:

    x_{i+1} = x_i − f′(x_i) / f″(x_i)    (1.6)

For a quadratic function, say, f(x) = x², if we start from a fixed location, say, x_0 = a at i = 0, we have f′(a) = 2a and f″(a) = 2. Then we get

    x_1 = x_0 − f′(x_0) / f″(x_0) = a − 2a/2 = 0

which is exactly the optimal solution f_min = 0 at x = 0, which is also globally optimal. We have found the global optimum in one step. In fact, for quadratic functions that are also convex, Newton–Raphson is an ideal algorithm. However, the world is neither convex nor quadratic; real-world problems are often highly nonlinear, and there is no ideal algorithm.
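A minimal sketch of iteration (1.6) makes the one-step behavior on quadratics easy to verify; the non-quadratic test function is our own illustrative choice, not from the book.

def newton_raphson(fprime, fsecond, x0, steps=10):
    # Iterate Eq. (1.6): x_{i+1} = x_i - f'(x_i) / f''(x_i).
    x = x0
    for _ in range(steps):
        x = x - fprime(x) / fsecond(x)
    return x

# Quadratic case f(x) = x**2: f'(x) = 2x, f''(x) = 2; one step suffices.
print(newton_raphson(lambda x: 2 * x, lambda x: 2.0, x0=5.0, steps=1))  # 0.0

# Non-quadratic case f(x) = x**4 - 3*x**2: several steps are needed, and the
# iteration converges to the nearby local minimizer x = sqrt(1.5) ~ 1.2247.
print(newton_raphson(lambda x: 4 * x**3 - 6 * x,
                     lambda x: 12 * x**2 - 6, x0=2.0))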
For NP-hard problems, there is no known efficient algorithm at all. Such hard problems require a huge amount of research effort in the search for specific techniques, which are still not satisfactory in practice. These challenges can also be a driving force for active research.
1.3.3 Algorithms and Self-Organization
Self-organization exists in many systems, from physical and chemical to biological and artificial systems. Emergent phenomena such as Rayleigh–Bénard convection, Turing pattern formation, and organisms and thunderstorms can all be called self-organization (Ashby, 1962; Keller, 2009). Although there is no universal theory for self-organizing processes, some aspects of self-organization can partly be understood using theories based on nonlinear dynamical systems, far-from-equilibrium multiple interacting agents, and closed systems under unchanging laws (Prigogine and Nicolis, 1967). As pointed out by the cyberneticist and mathematician Ashby (1962), every isolated, determinate dynamic system obeying unchanging laws will ultimately develop some sort of "organisms" that are adapted to their "environments." For simple systems, going to equilibrium is trivial; however, for a complex system, if its size is so large that its equilibrium states are just a fraction of the vast number of possible states, and if the system is allowed to evolve long enough, some self-organized structures may emerge. Changes in the environment can apply pressure on the system to reorganize and adapt to such changes. If the system has sufficient perturbations or noise, often working at the edge of chaos, some spontaneous formation of structures will emerge as the system moves far from equilibrium and selects some states, thus reducing the uncertainty or entropy.
The state set S of a complex system such as a machine may change from initial states S(ψ) to other states S(φ), subject to the change of a parameter set α(t), which can be time dependent. That is,

    S(ψ) --α(t)--> S(φ)    (1.7)

where α(t) must come from external conditions, such as the heat flow in Rayleigh–Bénard convection, not from the states S themselves. Obviously, S + α(t) can be considered as a larger, closed system (Ashby, 1962; Keller, 2009). In this sense, self-organization is equivalent to a mapping from some high-entropy states to low-entropy states.
An optimization algorithm can be viewed as a complex, dynamical system. If we can consider the convergence process as a self-organizing process, then there are strong similarities and links between self-organizing systems and optimization algorithms. First, let us discuss the essence of an optimization algorithm.
1.3.4 Links Between Algorithms and Self-Organization
To find the optimal solution x* to a given optimization problem S, often with an infinite number of states, is to select some desired states φ from all states ψ, according to some predefined criterion D. We have

    S(ψ) --A(t)--> S(φ(x*))    (1.8)

where the final converged state φ corresponds to an optimal solution x* of the problem of interest. The selection of the system states in the design space is carried out by running the optimization algorithm A. The behavior of the algorithm is controlled by the parameters p(t), the initial solution x_{t=0}, and the stopping criterion D. We can view the combined S + A(t) as a complex system with a self-organizing capability.
The change of states, or of solutions to the problem of interest, is controlled by the algorithm A. In many classical algorithms such as hill climbing, gradient information is often used to select states, say, the minimum value of the landscape, and the stopping criterion can be a given tolerance, accuracy, or zero gradient. Alternatively, an algorithm can act like a tool to tune a complex system. If an algorithm does not use any state information of the problem, then the algorithm is more likely to be versatile in dealing with many types of problems. However, such black-box approaches can also imply that the algorithm may not be as efficient as it could be for a given type of problem. For example, if the optimization problem is convex, algorithms that use such convexity information will be more efficient than ones that do not. In order to select states/solutions efficiently, the information from the search process should be used to enhance the search process. In many cases, such information is fed into the selection mechanism of an algorithm. By far, the most widely used selection mechanism is to select or keep the best solution found so far; that is, some form of "survival of the fittest" is used.
From the schematic representation [see Eq. (1.8)] of an optimization process, we can see that the performance of an algorithm may depend on the type of problem S it solves. Whether the final global optimality is achievable or not (within a given number of iterations) will also depend on the algorithm used. This may be another way of stating the so-called no-free-lunch theorems.
Optimization algorithms can be very diverse, with several dozen widely used algorithms. The main characteristics of different algorithms depend only on the actual, often highly nonlinear or implicit, forms of A(t) and their parameters p(t).
1.3.5 The Magic Formulas
The ultimate aim is to find a magic formula or method that works for many pro-
blems, such as the Newton /C0Raphson method for quadratic functions. We wish it
could work like a magic to provide the best solution for any problem in a few steps.However, such formulas may never exist.
As optimization algorithms are iterative, an algorithm to solve a given problem
Pcan be written as the following generic formula:
x
t115gðxt;α;PȚð 1:9Ț
which forms a piecewise trajectory in the search space. This algorithm depends on
a parameter α, starting with an initial guess x0. The iterative path will depend on
the problem ( P) or its objective function f(x). However, as algorithms nowadays
tend to use multiple agents, Eq. (1.9) can be extended to
½x1;x2;…;xn/C138t115gð½x1;…;xn/C138t;½α1;…;αk/C138t;PȚð 1:10Ț
which has a population size of $n$ and depends on $k$ different algorithm-dependent parameters. Each iteration will produce $n$ different solutions $[x_1, \ldots, x_n]$. Modern metaheuristic algorithms have stochastic components, which means some of these $k$ parameters can be drawn from some probability distributions. If we wish to express the randomness more explicitly, we can rewrite Eq. (1.10) as
$$[x_1, x_2, \ldots, x_n]_{t+1} = g([x_1, \ldots, x_n]_t, [\alpha_1, \ldots, \alpha_k]_t, [\varepsilon_1, \ldots, \varepsilon_m]_t, P) \qquad (1.11)$$
with $m$ random variables ε that are often drawn from uniform or Gaussian distributions. In some cases, as in cuckoo search, these random variables can also be drawn from a Lévy distribution (Yang and Deb, 2009).

Although there is no magic formula, each algorithm strives to use as few iterations $t$ as possible. The only difference among algorithms is the exact form of $g(\cdot)$. In fact, sometimes the procedure $g(\cdot)$ can be divided into many substeps or procedures with different branches so that these branches can be used in a random manner during iterations. This is the essence of all contemporary SI and bio-inspired metaheuristic algorithms.
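To make this generic structure concrete, the following Python sketch fixes the population iteration of Eq. (1.11) and treats the map $g$ as a plug-in. All function names, parameter values, and the sample update rule below are illustrative assumptions for this overview, not prescriptions from the chapter.

```python
import numpy as np

def run_metaheuristic(f, g, n, dim, bounds, alpha, n_iter=100, seed=0):
    # Iterate Eq. (1.11): pop_{t+1} = g(pop_t, alpha, eps; P), where the
    # problem P enters through its objective f. The map g is algorithm
    # specific; this scaffold only fixes the iteration structure.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (n, dim))           # initial population
    for _ in range(n_iter):
        eps = rng.standard_normal((n, dim))       # random variables eps_1..eps_m
        pop = np.clip(g(pop, alpha, eps, f), lo, hi)
    fitness = np.apply_along_axis(f, 1, pop)
    return pop[np.argmin(fitness)], fitness.min()

def toward_best(pop, alpha, eps, f):
    # One possible g: move every agent toward the current best, plus noise.
    best = pop[np.argmin(np.apply_along_axis(f, 1, pop))]
    return pop + alpha * (best - pop) + 0.1 * eps

# Example: minimize the sphere function in 5 dimensions
x_best, f_best = run_metaheuristic(lambda x: float(np.sum(x**2)),
                                   toward_best, n=20, dim=5,
                                   bounds=(-5.0, 5.0), alpha=0.5)
```

Swapping in a different `g` (and a different way of drawing the ε values) is precisely what distinguishes one metaheuristic from another in this framework.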
1.4 Characteristics of Metaheuristics
1.4.1 Intensification and Diversification
Intensification and diversification are two key components of any metaheuristic algorithm (Blum and Roli, 2003; Yang, 2008). Intensification, also called exploitation, uses the local information in the search process to generate better solutions. Such local information can be the derivative of the objective or the variations of the cost landscape. Diversification, also called exploration, intends to explore the search space more thoroughly and to help generate diverse solutions. Too much intensification makes the optimization process converge quickly, but it may lead to premature convergence, often to a local optimum, or even a wrong solution; it also reduces the probability of finding the true global optimum. On the other hand, too much diversification increases the probability of finding the true global optimum, but it often slows down the process with a much lower convergence rate. Therefore, there is a fine balance or trade-off between intensification and diversification, or between exploitation and exploration.
Furthermore, exploitation and exploration alone are not enough. During the search, we have to use a proper mechanism or criterion to select the best solutions. The most common criterion is survival of the fittest, i.e., to keep updating the current best solution found so far. In addition, certain elitism is often used to ensure that the best or fittest solutions are not lost and are passed on to the next generations (Fogel et al., 1966; Goldberg, 1989; Holland, 1975).
For any algorithm to be efficient, it must somehow provide a mechanism to balance the aforementioned two key components properly. It is worth pointing out that a naïve 50–50 balance is not optimal (Yang, 2011c; Yang and He, 2013). More research in this area is highly needed.
1.4.2 Randomization Techniques
On analyzing bio-inspired algorithms in more detail, we can single out the type of randomness that a particular algorithm employs. For example, the simplest and yet often very efficient method is to introduce a random starting point for a deterministic algorithm. The well-known hill climbing with random restart is a good example. This simple strategy is both efficient, in most cases, and easy to implement in practice.
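As a minimal sketch of this strategy, the routine below restarts a simple stochastic hill climber from several random points and keeps the best result. The function name, step size, and restart counts are illustrative assumptions, not a canonical implementation.

```python
import numpy as np

def hill_climb_random_restart(f, dim, bounds, n_restarts=20, n_steps=200,
                              step=0.1, seed=0):
    # Minimize f by repeated local hill climbing, each run started from a
    # fresh random point; only the best run's result is returned.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    best_x, best_f = None, np.inf
    for _ in range(n_restarts):
        x = rng.uniform(lo, hi, dim)              # random starting point
        fx = f(x)
        for _ in range(n_steps):
            cand = np.clip(x + step * rng.standard_normal(dim), lo, hi)
            fc = f(cand)
            if fc < fx:                           # accept improvements only
                x, fx = cand, fc
        if fx < best_f:                           # keep the best run so far
            best_x, best_f = x, fx
    return best_x, best_f

# Example: a multimodal test function, where restarts pay off
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
x_best, f_best = hill_climb_random_restart(rastrigin, dim=3, bounds=(-5.12, 5.12))
```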

A more elaborate way to introduce randomness into an algorithm is to use randomness inside different components of the algorithm; various probability distributions such as uniform, Gaussian, and Lévy distributions can be used for randomization (Talbi, 2009; Yang, 2008, 2010b). In essence, randomization is an
efficient component for global search algorithms.
Obviously, the best way to provide sufficient randomness without slowing down the convergence of an algorithm remains an open question. In fact, metaheuristic algorithms form a hot research topic, with new algorithms appearing almost yearly and new techniques being explored (Yang, 2008, 2010b).
1.5 Swarm-Intelligence-Based Algorithms
Metaheuristic algorithms are often nature-inspired, and they are now among the most widely used algorithms for optimization. They have many advantages over conventional algorithms, as we can see from many case studies presented in the later chapters of this book. A few recent books are solely dedicated to metaheuristic algorithms (Talbi, 2009; Yang, 2008, 2010a,b). Metaheuristic algorithms are very diverse, including genetic algorithms, simulated annealing, differential evolution, ant and bee algorithms, particle swarm optimization, harmony search, firefly algorithm, and cuckoo search. Here we introduce some of these algorithms briefly, especially those based on SI.
1.5.1 Ant Algorithms
Ant algorithms, especially ant colony optimization (Dorigo and Stützle, 2004), mimic the foraging behavior of social ants. Ants primarily use pheromone as a chemical messenger, and the pheromone concentration can be considered an indicator of the quality of solutions to a problem of interest. As the solution is often linked with the pheromone concentrations, the search algorithms often produce routes and paths marked by the higher pheromone concentrations; therefore, ant-based algorithms are particularly suitable for discrete optimization problems.
The movement of an ant is controlled by pheromone, which will evaporate over
time. Without such time-dependent evaporation, ant algorithms would converge prematurely to local (often wrong) solutions. With proper pheromone
evaporation, they usually behave very well.
There are two important issues here: the probability of choosing a route and the evaporation rate of pheromone. There are a few ways of solving these problems, though it is still an area of active research. For a network routing problem, the probability of an ant at a particular node $i$ choosing the route from node $i$ to node $j$ is given by

$$p_{ij} = \frac{\phi_{ij}^{\alpha}\, d_{ij}^{\beta}}{\sum_{i,j=1}^{n} \phi_{ij}^{\alpha}\, d_{ij}^{\beta}} \qquad (1.12)$$

where α > 0 and β > 0 are the influence parameters, with typical values α ≈ β ≈ 2. Here, $\phi_{ij}$ is the pheromone concentration of the route between $i$ and $j$, and $d_{ij}$ the desirability of the same route. Some a priori knowledge about the route, such as the distance $s_{ij}$, is often used so that $d_{ij} \propto 1/s_{ij}$, which implies that shorter routes will be selected because of their shorter traveling time, and thus the pheromone concentrations on these routes are higher. This is because the traveling time is shorter, and thus less pheromone has evaporated during this period.
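As a hedged illustration of Eq. (1.12), the sketch below computes the route-choice probabilities for a single node and samples the next move; the array names and example values are assumptions for demonstration only.

```python
import numpy as np

def route_probabilities(pheromone, distance, alpha=2.0, beta=2.0):
    # Eq. (1.12) for one node: pheromone[j] plays phi_ij and distance[j]
    # plays s_ij for each candidate next node j. Desirability d_ij = 1/s_ij,
    # so shorter routes receive higher weight.
    desirability = 1.0 / np.asarray(distance, dtype=float)
    weights = np.asarray(pheromone, dtype=float) ** alpha * desirability ** beta
    return weights / weights.sum()

# Example: three candidate routes; the short, pheromone-rich route dominates
p = route_probabilities(pheromone=[0.8, 0.5, 0.2], distance=[1.0, 2.0, 4.0])
next_node = np.random.default_rng().choice(len(p), p=p)
```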
1.5.2 Bee Algorithms
Bee-inspired algorithms are more diverse; some use pheromone, but most do not. Almost all bee algorithms are inspired by the foraging behavior of honeybees in nature. Interesting characteristics such as the waggle dance, polarization, and nectar maximization are often used to simulate the allocation of forager bees along flower patches, and thus different search regions in the search space. For a more comprehensive review, refer to Yang (2010a) and Parpinelli and Lopes (2011).
Different variants of bee algorithms use slightly different characteristics of the
behavior of bees. For example, in the honeybee-based algorithms, forager bees are
allocated to different food sources (or flower patches) to maximize the total nectar
intake ( Karaboga, 2005; Nakrani and Tovey, 2004; Pham et al., 2006; Yang, 2005 ).
In the virtual bee algorithm (VBA), developed by Yang (2005), pheromone concentrations can be linked with the objective functions more directly. On the other hand, the artificial bee colony (ABC) optimization algorithm was first developed by Karaboga (2005). In the ABC algorithm, the bees in a colony are divided into three groups: employed bees (forager bees), onlooker bees (observer bees), and scouts. Unlike the honeybee algorithm, which has two groups of bees (forager bees and observer bees), the bees in ABC are more specialized (Afshar et al., 2007; Karaboga, 2005).
Similar to ant-based algorithms, bee algorithms are also very flexible in dealing with discrete optimization problems. Combinatorial optimization problems such as routing and finding optimal paths have been solved successfully by ant and bee algorithms. In principle, they can solve both continuous and discrete optimization problems; however, they should not be the first choice for continuous problems.
1.5.3 Bat Algorithm
Bat algorithm is a relatively new metaheuristic, developed by Yang (2010c) . It was
inspired by the echolocation behavior of microbats. Microbats use a type of sonar, called echolocation, to detect prey, avoid obstacles, and locate their roosting crevices in the dark. These bats emit a very loud sound pulse and listen for the echoes that bounce back from the surrounding objects. Depending on the species, their pulses vary in properties and can be correlated with their hunting strategies. Most bats use short, frequency-modulated signals to sweep through about an octave, while others more often use constant-frequency signals for echolocation.

Their signal bandwidth varies depending on the species and is often increased by
using more harmonics.
The bat algorithm uses three idealized rules: (i) All bats use echolocation to sense distance, and they also "know" the difference between food/prey and background barriers in some magical way. (ii) A bat roams randomly with a velocity $v_i$ at position $x_i$ with a fixed frequency range $[f_{\min}, f_{\max}]$, varying its emission rate $r \in [0, 1]$ and loudness $A_0$ to search for prey, depending on the proximity of its target. (iii) Although the loudness can vary in many ways, we assume that it varies from a large (positive) $A_0$ to a minimum constant value $A_{\min}$. These rules can be translated into the following formulas:
$$f_i = f_{\min} + (f_{\max} - f_{\min})\,\varepsilon, \qquad v_i^{t+1} = v_i^t + (x_i^t - x_*)\,f_i, \qquad x_i^{t+1} = x_i^t + v_i^t \qquad (1.13)$$
where ε is a random number drawn from a uniform distribution and $x_*$ is the current best solution found so far during iterations. The loudness and pulse rate can vary with iteration $t$ in the following way:
$$A_i^{t+1} = \alpha A_i^t, \qquad r_i^t = r_i^0\,[1 - \exp(-\beta t)] \qquad (1.14)$$
Here α and β are constants. In fact, α is similar to the cooling factor of a cooling schedule in simulated annealing, which is discussed later. In the simplest case, we can use α = β, and we have in fact used α = β = 0.9 in most simulations.
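The following minimal Python sketch translates Eqs. (1.13) and (1.14) directly. The default frequency range and the α = β = 0.9 values follow the text above, while the helper names are our own illustrative assumptions.

```python
import numpy as np

def bat_move(x, v, x_best, rng, fmin=0.0, fmax=2.0):
    # Eq. (1.13): draw a frequency f_i with a uniform epsilon, update the
    # velocity toward the current best x_*, then update the position.
    f = fmin + (fmax - fmin) * rng.random()
    v_new = v + (x - x_best) * f
    return x + v_new, v_new

def loudness_pulse(A, r0, t, alpha=0.9, beta=0.9):
    # Eq. (1.14): loudness decays geometrically while the pulse emission
    # rate rises toward its initial value r0 as iterations proceed.
    return alpha * A, r0 * (1.0 - np.exp(-beta * t))

# Example: one bat taking one step, then its schedule values at t = 10
rng = np.random.default_rng(0)
x, v = rng.uniform(-5, 5, 4), np.zeros(4)
x, v = bat_move(x, v, x_best=np.zeros(4), rng=rng)
A, r = loudness_pulse(A=1.0, r0=0.5, t=10)
```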
The bat algorithm has been extended to the multiobjective bat algorithm (MOBA) by
Yang (2011a) , and preliminary results suggested that it is very efficient ( Yang and
Gandomi, 2012 ). A few other important applications of bat algorithm can be found
in the other chapters of this book.
1.5.4 Particle Swarm Optimization
Particle swarm optimization (PSO) was developed by Kennedy and Eberhart
(1995) based on swarm behavior such as fish and bird schooling in nature. Since then, PSO has generated much wider interest and forms an exciting, ever-expanding research subject called swarm intelligence. This algorithm searches the space of an objective function by adjusting the trajectories of individual agents, called particles, as the piecewise paths formed by positional vectors in a quasi-stochastic manner.
The movement of a swarming particle consists of two major components: a stochastic component and a deterministic component. Each particle is attracted toward the position of the current global best $g_*$ and its own best location $x_i^*$ in history, while at the same time it has a tendency to move randomly. Let $x_i$ and $v_i$ be the position vector and velocity of particle $i$, respectively. The new velocity vector is determined by the following formula:
$$v_i^{t+1} = v_i^t + \alpha \varepsilon_1 [g_* - x_i^t] + \beta \varepsilon_2 [x_i^* - x_i^t] \qquad (1.15)$$

where $\varepsilon_1$ and $\varepsilon_2$ are two random vectors, each entry taking values between 0 and 1. The parameters α and β are the learning parameters or acceleration constants, which can typically be taken as, say, α ≈ β ≈ 2.
The initial locations of all particles should be distributed relatively uniformly so that they can sample over most regions, which is especially important for multimodal problems. The initial velocity of a particle can be taken as zero, i.e., $v_i^{t=0} = 0$. The new positions can then be updated by
$$x_i^{t+1} = x_i^t + v_i^{t+1} \qquad (1.16)$$
Although $v_i$ can take any value, it is usually bounded in some range $[0, v_{\max}]$.
There are many variants that extend the standard PSO algorithm (Kennedy et al., 2001; Yang, 2008, 2010b), and the most noticeable improvement is probably the use of an inertia function $\theta(t)$ so that $v_i^t$ is replaced by $\theta(t)\,v_i^t$:
$$v_i^{t+1} = \theta v_i^t + \alpha \varepsilon_1 [g_* - x_i^t] + \beta \varepsilon_2 [x_i^* - x_i^t] \qquad (1.17)$$
where θ takes values between 0 and 1. In the simplest case, the inertia function can be taken as a constant, typically θ ≈ 0.5–0.9. This is equivalent to introducing a virtual mass to stabilize the motion of the particles, and thus the algorithm is expected to converge more quickly. Another efficient variant is accelerated particle swarm optimization (APSO), which has proved efficient in solving business optimization problems (Yang et al., 2011).
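Putting Eqs. (1.15)–(1.17) together, a compact PSO sketch might look as follows. It assumes a constant inertia θ and simple bound clipping; all names and parameter values are illustrative choices rather than canonical settings.

```python
import numpy as np

def pso_minimize(f, dim, bounds, n=30, n_iter=200, alpha=2.0, beta=2.0,
                 theta=0.7, seed=0):
    # Standard PSO with constant inertia, i.e., Eqs. (1.16)-(1.17).
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))                        # v_i = 0 at t = 0
    p_best = x.copy()                             # x*_i: each particle's best
    p_val = np.apply_along_axis(f, 1, x)
    g_best = p_best[np.argmin(p_val)]             # g*: current global best
    for _ in range(n_iter):
        e1, e2 = rng.random((2, n, dim))          # random vectors in [0, 1]
        v = theta * v + alpha * e1 * (g_best - x) + beta * e2 * (p_best - x)
        x = np.clip(x + v, lo, hi)                # position update, Eq. (1.16)
        val = np.apply_along_axis(f, 1, x)
        improved = val < p_val                    # keep the fittest memories
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[np.argmin(p_val)]
    return g_best, p_val.min()

# Example: minimize the sphere function in 5 dimensions
g, fg = pso_minimize(lambda z: float(np.sum(z**2)), dim=5, bounds=(-10.0, 10.0))
```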
1.5.5 Firefly Algorithm
The firefly algorithm (FA) was first developed by Yang in 2007 (Yang, 2008, 2009) and is based on the flashing patterns and behavior of fireflies. In essence, FA uses the following three idealized rules:
1. Fireflies are unisexual, so one firefly will be attracted to other fireflies regardless of their sex.
2. The attractiveness is proportional to the brightness, and both decrease as the distance increases. Thus, for any two flashing fireflies, the less bright one will move toward the brighter one. If no firefly is brighter than a particular firefly, it will move randomly.
3. The brightness of a firefly is determined by the landscape of the objective function.
The movement of a firefly $i$ that is attracted to another, more attractive (brighter) firefly $j$ is determined by
$$x_i^{t+1} = x_i^t + \beta_0\, e^{-\gamma r_{ij}^2}\,(x_j^t - x_i^t) + \alpha \varepsilon_i^t \qquad (1.18)$$
where $\beta_0$ is the attractiveness at distance $r = 0$, and the second term is due to the attraction. The third term is randomization, with α being the randomization parameter and $\varepsilon_i^t$ a vector of random numbers drawn from a Gaussian or uniform distribution at time $t$. If $\beta_0 = 0$, the update becomes a simple random walk. Furthermore, the randomization $\varepsilon_i^t$ can easily be extended to other distributions such as Lévy flights.
The Lévy flight essentially provides a random walk whose random step length is drawn from a Lévy distribution

$$L(s, \lambda) = s^{-(1+\lambda)}, \qquad (0 < \lambda \le 2) \qquad (1.19)$$

which has an infinite variance with an infinite mean. Here the steps essentially form a random walk process with a power-law step-length distribution that has a heavy tail. Some of the new solutions should be generated by a Lévy walk around the best solution obtained so far; this will often speed up the local search (Pavlyukevich, 2007).
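A minimal sketch of the update in Eq. (1.18) follows, using Gaussian randomization (a Lévy-distributed $\varepsilon_i^t$ could be substituted, as noted above). The function name and the default parameter values are assumptions for illustration only.

```python
import numpy as np

def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    # Eq. (1.18): firefly i moves toward a brighter firefly j with an
    # attraction beta0 * exp(-gamma * r_ij^2), plus Gaussian randomization.
    rng = rng or np.random.default_rng()
    r2 = float(np.sum((x_i - x_j) ** 2))          # squared distance r_ij^2
    attraction = beta0 * np.exp(-gamma * r2) * (x_j - x_i)
    return x_i + attraction + alpha * rng.standard_normal(len(x_i))

# Example: a dim firefly steps toward a brighter one at the origin
rng = np.random.default_rng(1)
x_dim, x_bright = rng.uniform(-2, 2, 3), np.zeros(3)
x_dim = firefly_move(x_dim, x_bright, rng=rng)
```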
A demo version of the firefly algorithm implementation, without Lévy flights, can be found at the Mathworks file exchange web site.1 The firefly algorithm has attracted much attention (Apostolopoulos and Vlachos, 2011; Gandomi et al., 2011b; Sayadi et al., 2010). A discrete version of FA can efficiently solve NP-hard scheduling problems (Sayadi et al., 2010), while a detailed analysis has demonstrated the efficiency of FA over a wide range of test problems, including multiobjective load dispatch problems (Apostolopoulos and Vlachos, 2011). A chaos-enhanced firefly algorithm with a basic method for automatic parameter tuning has also been developed (Yang, 2011b).
1.5.6 Cuckoo Search
Cuckoo search (CS) is one of the latest nature-inspired metaheuristic algorithms, developed by Yang and Deb (2009). CS is based on the brood parasitism of some cuckoo species. In addition, this algorithm is enhanced by the so-called Lévy flights (Pavlyukevich, 2007) rather than by simple isotropic random walks. Recent studies show that CS is potentially far more efficient than PSO and genetic algorithms (Yang and Deb, 2010).
Cuckoos are fascinating birds, not only because of the beautiful sounds they can make but also because of their aggressive reproduction strategy. Some species, such as the Ani and Guira cuckoos, lay their eggs in communal nests, though they may remove others' eggs to increase the hatching probability of their own eggs. Quite a number of species engage in obligate brood parasitism by laying their eggs in the nests of other host birds (often other species). For simplicity in describing the standard cuckoo search, we now use the following three idealized rules:
1. Each cuckoo lays one egg at a time and dumps it in a randomly chosen nest.
2. The best nests with high-quality eggs will be carried over to the next generations.
3. The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability $p_a \in [0, 1]$. In this case, the host bird can either get rid of the egg or simply abandon the nest and build a completely new one.
1 http://www.mathworks.com/matlabcentral/fileexchange/29693-firefly-algorithm.

As a further approximation, this last assumption can be implemented by replacing a fraction $p_a$ of the $n$ host nests with new nests (with new random solutions). For a maximization problem, the quality or fitness of a solution can simply be proportional to the value of the objective function. Other forms of fitness can be defined in a similar manner to the fitness function in genetic algorithms.
From the implementation point of view, we can use the following simple representation: each egg in a nest represents a solution, and each cuckoo can lay only one egg (thus representing one solution); the aim is to use the new and potentially better solutions (cuckoos) to replace not-so-good solutions in the nests. Obviously, this algorithm can be extended to the more complicated case where each nest has multiple eggs representing a set of solutions. For this introduction, we use the simplest approach, where each nest has only a single egg. In this case, there is no distinction between egg, nest, and cuckoo, as each nest corresponds to one egg, which also represents one cuckoo.
This algorithm uses a balanced combination of a local random walk and a global explorative random walk, controlled by a switching parameter $p_a$. The local random walk can be written as

$$x_i^{t+1} = x_i^t + \alpha s \otimes H(p_a - \varepsilon) \otimes (x_j^t - x_k^t) \qquad (1.20)$$
where $x_j^t$ and $x_k^t$ are two different solutions selected randomly by random permutation, $H(u)$ is a Heaviside function, ε is a random number drawn from a uniform distribution, and $s$ is the step size. On the other hand, the global random walk is carried out using Lévy flights
$$x_i^{t+1} = x_i^t + \alpha L(s, \lambda) \qquad (1.21)$$

where

$$L(s, \lambda) = \frac{\lambda\,\Gamma(\lambda)\,\sin(\pi\lambda/2)}{\pi}\,\frac{1}{s^{1+\lambda}}, \qquad (s \gg s_0 > 0) \qquad (1.22)$$
Here α > 0 is the step-size-scaling factor, which should be related to the scales of the problem of interest. In most cases, we can use $\alpha = O(L/10)$, where $L$ is the characteristic scale of the problem of interest, while in some cases $\alpha = O(L/100)$ can be more effective and can avoid flying too far. Equation (1.22) is essentially the stochastic equation for a random walk. In general, a random walk is a Markov chain whose next status/location depends only on the current location (the first term in Eq. (1.22)) and the transition probability (the second term). However, a substantial fraction of the new solutions should be generated by far-field randomization, with locations far enough from the current best solution. This makes sure that the system does not get trapped in a local optimum (Yang and Deb, 2010).
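A hedged sketch of the two walks follows. The Lévy steps are drawn with Mantegna's algorithm, which is one common way to realize Eq. (1.22) in practice but is an implementation choice, not something specified here; all names and defaults are illustrative assumptions.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(shape, lam=1.5, rng=None):
    # Mantegna's algorithm: a heavy-tailed step u/|v|^(1/lam) whose length
    # follows the Levy tail of Eq. (1.22).
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, shape)
    v = rng.normal(0.0, 1.0, shape)
    return u / np.abs(v) ** (1 / lam)

def cuckoo_round(nests, pa=0.25, alpha=0.01, rng=None):
    # One round: the global Levy flight of Eq. (1.21), then the local walk
    # of Eq. (1.20) with the Heaviside switch H(pa - eps).
    rng = rng or np.random.default_rng()
    n, dim = nests.shape
    new = nests + alpha * levy_step((n, dim), rng=rng)
    j, k = rng.permutation(n), rng.permutation(n)   # random permutations
    s = rng.random((n, 1))                          # step sizes
    H = (rng.random((n, 1)) < pa).astype(float)     # H(pa - eps) = 1 iff eps < pa
    return new + s * H * (nests[j] - nests[k])

# Example: one update round for 15 nests in 4 dimensions
rng = np.random.default_rng(2)
nests = rng.uniform(-5, 5, (15, 4))
nests = cuckoo_round(nests, rng=rng)
```

In a full implementation, the usual greedy selection (keep a new nest only if it improves on the old one) would follow each round.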

A Matlab implementation is given by the author and can be downloaded.2
Cuckoo search is very efficient in solving engineering optimization problems
(Gandomi et al., 2013 ;Yang and Deb, 2009 ).
1.5.7 Flower Pollination Algorithm
The flower pollination algorithm (FPA) was developed by Yang (2012), inspired by the flower pollination process of flowering plants:
1. Biotic and cross-pollination can be considered a process of global pollination, and pollen-carrying pollinators move in a way that obeys Lévy flights (Rule 1).
2. For local pollination, abiotic pollination and self-pollination are used (Rule 2).
3. Pollinators such as insects can develop flower constancy, which is equivalent to a reproduction probability that is proportional to the similarity of the two flowers involved (Rule 3).
4. The interaction or switching between local pollination and global pollination can be controlled by a switch probability $p \in [0, 1]$, with a slight bias toward local pollination (Rule 4).
In order to formulate updating formulas, we have to convert the aforementioned rules into updating equations. For example, in the global pollination step, flower pollen gametes are carried by pollinators such as insects, and pollen can travel over a long distance because insects can often fly and move over a much longer range. Therefore, Rule 1 and flower constancy can be represented mathematically as

$$x_i^{t+1} = x_i^t + L(\lambda)\,(x_i^t - g_*) \qquad (1.23)$$
where $x_i^t$ is the pollen $i$, or the solution vector $x_i$, at iteration $t$, and $g_*$ is the current best solution found among all solutions at the current generation/iteration. Here $L(\lambda)$ is the parameter that corresponds to the strength of the pollination, which essentially is also a step size. Since insects may move over a long distance with various distance steps, we can use a Lévy flight to mimic this characteristic efficiently. That is, we draw $L > 0$ from a Lévy distribution as described in Eq. (1.22).
For the local pollination, both Rule 2 and Rule 3 can be represented as
$$x_i^{t+1} = x_i^t + \varepsilon\,(x_j^t - x_k^t) \qquad (1.24)$$
where $x_j^t$ and $x_k^t$ are pollen from different flowers of the same plant species. This essentially mimics flower constancy in a limited neighborhood. Mathematically, if $x_j^t$ and $x_k^t$ come from the same species or are selected from the same population, this equivalently becomes a local random walk if we draw ε from a uniform distribution on [0, 1].
In principle, flower pollination activities can occur at all scales, both local and global. But in reality, adjacent flower patches or flowers in the not-so-far-away neighborhood are more likely to be pollinated by local flower pollen than by pollen from far away. In order to mimic this feature, we can effectively use a switch probability (Rule 4), or proximity probability $p$, to switch between common global pollination and intensive local pollination. To start with, we can use a naive value of $p = 0.5$ as an initial value. A preliminary parametric study showed that $p = 0.8$ may work better for most applications (Yang, 2012).

2 www.mathworks.com/matlabcentral/fileexchange/29809-cuckoo-search-cs-algorithm.
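To illustrate how the switch probability $p$ weaves Eqs. (1.23) and (1.24) together, here is a minimal sketch of one FPA generation. The Lévy helper repeats the Mantegna draw from the cuckoo search sketch above, and all names and defaults are illustrative assumptions rather than the author's reference implementation.

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, lam=1.5, rng=None):
    # Mantegna draw of a Levy step (same helper as in the cuckoo sketch).
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    return rng.normal(0.0, sigma, dim) / np.abs(rng.normal(0.0, 1.0, dim)) ** (1 / lam)

def fpa_generation(pop, g_best, p=0.8, rng=None):
    # With probability p, global pollination via a Levy flight, Eq. (1.23);
    # otherwise local pollination between two random flowers, Eq. (1.24).
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    new = pop.copy()
    for i in range(n):
        if rng.random() < p:                      # global pollination step
            new[i] = pop[i] + levy(dim, rng=rng) * (pop[i] - g_best)
        else:                                     # local pollination step
            j, k = rng.choice(n, size=2, replace=False)
            new[i] = pop[i] + rng.random() * (pop[j] - pop[k])
    return new

# Example: one generation for 20 flowers in 4 dimensions
rng = np.random.default_rng(3)
pop = rng.uniform(-5, 5, (20, 4))
g_best = pop[np.argmin(np.sum(pop**2, axis=1))]   # best under a sphere objective
pop = fpa_generation(pop, g_best, rng=rng)
```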
1.5.8 Other Algorithms
There are many other metaheuristic algorithms that may be equally popular and powerful, including tabu search (Glover and Laguna, 1997), the artificial immune system (Farmer et al., 1986), and others (Koziel and Yang, 2011; Wolpert and Macready, 1997; Yang, 2010a,b). For example, the wolf search algorithm (WSA) was developed recently by Rui et al. (2012) based on the predating behavior of wolf packs. Preliminary results show that WSA is a very promising algorithm with convincing performance. Other algorithms, such as the krill herd algorithm and the artificial plant algorithm, are described in detail in the relevant chapters of this book.
The efficiency of metaheuristic algorithms can be attributed to the fact that they try to imitate the best features in nature, especially the selection of the fittest in biological systems, which have evolved by natural selection over millions of years.
1.6 Open Problems and Further Research Topics
We have seen that SI and bio-inspired computing have demonstrated great success in solving various tough optimization problems, yet there are still some challenging issues, some of which have been discussed in Section 1.2. In order to inspire further research in this area, we now summarize some of the key open problems:
Theoretical analysis of algorithm convergence: Up to now, only a small fraction of metaheuristic algorithms have received even limited mathematical analysis in terms of convergence. More studies are necessary to gain insight into various new algorithms, and a framework for theoretical algorithm analysis is highly needed.

Classification and terminology: There is still some confusion about the classifications and terminologies used in the current literature. Further studies should try to classify all known algorithms by some agreed criteria and, ideally, to unify the use of key terminologies. This requires the effort of researchers in the wider communities to participate and to be consistent in their usage in future publications.

Parameter tuning: The efficiency of an algorithm may depend on its algorithm-dependent parameters, and the optimal parameter setting of any algorithm is itself an optimization problem. Finding the best methods to tune these parameters is still an area of active research. It can be expected that algorithms with automatic parameter tuning will mark a welcome paradigm shift in the near future.

Large-scale problems: Most current studies in bio-inspired computing have focused on toy problems and small-scale problems. For large-scale problems, it remains untested whether the methodology used for solving toy problems can produce solutions efficiently. For linear programming this was basically the case; however, nonlinearity often poses greater challenges.

Truly intelligent algorithms: Researchers have striven to develop better and smarter algorithms for many years, but truly intelligent algorithms are yet to emerge. This could be the holy grail of optimization and computational intelligence.

Obviously, challenges also bring opportunities. It is no exaggeration to say that this is a golden time for bio-inspired computing, as researchers can rethink existing methodologies and approaches more deeply and perhaps differently. It is possible that significant progress will be made in the next 10 years. Any progress in theory and in large-scale practice will provide great insight and may ultimately alter the research landscape in metaheuristics. It can be expected that some truly intelligent, self-evolving algorithms will appear to solve a wide range of tough optimization and classification problems efficiently in the not-so-distant future.
References
Afshar, A., Haddad, O.B., Marino, M.A., Adams, B.J., 2007. Honey-bee mating optimization (HBMO) algorithm for optimal reservoir operation. J. Franklin Inst. 344, 452–462.
Apostolopoulos, T., Vlachos, A., 2011. Application of the firefly algorithm for solving the economic emissions load dispatch problem. Int. J. Comb. 2011, Article ID 523806. <http://www.hindawi.com/journals/ijct/2011/523806.html>.
Ashby, W.R., 1962. Principles of the self-organizing system. In: Von Foerster, H., Zopf Jr., G.W. (Eds.), Principles of Self-Organization: Transactions of the University of Illinois Symposium, 1962. Pergamon Press, London, UK, pp. 255–278.
Aytug, H., Bhattacharyya, S., Koehler, G.J., 1996. A Markov chain analysis of genetic algorithms with power of 2 cardinality alphabets. Eur. J. Oper. Res. 96, 195–201.
Blum, C., Roli, A., 2003. Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Comput. Surv. 35, 268–308.
Clerc, M., Kennedy, J., 2002. The particle swarm—explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 6 (1), 58–73.
Conn, A.R., Gould, N.I.M., Toint, P.L., 2000. Trust-Region Methods. Society for Industrial and Applied Mathematics (SIAM) Press, Philadelphia.
Dorigo, M., Stützle, T., 2004. Ant Colony Optimization. MIT Press, Cambridge, MA.
Farmer, J.D., Packard, N., Perelson, A., 1986. The immune system, adaptation and machine learning. Physica D 2, 187–204.
Fogel, L.J., Owens, A.J., Walsh, M.J., 1966. Artificial Intelligence Through Simulated Evolution. John Wiley & Sons, New York, NY.
Gandomi, A.H., Yang, X.S., Alavi, A.H., 2011. Mixed variable structural optimization using firefly algorithm. Comput. Struct. 89 (23/24), 2325–2336.
Gandomi, A.H., Yang, X.S., Alavi, A.H., 2013. Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng. with Comput. 29 (1), 17–35.
Glover, F., 1986. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 13, 533–549.
Glover, F., Laguna, M., 1997. Tabu Search. Kluwer Academic Publishers, Boston.
Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA.
Greenhalgh, D., Marshal, S., 2000. Convergence criteria for genetic algorithms. SIAM J. Comput. 30, 269–282.
Gutjahr, W.J., 2010. Convergence analysis of metaheuristics. Ann. Inf. Syst. 10, 159–187.
Holland, J., 1975. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI.

Hopper, E., Turton, B.C.H., 2000. An empirical investigation of meta-heuristic and heuristic algorithms for a 2D packing problem. Eur. J. Oper. Res. 128 (1), 34–57.
Karaboga, D., 2005. An Idea Based on Honey Bee Swarm for Numerical Optimization. Technical Report TR06, Erciyes University, Turkey.
Keller, E.F., 2009. Organisms, machines, and thunderstorms: a history of self-organization, part two. Complexity, emergence, and stable attractors. Hist. Stud. Nat. Sci. 39, 1–31.
Kennedy, J., Eberhart, R.C., 1995. Particle swarm optimisation. In: Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, pp. 1942–1948.
Kennedy, J., Eberhart, R.C., Shi, Y., 2001. Swarm Intelligence. Morgan Kaufmann Publishers, San Diego, CA.
Koziel, S., Yang, X.S., 2011. Computational Optimization, Methods and Algorithms. Springer, Germany.
Nakrani, S., Tovey, C., 2004. On honey bees and dynamic server allocation in internet hosting centers. Adapt. Behav. 12 (3–4), 223–240.
Parpinelli, R.S., Lopes, H.S., 2011. New inspirations in swarm intelligence: a survey. Int. J. Bio-Inspired Comput. 3, 1–16.
Pavlyukevich, I., 2007. Lévy flights, non-local search and simulated annealing. J. Comput. Phys. 226, 1830–1844.
Pham, D.T., Ghanbarzadeh, A., Koc, E., Otri, S., Rahim, S., Zaidi, M., 2006. The bees algorithm: a novel tool for complex optimisation problems. In: Proceedings of the IPROMS 2006 Conference, pp. 454–461.
Prigogine, I., Nicolis, G., 1967. On symmetry-breaking instabilities in dissipative systems. J. Chem. Phys. 46, 3542–3550.
Rui, T., Fong, S., Yang, X.S., Deb, S., 2012. Wolf search algorithm with ephemeral memory. In: Fong, S., Pichappan, P., Mohammed, S., Hung, P., Asghar, S. (Eds.), Proceedings of the Seventh International Conference on Digital Information Management (ICDIM 2012), August 22–24, Macau, pp. 165–172.
Sayadi, M.K., Ramezanian, R., Ghaffari-Nasab, N., 2010. A discrete firefly meta-heuristic with local search for makespan minimization in permutation flow shop scheduling problems. Int. J. Ind. Eng. Comput. 1, 1–10.
Talbi, E.G., 2009. Metaheuristics: From Design to Implementation. John Wiley & Sons, New Jersey.
Villalobos-Arias, M., Coello Coello, C.A., Hernández-Lerma, O., 2005. Asymptotic convergence of metaheuristics for multiobjective optimization problems. Soft Comput. 10, 1001–1005.
Wolpert, D.H., Macready, W.G., 1997. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1, 67–82.
Yang, X.S., 2005. Engineering optimization via nature-inspired virtual bee algorithms. In: Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach. Lecture Notes in Computer Science, vol. 3562. Springer, Berlin/Heidelberg, pp. 317–323.
Yang, X.S., 2008. Nature-Inspired Metaheuristic Algorithms, first ed. Luniver Press, UK.
Yang, X.S., 2009. Firefly algorithms for multimodal optimisation. In: Watanabe, O., Zeugmann, T. (Eds.), Fifth Symposium on Stochastic Algorithms, Foundations and Applications (SAGA 2009), LNCS, vol. 5792, pp. 169–178.
Yang, X.S., 2010a. Nature-Inspired Metaheuristic Algorithms, second ed. Luniver Press, UK.
Yang, X.S., 2010b. Engineering Optimization: An Introduction with Metaheuristic Applications. John Wiley & Sons, New Jersey.

Yang, X.S., 2010c. A new metaheuristic bat-inspired algorithm. In: Cruz, C., Gonzalez, J.R., Pelta, D.A., Terrazas, G. (Eds.), Nature-Inspired Cooperative Strategies for Optimization (NICSO 2010), SCI, vol. 284. Springer, Berlin, pp. 65–74.
Yang, X.S., 2011a. Bat algorithm for multi-objective optimisation. Int. J. Bio-Inspired Comput. 3 (5), 267–274.
Yang, X.S., 2011b. Chaos-enhanced firefly algorithm with automatic parameter tuning. Int. J. Swarm Intell. Res. 2 (4), 1–11.
Yang, X.S., 2011c. Metaheuristic optimization: algorithm analysis and open problems. In: Pardalos, P.M., Rebennack, S. (Eds.), Proceedings of the Tenth International Symposium on Experimental Algorithms (SEA 2011), 5–7 May 2011, Kolimpari, Chania, Greece. Lecture Notes in Computer Science, vol. 6630, pp. 21–32.
Yang, X.S., 2012. Flower pollination algorithm for global optimisation. In: Durand-Lose, J., Jonoska, N. (Eds.), Proceedings of the 11th International Conference on Unconventional Computation and Natural Computation (UCNC 2012), 3–7 September 2012, Orléans, France. Lecture Notes in Computer Science, vol. 7445. Springer, pp. 240–249.
Yang, X.S., Deb, S., 2009. Cuckoo search via Lévy flights. In: Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009). IEEE Publications, USA, pp. 210–214.
Yang, X.S., Deb, S., 2010. Engineering optimization by cuckoo search. Int. J. Math. Model. Num. Opt. 1 (4), 330–343.
Yang, X.S., Gandomi, A.H., 2012. Bat algorithm: a novel approach for global engineering optimization. Eng. Comput. 29 (5), 1–18.
Yang, X.S., He, X.S., 2013. Firefly algorithm: recent advances and applications. Int. J. Swarm Intell. 1 (1), 1–14.
Yang, X.S., Koziel, S., 2011. Computational Optimization and Applications in Engineering and Industry. Springer, Germany.
Yang, X.S., Deb, S., Fong, S., 2011. Accelerated particle swarm optimization and support vector machine for business optimization and applications. In: Fong, S., Pichappan, P. (Eds.), Proceedings of the NDT 2011, July 2011. Communications in Computer and Information Science, vol. 136. Springer, Heidelberg, pp. 53–66.