Bio-Inspired Computation in Telecommunications
About this ebook

Bio-inspired computation, especially algorithms based on swarm intelligence, has become increasingly popular in the last decade. Bio-Inspired Computation in Telecommunications reviews the latest developments in bio-inspired computation from both theory and application as they relate to telecommunications and image processing, providing a complete resource that analyzes and discusses the latest and future trends in research directions. Written by recognized experts, this is a must-have guide for researchers, telecommunication engineers, computer scientists, and PhD students.

Language: English
Release date: Feb 11, 2015
ISBN: 9780128017432
Author

Xin-She Yang

Xin-She Yang obtained his DPhil in Applied Mathematics from the University of Oxford. He then worked at Cambridge University and the National Physical Laboratory (UK) as a Senior Research Scientist. He is currently a Reader in Modelling and Simulation at Middlesex University London, a Fellow of the Institute of Mathematics and its Applications (IMA), and a Book Series Co-Editor of the Springer Tracts in Nature-Inspired Computing. He has published more than 25 books and more than 400 peer-reviewed research publications with over 82,000 citations, and he has been on the prestigious list of highly cited researchers (Web of Science) for seven consecutive years (2016-2022).



    Book preview

    Bio-Inspired Computation in Telecommunications - Xin-She Yang


    Chapter 1

    Bio-Inspired Computation and Optimization

    An Overview

    Xin-She Yang¹; Su Fong Chien²; Tiew On Ting³

    ¹ School of Science and Technology, Middlesex University, London, UK

    ² Strategic Advanced Research (StAR) Mathematical Modeling Lab, MIMOS Berhad, Kuala Lumpur, Malaysia

    ³ Department of Electrical and Electronic Engineering, Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu Province, China

    Abstract

    All design problems in telecommunications can be formulated as optimization problems, and thus may be tackled by some optimization techniques. However, these problems can be extremely challenging due to the stringent time requirements, complex constraints, and a high number of design parameters. Solution methods tend to use conventional methods such as Lagrangian duality and fractional programming in combination with numerical solvers, while new trends tend to use evolutionary algorithms and swarm intelligence. This chapter provides a summary review of the bio-inspired optimization algorithms and their applications in telecommunications. We also discuss key issues in optimization and some active topics for further research.

    Keywords

    Algorithm

    Ant algorithm

    Bee algorithm

    Bat algorithm

    Bio-inspired computation

    Cuckoo search

    Firefly algorithm

    Harmony search

    Particle swarm optimization

    Metaheuristics

    Swarm intelligence

    Telecommunications

    Chapter Contents

    1.1 Introduction   2

    1.2 Telecommunications and Optimization   2

    1.3 Key Challenges in Optimization   4

    1.3.1 Infinite Monkey Theorem and Heuristicity   4

    1.3.2 Efficiency of an Algorithm   5

    1.3.3 How to Choose Algorithms   5

    1.3.4 Time Constraints   6

    1.4 Bio-inspired Optimization Algorithms   7

    1.4.1 SI-Based Algorithms   7

    1.4.1.1 Ant and bee algorithms   7

    1.4.1.2 Bat algorithm   8

    1.4.1.3 Particle swarm optimization   9

    1.4.1.4 Firefly algorithm   9

    1.4.1.5 Cuckoo search   9

    1.4.2 Non-SI-Based Algorithms   10

    1.4.2.1 Simulated annealing   10

    1.4.2.2 Genetic algorithms   11

    1.4.2.3 Differential evolution   12

    1.4.2.4 Harmony search   12

    1.4.3 Other Algorithms   13

    1.5 Artificial Neural Networks   13

    1.5.1 Basic Idea   13

    1.5.2 Neural Networks   14

    1.5.3 Back Propagation Algorithm   15

    1.6 Support Vector Machine   16

    1.6.1 Linear SVM   16

    1.6.2 Kernel Tricks and Nonlinear SVM   18

    1.7 Conclusions   19

    References   19

    1.1 Introduction

    One of the main aims of telecommunications is to transmit signals with the minimum noise, least energy consumption, maximum capacity, and optimal transmission quality. All these form a series of very challenging problems. In fact, optimization is almost everywhere, and in almost every application, especially engineering designs, we are always trying to optimize something—whether to minimize the cost and energy consumption, or to maximize the profit, output, performance, and efficiency. In reality, resources, time, and money are always limited; consequently, optimization is far more important in practice (Koziel and Yang, 2011; Yang, 2010b). Thus, the proper use of available resources of any sort requires a paradigm shift in scientific thinking and design innovation.

    Obviously, real-world applications are usually subject to complex constraints, and many factors and parameters may affect how the system behaves. Design processes can be slow and expensive, and thus, any saving in terms of time and resources will make designs greener and more sustainable. In the context of telecommunications, there exist a myriad of optimization techniques applicable in this sector. Conventional methods include the Newton-Raphson method, linear programming, sequential quadratic programming, interior-point methods, Lagrangian duality, fractional programming, and many others. New methods tend to be evolutionary or bio-inspired. Good examples include evolutionary algorithms, artificial neural networks (ANNs), swarm intelligence (SI), cellular signaling pathways, and others. For example, genetic algorithms (GA) and SI have been used in many applications. We will include more examples in the next chapters.

    This chapter is organized as follows: Section 1.2 provides a brief discussion of the formulation concerning the optimization problems in telecommunications. Section 1.3 discusses the key issues in optimization, followed by a detailed introduction of commonly used bio-inspired algorithms in Section 1.4. Section 1.5 discusses neural networks and Section 1.6 presents support vector machines (SVMs). Finally, Section 1.7 concludes with some discussion.

    1.2 Telecommunications and optimization

    In telecommunications, the objective of an optimization task can vary, depending on the requirements and the communications architecture. For example, in an orthogonal frequency division multiple access (OFDMA) system, the aim may be to minimize the energy consumption of the system, or to maximize its energy efficiency, while still preserving the required quality of service (Ting et al., 2014). The energy efficiency can be written as

       \max \; \eta_{EE} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} c_{m,n}}{P_0 + \sum_{m=1}^{M}\sum_{n=1}^{N} p_{m,n}},   (1.1)

    where N is the number of subcarriers and M is the number of users. P0 is the power offset associated with the battery backup and signal processing, pm,n is the transmission power allocated to user m on subcarrier n, and B is the subcarrier bandwidth. The power levels are continuous while the subcarrier assignment is a discrete decision, and the objective is a ratio of capacity to power, which means the optimization problem is a fractional, mixed-integer programming problem.

    cm,n is the achievable transmission capacity, which can be estimated as

       c_{m,n} = B \log_2\!\left(1 + p_{m,n}\,\mathrm{CNR}_{m,n}\right),

    where CNRm,n is the channel-to-noise ratio for each wireless channel. In addition, this is subject to the constraints

       \sum_{n=1}^{N} c_{m,n} \geq R_m^{\min} \;\; (m = 1, \ldots, M), \qquad \sum_{m=1}^{M}\sum_{n=1}^{N} p_{m,n} \leq P_T,   (1.2)

    where R_m^{\min} is the minimum transmission rate that serves user m, and PT is the total available transmission power.
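    To make the formulation concrete, the short Python sketch below evaluates the reconstructed objective (1.1) and checks the constraints (1.2) for a given power allocation. It is a minimal sketch: the function names and the example numbers (B, P0, PT, Rmin, the channel gains) are illustrative assumptions rather than values from Ting et al. (2014).

        import numpy as np

        def energy_efficiency(p, CNR, B, P0):
            """Objective (1.1): total achievable capacity divided by total consumed power."""
            c = B * np.log2(1.0 + p * CNR)           # capacity c_{m,n} per user and subcarrier
            return c.sum() / (P0 + p.sum()), c

        def feasible(p, c, Rmin, PT):
            """Constraints (1.2): per-user minimum rate and total transmission power budget."""
            rate_ok = np.all(c.sum(axis=1) >= Rmin)  # sum over subcarriers for each user m
            power_ok = p.sum() <= PT
            return rate_ok and power_ok

        # Illustrative numbers only: M = 2 users, N = 4 subcarriers.
        rng = np.random.default_rng(0)
        p = rng.uniform(0.01, 0.3, size=(2, 4))      # transmission powers p_{m,n}
        CNR = rng.uniform(1.0, 50.0, size=(2, 4))    # channel-to-noise ratios CNR_{m,n}
        eta, c = energy_efficiency(p, CNR, B=15e3, P0=1.0)
        print(eta, feasible(p, c, Rmin=1e4, PT=2.0))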

    In a nutshell, all optimization problems in telecommunications can be written in a more general form. For example, the most widely used formulation is to write a nonlinear optimization problem as

       \min \; f(x), \qquad x = (x_1, x_2, \ldots, x_d),   (1.3)

    subject to the constraints

       g_i(x) \leq 0 \;\; (i = 1, \ldots, I), \qquad h_j(x) = 0 \;\; (j = 1, \ldots, J),   (1.4)

    where the design variables x = (x_1, \ldots, x_d) can be continuous, discrete, or mixed in a d-dimensional space (Yang, 2010b). It is worth pointing out that here we write the problem as a minimization problem, but it can also be written as a maximization problem by simply replacing f(x) with − f(x).
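    As an illustration of the general form (1.3)-(1.4), the hypothetical Python snippet below encodes a toy constrained problem and the sign-flip trick for turning a maximization into a minimization; the particular objective and constraints are made up for illustration.

        import numpy as np

        # A toy instance of (1.3)-(1.4): minimize f(x) subject to one inequality
        # constraint g(x) <= 0 and one equality constraint h(x) = 0.
        f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2     # objective f(x)
        g = [lambda x: x[0] + x[1] - 3.0]                       # g_i(x) <= 0
        h = [lambda x: x[0] - 2.0 * x[1]]                       # h_j(x) = 0

        def is_feasible(x, tol=1e-9):
            return all(gi(x) <= tol for gi in g) and all(abs(hj(x)) <= tol for hj in h)

        # Maximizing f is equivalent to minimizing -f.
        f_max = lambda x: -f(x)

        x = np.array([2.0, 1.0])
        print(f(x), is_feasible(x))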

    When all functions are nonlinear, we are dealing with nonlinear constrained problems. In some special cases when all functions are linear, the problem becomes linear, and we can use a widely known linear programming technique such as the simplex method. When some design variables can only take discrete values (often integers), while other variables are real continuous, the problem is of mixed type, which is often difficult to solve, especially for large-scale optimization problems.

    A very special class of optimization is convex optimization, which has guaranteed global optimality: any local optimum is also the global optimum, and, most importantly, there are efficient polynomial-time algorithms for solving such problems (Conn et al., 2009). Efficient algorithms such as the interior-point methods (Karmarkar, 1984) are widely used and have been implemented in many software packages.

    1.3 Key challenges in optimization

    Optimization, especially nonlinear optimization with complex constraints, can be very challenging to solve. There are many key issues that need to be addressed, and we highlight only four main issues here: heuristicity, the efficiency of an algorithm, choice of algorithms, and time constraints.

    1.3.1 Infinite Monkey Theorem and Heuristicity

    Heuristic algorithms are essentially methods of trial and error. Such heuristicity can be understood by first analyzing a well-known thought experiment called the infinite monkey theorem, which states that the probability of producing any given text will almost surely be one if an infinite number of monkeys randomly type for an infinitely long time (Gut, 2005; Marsaglia and Zaman, 1993). In other words, the infinite monkeys can be expected to reproduce the whole works of Shakespeare. For example, to reproduce the text telecommunications (18 characters) from a random typing sequence of n characters on a 101-key computer keyboard, the probability that a given consecutive 18-character random string is exactly this text is p_s = (1/101)^{18} \approx 8.4 \times 10^{-37}. Yet as n tends to infinity, the probability of reproducing the collected works of Shakespeare tends to one, though the formal mathematical analysis requires rigorous probability theory.
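    The quoted probability is easy to check; a one-line back-of-the-envelope computation (assuming each of the 101 keys is equally likely at every keystroke):

        p_s = (1.0 / 101.0) ** 18   # probability that 18 consecutive random keystrokes
        print(p_s)                  # match a fixed 18-character string: about 8.4e-37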

    In many ways, the heuristic and metaheuristic algorithms have some similarities to the infinite monkey-typing approaches. Monkeys type randomly, and ultimately meaningful high-quality text may appear. Similarly, most stochastic algorithms use randomization to increase the search capability. If such algorithms are run for a sufficiently long time with multiple runs, it can be expected that the global optimality of a given problem can be reached or found. In theory, it may take infinitely long to guarantee such optimality, but in practice, it can take many thousands or even millions of iterations. If we regard the global optimum as an important work of Shakespeare, the infinite monkeys should be able to reproduce it, given an infinite amount of time.

    However, there are some key differences between heuristic algorithms and the infinite monkey approach. First, monkeys type randomly without any memory or learning process, and each key input is independent of the others. Heuristic algorithms try to learn from history and past moves so as to generate new, better moves or solutions. Second, random monkeys do not select what has been typed, while algorithms try to select the best or fittest solutions (Holland, 1975). Third, monkeys use purely stochastic components, while all heuristic algorithms use both deterministic and stochastic components. Finally, monkey typing is at most equivalent to a random search on a flat landscape, while heuristic algorithms are often cleverly constructed to use the landscape information in combination with history (memory) and selection. All these differences ensure that heuristic algorithms are far better than the random monkey-typing approach.

    In addition, metaheuristics are usually considered as a higher level of heuristics, because metaheuristic algorithms are not simple trial-and-error approaches, and metaheuristics are designed to learn from past solutions, to be biased toward better moves, to select the best solutions, and to construct sophisticated search moves. Therefore, metaheuristics can be much better than heuristic algorithms and can definitely be far more efficient than random monkey-typing approaches.

    1.3.2 Efficiency of an Algorithm

    The efficiency of an algorithm can depend on many factors, such as the intrinsic structure of the algorithm, the way it generates new solutions, and the setting of its algorithm-dependent parameters. The essence of an optimizer is a search or optimization algorithm implemented correctly so as to carry out the desired search (though not necessarily efficiently). It can be integrated and linked with other modeling components. There are many optimization algorithms in the literature and no single algorithm is suitable for all problems, as dictated by the No Free Lunch Theorems (Wolpert and Macready, 1997). In order to solve an optimization problem efficiently, an efficient optimization algorithm is needed.

    Optimization algorithms can be classified in many ways, depending on the focus or the characteristics we are trying to compare. For example, from the mobility point of view, algorithms can be classified as local or global. Local search algorithms typically converge toward a local optimum, not necessarily (often not) the global optimum, and such algorithms are often deterministic and have no ability to escape local optima. Simple hill-climbing is an example. On the other hand, we always try to find the global optimum for a given problem, and if this global optimality is robust, it is often the best, though it is not always possible to find such global optimality. For global optimization, local search algorithms are not suitable. We have to use a global search algorithm. Modern bio-inspired metaheuristic algorithms in most cases are intended for solving global optimization, though not always successfully or efficiently. A simple strategy such as hill-climbing with random restart may change a local search algorithm into a global search, as shown in the sketch below. In essence, randomization is an efficient component for global search algorithms. In this chapter, we will provide a brief review of most metaheuristic optimization algorithms.
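    As a concrete illustration of the random-restart idea mentioned above, the sketch below wraps a simple stochastic hill-climbing step in random restarts on a multimodal test function; it is a minimal illustration under assumed parameter values, not an algorithm prescribed in this chapter.

        import numpy as np

        def hill_climb(f, x, step=0.1, iters=200, rng=None):
            """Greedy local search: accept a random perturbation only if it improves f."""
            rng = rng or np.random.default_rng()
            fx = f(x)
            for _ in range(iters):
                cand = x + rng.normal(0.0, step, size=x.shape)
                fc = f(cand)
                if fc < fx:
                    x, fx = cand, fc
            return x, fx

        def random_restart(f, dim, lo, hi, restarts=20, rng=None):
            """Repeat the local search from random starting points and keep the best result."""
            rng = rng or np.random.default_rng()
            best_x, best_f = None, np.inf
            for _ in range(restarts):
                x, fx = hill_climb(f, rng.uniform(lo, hi, size=dim), rng=rng)
                if fx < best_f:
                    best_x, best_f = x, fx
            return best_x, best_f

        # Multimodal test function (Rastrigin); its global minimum is at the origin.
        rastrigin = lambda x: 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
        print(random_restart(rastrigin, dim=2, lo=-5.12, hi=5.12))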

    From the optimization point of view, a crucial question is how to make the right choice of algorithms so as to find the optimal solutions quickly.

    1.3.3 How to Choose Algorithms

    Obviously, the choice of the right optimizer or algorithm for a given problem is important. However, the choice of an algorithm for an optimization task will largely depend on the type of the problem, the nature of the algorithm, the desired quality of solutions, the available computing resources, time limit, availability of the algorithm implementation, and the expertise of the decision makers (Blum and Roli, 2003; Yang, 2010b, 2014).

    The intrinsic nature of an algorithm may essentially determine if it is suitable for a particular type of problem. For example, gradient-based algorithms such as hill-climbing are not suitable for an optimization problem whose objective is discontinuous. On the other hand, the type of problem to be solved also determines the algorithms needed to obtain good quality solutions. If the objective function of an optimization problem at hand is highly nonlinear and multimodal, classic algorithms such as hill-climbing are usually not suitable because the results obtained by these local search methods tend to be dependent on the initial starting points. For nonlinear global optimization problems, SI-based algorithms such as firefly algorithm (FA) and particle swarm optimization (PSO) can be very effective (Yang, 2010a, 2014).

    In addition, the desired solution quality and available computing resources may also affect the choice of the algorithms. In most applications, computing resources are limited and good solutions (not necessarily the best) have to be obtained in a reasonable and practical time, which means that there is a certain compromise between the available resources and the required solution quality. For example, if the main aim is to find a feasible solution (not necessarily the best solution) in the least amount of time, greedy methods or gradient-based methods should be the first choice.

    Moreover, the availability of software packages and the expertise of the designers are also key factors that can affect the algorithm choice. For example, Newton's method, hill-climbing, the Nelder-Mead downhill simplex, trust-region methods (Conn et al., 2009), and interior-point methods are implemented in many software packages, which partly increases their popularity in applications. In practice, even with the best possible algorithms and a well-crafted implementation, we may still not get the desired solutions. This is the nature of nonlinear global optimization, as most of these problems are NP-hard (nondeterministic polynomial-time hard), and no efficient solutions (in the polynomial sense) are known for such problems.

    Therefore, from a practical point of view, one of the main challenges in many applications is to find the right algorithm(s) most suitable for a given problem so as to obtain good solutions, hopefully also the global best solutions, in a reasonable timescale with a limited amount of resources. However, there is no simple answer to this problem. Consequently, the choice of the algorithms still largely depends on the expertise of the researchers involved and the available resources in a more or less heuristic way.

    1.3.4 Time Constraints

    Apart from the challenges mentioned above, one of the most pressing requirements in telecommunications is the speed of finding solutions. As users may come and go in a dynamic manner, real-time dynamic allocation is needed in practice. Therefore, the solution time should be sufficiently short for the methods to be useful in practice, and this time factor poses a key constraint on most algorithms. In a way, the choice of the algorithm depends on the requirements. If the requirement is to solve the optimization problem in real time or in situ, then the number of steps to find the solution should be minimized. In this case, in addition to the critical design and choice of algorithms, the details of implementation and the feasibility of realization in terms of both software and hardware can also be very important.

    In the vast majority of optimization applications, the evaluations of the objective functions often form the most computationally expensive part. In simulations using the finite element methods and finite volume methods in engineering applications, the numerical solver typically takes a few hours up to a few weeks, depending on the problem size of interest. In this case, the use of the numerical evaluator or solver becomes crucial (Yang, 2008). On the other hand, in the combinatorial problems, the evaluation of a single design may not take long, but the number of possible combinations can be astronomical. In this case, the way to move from one solution to another solution becomes more critical, and thus optimizers can be more important than numerical solvers.

    Therefore, any approach to save computational time either by reducing the number of evaluations or by increasing the simulator’s efficiency will save time and money (Koziel and Yang, 2011).

    1.4 Bio-inspired optimization algorithms

    Metaheuristic algorithms are often nature-inspired, and they are now among the most widely used algorithms for optimization. They have many advantages over conventional algorithms, as we can see from many case studies presented in later chapters in this book. There are a few recent books that are solely dedicated to metaheuristic algorithms (Dorigo and Stützle, 2004; Talbi, 2009; Yang, 2008, 2010a,b). Metaheuristic algorithms are very diverse, including GA, simulated annealing, differential evolution (DE), ant and bee algorithms, PSO, harmony search (HS), FA, cuckoo search (CS), and others. In general, they can be put into two categories: SI-based and non-SI-based. In the rest of this section, we will introduce some of these algorithms briefly.

    1.4.1 SI-Based Algorithms

    SI-based algorithms typically use multiple agents, and their characteristics are often drawn from the swarming behavior of social insects, birds, fish, and other biological systems. As multiagent systems, they can possess certain characteristics of collective intelligence.

    1.4.1.1 Ant and bee algorithms

    There is a class of algorithms based on the social behavior of ants. For example, ant colony optimization (Dorigo and Stützle, 2004) mimics the foraging behavior of social ants via pheromone deposition, evaporation, and route marking. In fact, the pheromone concentrations along the paths in a transport problem can also be considered as indicators of solution quality. In addition, the movement of an ant is controlled by pheromone that evaporates over time. Without such time-dependent evaporation, ant algorithms would lead to premature convergence to (often wrong) solutions. With proper pheromone evaporation, they usually behave very well. For example, exponential decay is often used for pheromone evaporation, and the deposition is incremental for traversed routes, such that

       \phi_{ij}^{t+1} = (1 - \rho)\,\phi_{ij}^{t} + \delta_{ij}^{t},   (1.5)

    where ρ is the evaporation rate, while δ is the incremental deposition. Obviously, if the route is not visited during an iteration, then δ = 0 should be used for nonvisited routes. In addition to the pheromone variation, the probability of choosing a route needs to be defined properly. Different variants and improvements of ant algorithms may largely differ in terms of ways of handling pheromone deposition, evaporation, and route-dependent probabilities.
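    A minimal sketch of the pheromone bookkeeping described above, assuming the reconstructed update (1.5); the route-selection rule (probability proportional to pheromone level) is a simplifying assumption, since the text only notes that such a probability must be defined properly.

        import numpy as np

        def update_pheromone(phi, visited, rho=0.1, delta=1.0):
            """Evaporate on all routes, then deposit on the routes traversed this iteration."""
            phi = (1.0 - rho) * phi        # exponential decay (evaporation)
            phi[visited] += delta          # incremental deposition; delta = 0 for unvisited routes
            return phi

        def route_probabilities(phi):
            """Assumed rule: route-selection probability proportional to pheromone level."""
            return phi / phi.sum()

        phi = np.ones(5)                          # pheromone on 5 candidate routes
        phi = update_pheromone(phi, visited=[2])  # route 2 was traversed this iteration
        print(route_probabilities(phi))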

    On the other hand, bee algorithms are based on the foraging behavior of honey bees. Interesting characteristics such as the waggle dance and nectar maximization are often used to simulate the allocation of the foraging bees along flower patches and thus different search regions in the search space (Karaboga, 2005; Nakrani and Tovey, 2004). For a more comprehensive review, please refer to Yang (2010a) and Parpinelli and Lopes (2011).

    1.4.1.2 Bat algorithm

    The bat algorithm (BA) was developed by Xin-She Yang in 2010 (Yang, 2011), based on the echolocation behavior of microbats. Such echolocation is essentially frequency tuning. These bats emit a very loud sound pulse and listen for the echo that bounces back from the surrounding objects. In the BA, a bat flies randomly with a velocity vi at position xi with a fixed frequency range [fmin, fmax] and loudness A0 to search for prey, depending on the proximity of its target. The main updating equations are

       f_i = f_{\min} + (f_{\max} - f_{\min})\,\varepsilon, \qquad v_i^{t} = v_i^{t-1} + (x_i^{t-1} - x_{*})\, f_i, \qquad x_i^{t} = x_i^{t-1} + v_i^{t},   (1.6)

    where ε is a random number drawn from a uniform distribution, and x* is the current best solution found so far during iterations. The loudness and pulse rate can vary with iteration t in the following way:

       A_i^{t+1} = \alpha\,A_i^{t}, \qquad r_i^{t+1} = r_i^{0}\left[1 - \exp(-\gamma t)\right],   (1.7)

    Here, α and γ are constants. In fact, α is similar to the cooling factor of a cooling schedule in simulated annealing, to be discussed later. In the simplest case, we can use α = γ, and we have in fact used α = γ = 0.9 in most simulations. The BA has been extended to a multiobjective bat algorithm by Yang (2011), and preliminary results suggest that it is very efficient (Yang and Gandomi, 2012). Yang and He provide a relatively comprehensive review of the BA and its variants (Yang and He, 2013).
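    A compact sketch of the core BA updates (1.6) and (1.7) for a single bat; the function names and parameter values are illustrative only.

        import numpy as np

        rng = np.random.default_rng(1)

        def bat_step(x, v, x_best, fmin, fmax):
            """One frequency-tuning move, Eq. (1.6)."""
            f = fmin + (fmax - fmin) * rng.uniform()   # random frequency in [fmin, fmax]
            v = v + (x - x_best) * f                   # velocity update toward the current best
            return x + v, v                            # position update

        def loudness_and_rate(A, r0, t, alpha=0.9, gamma=0.9):
            """Loudness decay and pulse-rate growth, Eq. (1.7)."""
            return alpha * A, r0 * (1.0 - np.exp(-gamma * t))

        x, v, x_best = np.array([1.0, -2.0]), np.zeros(2), np.zeros(2)
        x, v = bat_step(x, v, x_best, fmin=0.0, fmax=2.0)
        A, r = loudness_and_rate(A=1.0, r0=0.5, t=10)
        print(x, A, r)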

    1.4.1.3 Particle swarm optimization

    PSO was developed by Kennedy and Eberhart in 1995 (Kennedy and Eberhart, 1995), based on the swarm behavior such as fish and bird schooling in nature. In PSO, each particle has a velocity vi and position xi, and their updates can be determined by the following formula:

       v_i^{t+1} = v_i^{t} + \alpha\,\varepsilon_1\,(g^{*} - x_i^{t}) + \beta\,\varepsilon_2\,(x_i^{*} - x_i^{t}),   (1.8)

    where g* is the current global best solution and xi* is the individual best solution for particle i. Here, ε1 and ε2 are two random variables drawn from the uniform distribution in [0,1]. In addition, α and β are the learning parameters. The position is updated as

       x_i^{t+1} = x_i^{t} + v_i^{t+1}.   (1.9)

    There are many variants that extend the standard PSO algorithm, and the most noticeable improvement is probably the use of an inertia function. There is a relatively vast literature on PSO (Kennedy et al., 2001; Yang, 2014).
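    The PSO updates (1.8) and (1.9) translate almost line for line into code; the bare-bones sketch below uses illustrative parameter values and omits the inertia function.

        import numpy as np

        rng = np.random.default_rng(2)

        def pso_step(x, v, x_ibest, g_best, alpha=2.0, beta=2.0):
            """Velocity update (1.8) followed by position update (1.9) for one particle."""
            eps1, eps2 = rng.uniform(), rng.uniform()   # two uniform random numbers in [0, 1]
            v = v + alpha * eps1 * (g_best - x) + beta * eps2 * (x_ibest - x)
            return x + v, v

        # One particle in two dimensions (illustrative values).
        x, v = np.array([3.0, -1.0]), np.zeros(2)
        x, v = pso_step(x, v, x_ibest=np.array([2.0, 0.0]), g_best=np.zeros(2))
        print(x, v)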

    1.4.1.4 Firefly algorithm

    The FA was first developed by Xin-She Yang in 2007 (Yang, 2008, 2009), based on the flashing patterns and behavior of fireflies. In the FA, the movement of a firefly i that is attracted to another more attractive (brighter) firefly j is determined by the following nonlinear updating equation:

       x_i^{t+1} = x_i^{t} + \beta_0\,e^{-\gamma r_{ij}^{2}}\,(x_j^{t} - x_i^{t}) + \alpha\,\varepsilon_i^{t},   (1.10)

    where β0 is the attractiveness at distance r = 0, α is a randomization parameter, and εit is a vector of random numbers drawn from a uniform or Gaussian distribution. The attractiveness of a firefly varies with its distance from other fireflies. The variation of attractiveness β with the distance r can be defined by

       \beta = \beta_0\,e^{-\gamma r^{2}}.   (1.11)

    A demo version of FA implementation, without Lévy flights, can be found at the Mathworks file exchange website.¹ The FA has attracted much attention and there exist some comprehensive reviews (Fister et al., 2013; Gandomi et al., 2011; Yang, 2014).
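    A minimal sketch of the firefly movement (1.10) with the attractiveness law (1.11); the parameter values are illustrative, and Lévy flights are omitted, as in the demo version mentioned above.

        import numpy as np

        rng = np.random.default_rng(3)

        def attractiveness(r, beta0=1.0, gamma=1.0):
            """Eq. (1.11): attractiveness decays with the square of the distance r."""
            return beta0 * np.exp(-gamma * r ** 2)

        def firefly_move(xi, xj, alpha=0.2, beta0=1.0, gamma=1.0):
            """Eq. (1.10): firefly i moves toward the brighter firefly j, plus a random term."""
            r = np.linalg.norm(xi - xj)
            eps = rng.uniform(-0.5, 0.5, size=xi.shape)
            return xi + attractiveness(r, beta0, gamma) * (xj - xi) + alpha * eps

        xi, xj = np.array([0.0, 0.0]), np.array([1.0, 1.0])
        print(firefly_move(xi, xj))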

    1.4.1.5 Cuckoo search

    CS was developed in 2009 by Yang and Deb (2009). CS is based on the brood parasitism of some cuckoo species. In addition, this algorithm is enhanced by Lévy flights (Pavlyukevich, 2007), rather than by simple isotropic random walks. Recent studies show that CS is potentially far more efficient than PSO and GA (Yang and Deb, 2010).

    This algorithm uses a balanced combination of the local random walk and the global explorative random walk, controlled by a switching parameter pa. The local random walk can be written as

       x_i^{t+1} = x_i^{t} + \alpha\, s \otimes H(p_a - \varepsilon) \otimes (x_j^{t} - x_k^{t}),   (1.12)

    where xjt and xkt are two different solutions selected randomly by random permutation.

    Here, H(u) is the Heaviside function, and ɛ is a random number drawn from a uniform distribution, while the step size s is drawn from a Lévy distribution. On the other hand, the global random walk is carried out by using Lévy flights

       x_i^{t+1} = x_i^{t} + \alpha\, L(s, \lambda),   (1.13)

    where

       L(s, \lambda) = \frac{\lambda\,\Gamma(\lambda)\,\sin(\pi\lambda/2)}{\pi}\,\frac{1}{s^{1+\lambda}} \qquad (s \gg s_0 > 0),   (1.14)

    Here, α > 0 is a scaling factor controlling the scale of the step sizes, and s0 is a small fixed step size.

    For an algorithm to be efficient, a substantial fraction of the new solutions should be generated by far field randomization, with locations far enough from the current best solution. This will ensure that the system will not be trapped in a local optimum (Yang and Deb, 2010). A Matlab implementation is given by the author, and can be downloaded.² CS is very efficient in solving engineering optimization problems (Yang, 2014).
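    The two random walks (1.12) and (1.13) can be sketched as below. The Lévy steps are drawn with Mantegna's algorithm, which is a common implementation choice rather than something specified in this chapter, and all parameter values are illustrative.

        import numpy as np
        from math import gamma, pi, sin

        rng = np.random.default_rng(4)

        def levy_step(lam=1.5, size=1):
            """Mantegna's algorithm for Levy-stable steps (an assumed sampling choice)."""
            sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
                     (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
            u = rng.normal(0.0, sigma, size=size)
            v = rng.normal(0.0, 1.0, size=size)
            return u / np.abs(v) ** (1 / lam)

        def local_walk(xi, xj, xk, alpha=0.01, pa=0.25):
            """Eq. (1.12): biased local random walk, switched on/off by p_a."""
            s = rng.uniform(size=xi.shape)                              # step sizes
            heav = (pa - rng.uniform(size=xi.shape) > 0).astype(float)  # Heaviside H(p_a - eps)
            return xi + alpha * s * heav * (xj - xk)

        def global_walk(xi, alpha=0.01, lam=1.5):
            """Eq. (1.13): global exploration via Levy flights."""
            return xi + alpha * levy_step(lam, size=xi.shape)

        xi, xj, xk = np.zeros(3), rng.uniform(size=3), rng.uniform(size=3)
        print(local_walk(xi, xj, xk), global_walk(xi))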

    1.4.2 Non-SI-Based Algorithms

    Not all bio-inspired algorithms are based on SI. In fact, their sources of inspiration can be very diverse from physical processes to biological processes. In this case, it is better to extend the bio-inspired computation to nature-inspired
