Genetic algorithm

The unusual form of this antenna, developed by NASA, was found using a genetic algorithm.

A genetic algorithm is an algorithm that imitates the process of natural selection. Genetic algorithms help solve optimization and search problems, and they are part of the bigger class of evolutionary algorithms. They imitate natural biological processes, such as inheritance, mutation, selection and crossover.

Genetic algorithms are a search technique often used in computer science to find complex, non-obvious solutions to algorithmic optimisation and search problems. They are global search heuristics.[1]

What is it

Natural evolution can be modeled as a game, in which the rewards for an organism that plays a good game of life are the passing on of its genetic material to its successors and its continued survival.[2] In natural evolution, how well an individual performs depends on who it works with and who it competes with, as well as on the environment. Genetic algorithms are a simulation of natural selection on a population of candidate solutions to an optimization problem (these candidates are also called chromosomes, individuals, creatures, or phenotypes).

Candidates are evaluated and crossbred in an attempt to make good solutions. Such solutions may take a lot of time to find and are not obvious to a human. The evolutionary cycle starts with a population of randomly generated individuals. In each generation, the fitness of every individual in the population is evaluated. Some individuals are randomly selected from the current population (based on their fitness) and modified (recombined and possibly randomly mutated) to form a new population. The cycle repeats with this new population. The algorithm ends either after a set number of generations has passed or after a good enough fitness level has been reached. If the algorithm ended because it reached the maximum number of generations, a good fitness level has not necessarily been obtained.

Programming this on a computer

The pseudocode is:

  1. Initialization: Some possible solutions are created; very often these will have random values.
  2. Evaluation: A fitness function scores each solution; the score is a number that tells how well the solution solves the problem.
  3. The following steps are run until the requirement to stop is met:
    1. Selection: Pick the solutions/individuals for the next iteration.
    2. Recombination: Combine the solutions picked.
    3. Mutation: Randomly change the newly created solutions.
    4. Evaluation: Apply the fitness function, see step 2.
    5. If the requirement to stop is not met, restart with the selection step.

The above description is abstract. A concrete example helps.
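
As an illustration, here is a minimal sketch in Python (not part of the original article) that applies these steps to a toy problem: evolving a bit string that is all ones, often called the OneMax problem. The fitness function, population size, mutation rate and stopping rule are arbitrary choices made for this example.

  import random

  TARGET_LENGTH = 20     # number of bits in each candidate solution
  POPULATION_SIZE = 30
  MUTATION_RATE = 0.01   # probability of flipping each individual bit
  GENERATIONS = 100      # requirement to stop: at most this many generations

  def fitness(individual):
      """Step 2 (Evaluation): count the 1-bits; a perfect solution scores TARGET_LENGTH."""
      return sum(individual)

  def random_individual():
      """Step 1 (Initialization): create one random candidate solution."""
      return [random.randint(0, 1) for _ in range(TARGET_LENGTH)]

  def select(population):
      """Step 3.1 (Selection): tournament selection, keep the fitter of two random picks."""
      a, b = random.sample(population, 2)
      return a if fitness(a) >= fitness(b) else b

  def crossover(parent1, parent2):
      """Step 3.2 (Recombination): single-point crossover of two parents."""
      point = random.randint(1, TARGET_LENGTH - 1)
      return parent1[:point] + parent2[point:]

  def mutate(individual):
      """Step 3.3 (Mutation): flip each bit with a small probability."""
      return [bit ^ 1 if random.random() < MUTATION_RATE else bit
              for bit in individual]

  population = [random_individual() for _ in range(POPULATION_SIZE)]
  for generation in range(GENERATIONS):
      if fitness(max(population, key=fitness)) == TARGET_LENGTH:
          break  # requirement to stop: a perfect solution was found
      # Steps 3.1-3.3: build the next generation from selected, recombined, mutated parents.
      population = [mutate(crossover(select(population), select(population)))
                    for _ in range(POPULATION_SIZE)]

  print("best after", generation, "generations:", max(population, key=fitness))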

Applications

In general

Genetic algorithms are good at solving problems such as timetabling and scheduling. They have also been applied to engineering,[1] and they are often used to solve global optimization problems. They can be useful in problem domains that have a complex fitness landscape, because mixing (recombination) is designed to move the population away from the local optima that a traditional hill climbing algorithm might get stuck in. Commonly used crossover operators cannot change a uniform population; mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain), as the small sketch below illustrates.
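
A minimal sketch of that last point, using the same bit-string encoding as the example above (the operators and numbers are illustrative):

  import random

  def crossover(p1, p2, point=4):
      # Single-point crossover: take a prefix from one parent, a suffix from the other.
      return p1[:point] + p2[point:]

  def mutate(bits, rate=0.1):
      # Bit-flip mutation: each position flips independently with probability `rate`.
      return [b ^ 1 if random.random() < rate else b for b in bits]

  uniform = [1, 0, 1, 1, 0, 0, 1, 0]  # imagine every individual is this exact string
  assert crossover(uniform, uniform) == uniform  # crossover alone can never escape
  print(mutate(uniform))                         # mutation can still introduce novelty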

Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector,[2] antennae designed to pick up radio signals in space,[3] walking methods for computer figures,[4] and the optimal design of aerodynamic bodies in complex flowfields.[5]

In his Algorithm Design Manual, Skiena advises against genetic algorithms for any task: "It is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. [...] The analogy with evolution—where significant progress require [sic] millions of years—can be quite appropriate. [...] I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to simulated annealing for your heuristic search voodoo needs."[6]: 267 

Board games

Board games are a very relevant part of the area of genetic algorithms as applied to game theory problems. Much of the early work on computational intelligence and games was directed toward classic board games, such as tic-tac-toe,[3] chess, and checkers.[4] Board games can now, in most cases, be played by a computer at a higher level than the best humans, even with blind exhaustive search techniques. Go was for a long time a noted exception to this tendency, resisting machine attack; at the time these surveys were written, the best computer Go players played only at the level of a good novice.[5][6] Go strategy is said to rely heavily on pattern recognition, and not just on logical analysis as with chess and other more piece-independent games. The huge effective branching factor heavily restricts the look-ahead that can be used in a move-sequence search.

Computer games

The genetic algorithm can be used in computer games to create artificial intelligence (the computer plays against you). This allows for a more realistic game experience: if a human player can find a sequence of steps that always leads to success, even when repeated in different games, no challenge is left. Conversely, if a learning technique such as a genetic algorithm lets the computer strategist avoid repeating past mistakes, the game has more playability.

Genetic algorithms require the following parts:

  • A method for representing the challenge in terms of the solution (e.g. routing soldiers in an attack in a strategy game).
  • A fitness or evaluation function to determine the quality of an instance (e.g. a measurement of the damage done to an opponent in such an attack).

The fitness function accepts a mutated instantiation of an entity and measures its quality; the function is customized to the problem domain. In many cases, particularly those involving code optimization, the fitness function may simply be a system timing function. Once a genetic representation and a fitness function are defined, a genetic algorithm instantiates initial candidates as described before, and then improves them through repeated application of mutation, crossover, inversion and selection operators (as defined according to the problem domain).
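
A hypothetical sketch of those two parts for the strategy-game example (the encoding, the simulate_attack stand-in and all numbers are invented for illustration; a real game would supply its own simulation):

  import random

  # Representation: one gene per soldier, an "aggression level" from 0 to 9.
  def random_plan(soldiers=5):
      return [random.randint(0, 9) for _ in range(soldiers)]

  def simulate_attack(plan):
      """Stand-in for the game engine: returns (damage dealt, soldiers lost)."""
      damage = sum(plan)  # toy rule: more aggression, more damage
      losses = sum(1 for aggression in plan if aggression > 7)  # toy rule: recklessness costs lives
      return damage, losses

  def fitness(plan):
      """Score an attack plan: more damage is better, losing soldiers is worse."""
      damage, losses = simulate_attack(plan)
      return damage - 10 * losses

  plan = random_plan()
  print(plan, fitness(plan))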

Limitations

Compared to alternative optimization algorithms, genetic algorithms have several limitations:

  • Repeated fitness function evaluation for complex problems is often the most limiting factor of artificial evolutionary algorithms. Finding the optimal solution to a complex problem often requires very expensive fitness function evaluations. In real-world problems, such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such problems. It is often necessary to use approximation, as calculating the exact solution takes too long. Genetic algorithms sometimes combine different approximation models to solve complex real-life problems.
  • Genetic algorithms do not scale well. That is, when the number of elements exposed to mutation is large, there is often an exponential increase in the size of the search space. This makes it extremely difficult to use the technique on problems such as designing an engine, a house or a plane. To use genetic algorithms with such problems, they must be broken down into the simplest representation possible. For this reason, we see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. A second problem of complexity is how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.
  • The "better" solution is only in comparison to other solutions. As a result, the stop criterion is not clear in every problem.
  • In many problems, genetic algorithms have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that the algorithm does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness function: certain problems make it easy to find the global optimum, while others make it easier for the algorithm to find local optima. This problem may be lessened by using a different fitness function, increasing the rate of mutation, or using selection techniques that maintain a diverse population of solutions.[7] A common technique to maintain diversity is to impose a "niche penalty": any group of sufficiently similar individuals (within a niche radius) has a penalty added, which reduces the representation of that group in following generations and lets other (less similar) individuals be kept in the population; a small sketch of this idea appears after this list. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique is to simply replace part of the population with randomly generated individuals when most of the population is too similar. Diversity is important in genetic algorithms (and genetic programming) because crossing over a homogeneous population does not yield new solutions. In evolution strategies and evolutionary programming, diversity is not essential because of a greater reliance on mutation.
  • Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to address this by increasing genetic diversity and preventing early convergence: either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Evolution strategies and evolutionary programming can also be implemented with a so-called "comma strategy", in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems.
  • Genetic algorithms cannot effectively solve problems in which the only fitness measure is a single right/wrong measure (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.
  • For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, swarm intelligence (e.g. ant colony optimization, particle swarm optimization) and methods based on integer linear programming. The suitability of genetic algorithms depends on the amount of knowledge about the problem; well-known problems often have better, more specialized approaches.
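
As mentioned in the list above, here is a minimal sketch of the niche-penalty idea (often called fitness sharing): individuals closer than a chosen radius count as one niche, and raw fitness is divided by the size of that niche, so crowded regions of the search space are penalized. The distance measure, radius and fitness function are illustrative choices.

  NICHE_RADIUS = 2

  def hamming(a, b):
      """Distance between two bit strings: the number of differing positions."""
      return sum(x != y for x, y in zip(a, b))

  def shared_fitness(individual, population, raw_fitness):
      """Divide raw fitness by the number of near-identical neighbours (including self)."""
      niche_size = sum(1 for other in population
                       if hamming(individual, other) <= NICHE_RADIUS)
      return raw_fitness(individual) / niche_size

  def ones(bits):
      """Toy raw fitness: count the 1-bits."""
      return sum(bits)

  population = [[1, 1, 1, 0], [1, 1, 1, 1], [0, 0, 0, 1]]
  for ind in population:
      print(ind, shared_fitness(ind, population, ones))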

History

In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution.[8] Computer simulation of evolution started as early as 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey.[9][10] His 1954 publication was not widely noticed. Starting in 1957,[11] the Australian quantitative geneticist Alex Fraser published a series of papers on the simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)[12] and Crosby (1973).[13] Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms.[14] Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998).[15]

Although Barricelli, in work he reported in 1963, had simulated the evolution of the ability to play a simple game,[16] artificial evolution became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s; Rechenberg's group was able to solve complex engineering problems through evolution strategies.[17][18][19][20] Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines to predict environments, and used variation and selection to optimize the predictive logic. Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's schema theorem. Research in genetic algorithms remained largely theoretical until the mid-1980s, when the First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania.

References

  1. Herrera, F.; Lozano, M.; Verdegay, J. L. (1998). "Tackling real-coded genetic algorithms: Operators and tools for behavioural analysis". Artificial Intelligence Review. 12 (4): 265–319.
  2. Lucas, S.; Kendall, G. (February 2006). "Evolutionary computation and games". IEEE Computational Intelligence Magazine: 10–18.
  3. Yao, X. "Recent new development in evolutionary programming".
  4. Samuel, A. L. (1995). "Some studies in machine learning using the game of checkers". 71–105.
  5. Müller, M. (2002). "Computer Go". Artificial Intelligence. 134 (1–2): 145–179.
  6. Bouzy, B.; Cazenave, T. (2001). "Computer Go: an AI oriented survey". Artificial Intelligence. 132: 39–103.
  1. Tomoiagă B, Chindriş M, Sumper A, Sudria-Andreu A, Villafafila-Robles R. Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II. Energies. 2013; 6(3):1439-1455.
  2. Gross, Bill. "A solar energy system that tracks the sun". TED. Retrieved 20 November 2013.
  3. Hornby, G. S.; Linden, D. S.; Lohn, J. D., Automated Antenna Design with Evolutionary Algorithms (PDF)
  4. "Flexible Muscle-Based Locomotion for Bipedal Creatures".
  5. Evans, B.; Walton, S.P. (December 2017). "Aerodynamic optimisation of a hypersonic reentry vehicle based on solution of the Boltzmann–BGK equation and evolutionary optimisation". Applied Mathematical Modelling. 52: 215–240. doi:10.1016/j.apm.2017.07.024. ISSN 0307-904X.
  6. Skiena, Steven (2010). The Algorithm Design Manual (2nd ed.). Springer Science+Business Media. ISBN 978-1-849-96720-4.
  7. Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad Hadi (19 November 2012). "An efficient algorithm for function optimization: modified stem cells algorithm". Central European Journal of Engineering. 3 (1): 36–50. doi:10.2478/s13531-012-0047-8. S2CID 108711765.
  8. Turing, Alan M. (October 1950). "Computing machinery and intelligence". Mind. LIX (236): 433–460. doi:10.1093/mind/LIX.236.433.
  9. Barricelli, Nils Aall (1954). "Esempi numerici di processi di evoluzione". Methodos: 45–68.
  10. Barricelli, Nils Aall (1957). "Symbiogenetic evolution processes realized by artificial methods". Methodos: 143–182.
  11. Fraser, Alex (1957). "Simulation of genetic systems by automatic digital computers. I. Introduction". Aust. J. Biol. Sci. 10 (4): 484–491. doi:10.1071/BI9570484.
  12. Fraser, Alex; Burnell, Donald (1970). Computer Models in Genetics. New York: McGraw-Hill. ISBN 978-0-07-021904-5.
  13. Crosby, Jack L. (1973). Computer Simulation in Genetics. London: John Wiley & Sons. ISBN 978-0-471-18880-3.
  14. "Hans Bremermann, professor emeritus and pioneer in mathematical biology, has died at 69" (obituary). University of California, Berkeley. 27 February 1996.
  15. Fogel, David B., ed. (1998). Evolutionary Computation: The Fossil Record. New York: IEEE Press. ISBN 978-0-7803-3481-6.
  16. Barricelli, Nils Aall (1963). "Numerical testing of evolution theories. Part II. Preliminary tests of performance, symbiogenesis and terrestrial life". Acta Biotheoretica. 16 (3–4): 99–126. doi:10.1007/BF01556602. S2CID 86717105.
  17. Rechenberg, Ingo (1973). Evolutionsstrategie. Stuttgart: Frommann-Holzboog. ISBN 978-3-7728-0373-4.
  18. Schwefel, Hans-Paul (1974). Numerische Optimierung von Computer-Modellen (PhD thesis).
  19. Schwefel, Hans-Paul (1977). Numerische Optimierung von Computor-Modellen mittels der Evolutionsstrategie : mit einer vergleichenden Einführung in die Hill-Climbing- und Zufallsstrategie. Basel; Stuttgart: Birkhäuser. ISBN 978-3-7643-0876-6.
  20. Schwefel, Hans-Paul (1981). Numerical optimization of computer models (Translation of 1977 Numerische Optimierung von Computor-Modellen mittels der Evolutionsstrategie). Chichester ; New York: Wiley. ISBN 978-0-471-09988-8.