Techniques of "blank slate" automated learning generally require considerable computational resources to solve non-trivial problems of interest. As they say, "computer time is the mother's milk of automated learning."
Increases in computing power can be realized in two ways: either by using a faster computer or by parallelizing the application. The first approach is aided by the fact that computer speeds have approximately doubled every 18 months in accordance with Moore’s law and are expected to continue to do so. The second approach (i.e., parallelization) is available for applications that can be parallelized efficiently. Genetic algorithms, genetic programming, and other techniques of evolutionary computation are highly amenable to parallelization (at essentially 100% efficiency).
A run of genetic programming begins with the initial creation of the individuals in the population. Then, on each generation of the run, the fitness of each individual in the population is evaluated; individuals are then selected (probabilistically, based on fitness) to participate in the genetic operations (e.g., reproduction, crossover, mutation, and architecture-altering operations). These three steps (fitness evaluation, Darwinian selection, and genetic operations) are iteratively performed over many generations until the termination criterion for the run is satisfied. Typically, the best single individual obtained during the run is designated as the result of the run.
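For concreteness, this generational loop can be sketched in a few lines of Python. Everything here (random_individual, fitness, select, vary, and the population size) is an illustrative placeholder standing in for a real genetic programming system, not code from any particular implementation:

    import random

    POP_SIZE, MAX_GENERATIONS = 500, 50

    def random_individual():
        # Placeholder: a real GP system would create a random program tree here.
        return [random.uniform(-1, 1) for _ in range(10)]

    def fitness(ind):
        # Placeholder fitness measure (higher is better).
        return -sum(x * x for x in ind)

    def select(population, scores):
        # One simple stand-in for Darwinian selection: rank-weighted
        # probabilistic sampling of a single parent.
        ranked = sorted(zip(scores, population), key=lambda pair: pair[0])
        weights = range(1, len(ranked) + 1)          # best gets the largest weight
        return random.choices([ind for _, ind in ranked], weights=weights, k=1)[0]

    def vary(parent1, parent2):
        # Placeholder genetic operations: one-point crossover plus point mutation.
        cut = random.randrange(1, len(parent1))
        child = parent1[:cut] + parent2[cut:]
        child[random.randrange(len(child))] += random.gauss(0, 0.1)
        return child

    population = [random_individual() for _ in range(POP_SIZE)]   # one-time creation
    best_score, best = float("-inf"), None
    for generation in range(MAX_GENERATIONS):
        scores = [fitness(ind) for ind in population]             # fitness evaluation
        gen_best_score = max(scores)
        if gen_best_score > best_score:                           # track best-of-run
            best_score = gen_best_score
            best = population[scores.index(gen_best_score)]
        population = [vary(select(population, scores),            # Darwinian selection
                           select(population, scores))            # + genetic operations
                      for _ in range(POP_SIZE)]
    print("best-of-run fitness:", best_score)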
In computer runs of genetic programming on most categories of problems, relatively little computer time is expended on the one-time task of creating the initial population at the beginning of the run, and relatively little computer time is expended on the Darwinian selection and genetic operations in each generation. Instead, the task of measuring the fitness of each individual in each generation of the evolving population is, for most categories of problems, the dominant component of the computational burden in solving non-trivial problems using evolutionary algorithms. These observations give rise to the most commonly used approach to parallelizing genetic programming, namely the asynchronous "island" approach.
In the "island" approach to parallelization of genetic programming, the population for a given run is divided into semi-isolated subpopulations (called demes). Each subpopulation is assigned to a separate processor of the parallel computing system. The run begins with the one-time random creation of a separate population of individuals at each processor of the parallel computer system. A typical run of genetic programming on today's computers might entail randomly creating between perhaps 500 and 30,000 individuals at each processor. This process of initial random creation takes place in parallel at each processor. As soon as each separate processor finishes this one-time task, it begins the main generational loop.
In the main generational loop of genetic programming, the task of measuring the fitness of each individual is first performed locally at each processor. Then, the Darwinian selection step is performed locally at each processor. Finally, the genetic operations are performed locally at each processor. The processors operate asynchronously in the sense that each generation starts and ends independently at each processor. Because each of these tasks is performed independently at each processor and because the processors are not synchronized, this asynchronous island approach to parallelization efficiently uses all the processing power of each processor.
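Continuing the sketch (and reusing the placeholder fitness, select, and vary functions from the serial loop above), each node runs its own copy of a loop like the following; migration is grafted onto this loop in the sketches below:

    def run_node(node_id, deme, generations):
        # Each node executes this loop on its own clock.  Nothing here
        # blocks on any other node, so generations start and end
        # independently, and a fast node simply completes more generations.
        for generation in range(generations):
            scores = [fitness(ind) for ind in deme]    # local fitness evaluation
            deme = [vary(select(deme, scores),         # local Darwinian selection
                         select(deme, scores))         # and genetic operations
                    for _ in range(len(deme))]
        return deme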
Upon completion of a generation (or other interval), a relatively small percentage of the individuals in each subpopulation are probabilistically selected (based on fitness) for emigration from each processor to various neighboring processors. The processors are typically arranged in a rectangular toroidal topology, so that each processor has four neighbors (North, East, South, West). Emigrants are exported to the neighboring processors, where they wait in an "importer" buffer of the destination processor until the destination processor is ready to assimilate its accumulating immigrants. This assimilation typically occurs just after the processor has exported its own emigrants at the end of its generation. The immigrants are typically inserted into the subpopulation at the destination processor in lieu of the just-departed emigrants of that processor's subpopulation. The overall iterative process proceeds asynchronously from generation to generation on each separate processor. Experience indicates (Andre and Koza 1996; Koza, Bennett, Andre, and Keane 1999) that a modest amount of migration (say 2% of a processor's population, in each of four directions) is better than extremely high or extremely low amounts of migration. Thus, 8% of the individuals from a given processor migrate to neighboring processors on each generation.
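The topology and the selection of emigrants can be sketched as follows; the 25 x 40 grid is merely one way of arranging 1,000 nodes as a rectangular torus, and none of the names below come from any actual implementation:

    import random

    ROWS, COLS = 25, 40      # e.g., 1,000 nodes arranged as a rectangular torus
    MIGRATION_RATE = 0.02    # 2% of the deme emigrates in each of four directions

    def neighbors(node_id):
        # North, East, South, and West neighbors on the torus; the modulo
        # arithmetic wraps the edges, so every node has exactly four neighbors.
        row, col = divmod(node_id, COLS)
        return [((row - 1) % ROWS) * COLS + col,     # North
                row * COLS + (col + 1) % COLS,       # East
                ((row + 1) % ROWS) * COLS + col,     # South
                row * COLS + (col - 1) % COLS]       # West

    def pick_emigrants(deme, scores):
        # Probabilistically select (based on fitness) one boatload of
        # emigrants for each of the four directions: 4 x 2% = 8% in all.
        per_direction = max(1, int(MIGRATION_RATE * len(deme)))
        low = min(scores)
        weights = [s - low + 1e-9 for s in scores]   # shift weights above zero
        return [random.choices(deme, weights=weights, k=per_direction)
                for _ in range(4)]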
The inter-processor communication requirements of migration are low because only a modest number of individuals migrate during each generation and because each migration is separated by a substantially longer period of fitness evaluation, Darwinian selection, and genetic operations. A generation in a run of parallel genetic programming may be of any length, but typically takes about 10 minutes to an hour. If a generation takes, say, 10 minutes, about 144 generations (six 10-minute generations per hour for 24 hours) can be run in a day.
This asynchronous island model for parallelization delivers an overall increase in the total amount of work performed that is nearly linear in the number of processors. That is, nearly 100% efficiency is routinely realized when genetic programming is run on a parallel computer system using the asynchronous island model for parallelization. This near-100% efficiency is in marked contrast to the efficiency achieved in parallelizing the vast majority of computer calculations.
There are several guiding principles in parallel genetic programming.
One important guiding principle in implementing parallel genetic programming is to fully utilize the computing power of each processor at all times. Thus, each processor immediately (and separately) begins its main generational loop as soon as it finishes its initial random creation of individuals at the beginning of the run. There is no synchronization of generations between processors. Similarly, each processor moves on to its next generation regardless of the status of any other processor. Less than 1% of each processor's cycles is consumed by the low-bandwidth migration and communication tasks of the parallel genetic programming algorithm.
A related principle is that a processor should never wait on any other event in the system. For example, if immigrants have not been received from all four of a processor's neighbors by the time the processor is ready to assimilate immigrants, the processor does not wait for the missing immigrants. It simply assimilates whatever immigrants are in its buffer and moves on. The deficiency in immigrants is made up with randomly chosen copies of the processor's just-departed emigrants. Similarly, if a processor receives two groups of immigrants from a particular neighboring processor before it is ready to assimilate immigrants, the more recent group of incoming immigrants overwrites the previously received group from that neighbor (on the principle that later immigrants from a particular subpopulation are, in general, better than earlier immigrants from that same subpopulation).
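This never-wait policy can be sketched as follows, building on the neighbors function above (the buffer layout and the function names are illustrative assumptions, not code from any actual system):

    import random

    # One buffer slot per (destination, source) pair.  There is no queue:
    # a later boatload from a given neighbor overwrites an earlier one that
    # has not yet been assimilated.
    import_buffer = {}

    def deliver(dest_id, source_id, immigrants):
        # Called when a migration message arrives; no acknowledgment is sent.
        import_buffer[(dest_id, source_id)] = immigrants

    def assimilate(my_id, deme, departed_emigrants):
        # Take whatever immigrants happen to have arrived; never wait for
        # missing boatloads from slow or failed neighbors.
        incoming = []
        for nbr in neighbors(my_id):
            incoming.extend(import_buffer.pop((my_id, nbr), []))
        # Make up any deficiency with randomly chosen copies of the
        # just-departed emigrants (a list of four boatloads, as returned
        # by pick_emigrants above).
        expected = sum(len(boatload) for boatload in departed_emigrants)
        while len(incoming) < expected:
            incoming.append(random.choice(random.choice(departed_emigrants)))
        # Insert the immigrants in lieu of the just-departed emigrants
        # (here, simply at the front of the deme).
        return incoming + deme[len(incoming):]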
The entire system is designed to be highly fault-tolerant. No processor is considered essential to the run. No acknowledgment is required for any message, including the messages containing emigrants from one processor to another. If a processor fails (for software reasons or because of an occasional hardware failure), the boatload of emigrants exported to that processor is simply lost at sea and the run continues.
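The no-acknowledgment design can be illustrated with a fire-and-forget UDP send. This is a sketch only: the actual transport used on the cluster is not described here, and the host and port arguments, as well as the assumption that a boatload fits in a single datagram, are illustrative:

    import pickle, socket

    def send_boatload(host, port, emigrants):
        # Fire-and-forget: the emigrants go out as a single UDP datagram and
        # no acknowledgment is expected.  If the destination node is down,
        # the boatload is simply lost at sea and the run continues.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(pickle.dumps(emigrants), (host, port))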
Recently, we decided to rearrange the Ethernet cables between the 1,000 processors. We did so during a "live" run. The connecting and disconnecting of wires did not affect the successful completion of the run. Power cords were also disconnected and reconnected during this rewiring. When a new start-up message was provided to all nodes, the newly reconnected nodes came to life and proceeded to participate in the run. After just one generation, they received emigrants from their neighbors (representing the better individuals of the neighboring processors). Darwinian selection immediately favored the newly arrived immigrants (who were, in general, superior to the random individuals on the recently restarted processors), and the genetic operations were consequently performed primarily on the newly arrived immigrants.
Many researchers have noted that, for many problems, genetic programming solves a problem faster with semi-isolated subpopulations and occasional migration than would otherwise be the case. That is, the performance of genetic programming is often enhanced by the use of the island model of parallelization. Thus, it is often said that parallel genetic programming delivers a super-linear speed-up in terms of the computational effort required to yield a solution (recognizing, of course, that the benefit of semi-isolated subpopulations can also be simulated on a serial computer). In any event, the island model of parallelization is an effective way to make runs of genetic programming.
Click here for a description of the 1,000-Pentium Beowulf-style cluster computer for implementing the "island" approach to parallel genetic programming.
Additional information on genetic programming can be found in the book Genetic Programming III: Darwinian Invention and Problem Solving and its accompanying videotape. Visit www.genetic-programming.org for citations and links to numerous other authored books, edited collections of papers, conference proceedings, and individual published papers concerning genetic programming.
Bennett, Forrest H, III, Koza, John R., Shipman, James, and Stiffelman, Oscar. 1999. Building a parallel computer system for $18,000 that performs a half peta-flop per day. In Banzhaf, Wolfgang, Daida, Jason, Eiben, A. E., Garzon, Max H., Honavar, Vasant, Jakiela, Mark, and Smith, Robert E. (editors). GECCO-99: Proceedings of the Genetic and Evolutionary Computation Conference, July 13–17, 1999, Orlando, Florida USA. San Francisco, CA: Morgan Kaufmann. Pages 1484–1490.
Techniques of evolutionary computation generally require significant computational resources to solve non-trivial problems of interest. Increases in computing power can be realized either by using a faster computer or by parallelizing the application. Techniques of evolutionary computation are especially amenable to parallelization. This paper describes how to build a 10-node Beowulf-style parallel computer system for $18,000 that delivers about a half peta-flop (10^15 floating-point operations) per day on runs of genetic programming. Each of the 10 nodes of the system contains a 533 MHz Alpha processor and runs the GNU/Linux operating system. This amount of computational power is sufficient to yield solutions (within a couple of days per problem) to 14 published problems where genetic programming has produced results that are competitive with human-produced results.
Click here for a Postscript copy of this GECCO-99 conference paper on "Building a parallel computer."
· The home page of Genetic Programming Inc. at www.genetic-programming.com.
· For information about the field of genetic programming in general, visit www.genetic-programming.org
· The home page of John R. Koza at Genetic Programming Inc. (including online versions of most papers) and the home page of John R. Koza at Stanford University
· Information about the 1992 book Genetic Programming: On the Programming of Computers by Means of Natural Selection, the 1994 book Genetic Programming II: Automatic Discovery of Reusable Programs, the 1999 book Genetic Programming III: Darwinian Invention and Problem Solving, and the 2003 book Genetic Programming IV: Routine Human-Competitive Machine Intelligence. Click here to read chapter 1 of the Genetic Programming IV book in PDF format.
· For information on 3,198 papers (many on-line) on genetic programming (as of June 27, 2003) by over 900 authors, see William Langdon's bibliography on genetic programming.
· For information on the Genetic Programming and Evolvable Machines journal published by Kluwer Academic Publishers
· For information on the Genetic Programming book series from Kluwer Academic Publishers, see the Call For Book Proposals
· For information about the annual Genetic and Evolutionary Computation (GECCO) conference (which includes the annual GP conference) to be held on June 26–30, 2004 (Saturday–Wednesday) in Seattle and its sponsoring organization, the International Society for Genetic and Evolutionary Computation (ISGEC).
· For information about the annual NASA/DoD Conference on Evolvable Hardware (EH) to be held on June 24–26, 2004 (Thursday–Saturday) in Seattle.
· For information about the annual Euro-Genetic-Programming Conference to be held on April 5–7, 2004 (Monday–Wednesday) at the University of Coimbra in Coimbra, Portugal.
Last updated on August 27, 2003