数模论坛 (Mathematical Modeling Forum)


[For strong English readers] The GENETIC FAQ

Posted on 2004-2-27 01:32:24
The FAQ of the genetic algorithms newsgroup (comp.ai.genetic).
There is a lot of material here, so selective reading is recommended :)
TABLE OF CONTENTS
Part1

     Q0: How about an introduction to comp.ai.genetic?

Part2

     Q1: What are Evolutionary Algorithms (EAs)?
     Q1.1: What's a Genetic Algorithm (GA)?
     Q1.2: What's Evolutionary Programming (EP)?
     Q1.3: What's an Evolution Strategy (ES)?
     Q1.4: What's a Classifier System (CFS)?
     Q1.5: What's Genetic Programming (GP)?

Part3

     Q2: What applications of EAs are there?

     Q3: Who is concerned with EAs?

     Q4: How many EAs exist? Which?
     Q4.1: What about Alife systems, like Tierra and VENUS?

     Q5: What about all this Optimization stuff?

Part4

     Q10: What introductory material on EAs is there?
     Q10.1: Suitable background reading for beginners?
     Q10.2: Textbooks on EC?
     Q10.3: The Classics?
     Q10.4: Introductory Journal Articles?
     Q10.5: Introductory Technical Reports?
     Q10.6: Not-quite-so-introductory Literature?
     Q10.7: Biological Background Readings?
     Q10.8: On-line bibliography collections?
     Q10.9: Videos?
     Q10.10: CD-ROMs?
     Q10.11: How do I get a copy of a dissertation?

     Q11: What EC related journals and magazines are there?

     Q12: What are the important conferences/proceedings on EC?

     Q13: What Evolutionary Computation Associations exist?

     Q14: What Technical Reports are available?

     Q15: What information is available over the net?
     Q15.1: What digests are there?
     Q15.2: What mailing lists are there?
     Q15.3: What online information repositories are there?
     Q15.4: What relevant newsgroups and FAQs are there?
     Q15.5: What about all these Internet Services?

Part5

     Q20: What EA software packages are available?
     Q20.1: Free software packages?
     Q20.2: Commercial software packages?
     Q20.3: Current research projects?

Part6

     Q21: What are Gray codes, and why are they used?

     Q22: What test data is available?

     Q42: What is Life all about?
     Q42b: Is there a FAQ to this group?

     Q98: Are there any patents on EAs?

     Q99: A Glossary on EAs?

----------------------------------------------------------------------



OP | Posted on 2004-2-27 01:36:10
Subject: Q1: What are Evolutionary Algorithms (EAs)?

     Evolutionary algorithm is an umbrella term used to describe computer-
     based problem solving systems which use computational models of  some
     of  the known mechanisms of EVOLUTION as key elements in their design
     and implementation. A variety of EVOLUTIONARY  ALGORITHMs  have  been
     proposed.   The  major  ones  are:  GENETIC  ALGORITHMs  (see  Q1.1),
     EVOLUTIONARY PROGRAMMING (see Q1.2), EVOLUTION STRATEGIEs (see Q1.3),
     CLASSIFIER  SYSTEMs  (see  Q1.4), and GENETIC PROGRAMMING (see Q1.5).
     They all share a common conceptual base of simulating  the  evolution
     of  INDIVIDUAL  structures  via processes of SELECTION, MUTATION, and
     REPRODUCTION.  The processes depend on the perceived  PERFORMANCE  of
     the individual structures as defined by an ENVIRONMENT.

     More  precisely, EAs maintain a POPULATION of structures, that evolve
     according to  rules  of  selection  and  other  operators,  that  are
     referred  to  as  "search operators", (or GENETIC OPERATORs), such as
     RECOMBINATION  and  mutation.  Each  individual  in  the   population
     receives  a  measure  of its FITNESS in the environment. Reproduction
     focuses attention on high fitness individuals, thus  exploiting  (cf.
     EXPLOITATION)  the  available fitness information.  Recombination and
     mutation perturb those individuals, providing general heuristics  for
     EXPLORATION.  Although simplistic from a biologist's viewpoint, these
     algorithms are sufficiently complex to provide  robust  and  powerful
     adaptive search mechanisms.

     --- "An Overview of Evolutionary Computation" [ECML93], 442-459.

BIOLOGICAL BASIS
     To  understand  EAs, it is necessary to have some appreciation of the
     biological processes on which they are based.

     Firstly, we should note that EVOLUTION (in nature or  anywhere  else)
     is  not  a  purposive  or  directed  process.   That  is, there is no
     evidence to support the assertion that the goal of  evolution  is  to
     produce Mankind. Indeed, the processes of nature seem to boil down to
     a haphazard GENERATION of biologically diverse  organisms.   Some  of
     evolution is determined by natural SELECTION or different INDIVIDUALs
     competing for resources in the ENVIRONMENT.   Some  are  better  than
     others.  Those  that  are  better  are  more  likely  to  survive and
     propagate their genetic material.

     In nature, we see that the encoding for genetic information  (GENOME)
     is   done  in  a  way  that  admits  asexual  REPRODUCTION.   Asexual
     reproduction typically results  in  OFFSPRING  that  are  genetically
     identical  to  the  PARENT.   (Large  numbers  of organisms reproduce
     asexually; this includes most bacteria which some biologists hold  to
     be the most successful SPECIES known.)

     Sexual  reproduction  allows some shuffling of CHROMOSOMEs, producing
     offspring that contain a combination of information from each parent.
     At  the  molecular level what occurs (wild oversimplification alert!)
     is that a pair of almost identical chromosomes bump into one another,
     exchange  chunks  of genetic information and drift apart. This is the
     RECOMBINATION operation, which is  often  referred  to  as  CROSSOVER
     because   of  the  way  that  biologists  have  observed  strands  of
     chromosomes crossing over during the exchange.

     Recombination happens in an environment where the  selection  of  who
     gets  to mate is largely a function of the FITNESS of the individual,
     i.e. how good the individual is at competing in its environment. Some
     "luck" (random effect) is usually involved too. Some EAs use a simple
     function   of   the   fitness   measure   to    select    individuals
     (probabilistically)  to  undergo genetic operations such as crossover
     or  asexual  reproduction  (the  propagation  of   genetic   material
     unaltered).    This   is   fitness-proportionate   selection.   Other
     implementations use  a  model  in  which  certain  randomly  selected
     individuals  in  a subgroup compete and the fittest is selected. This
     is called tournament selection and is the form of selection we see in
     nature  when stags rut to vie for the privilege of mating with a herd
     of hinds.
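The two selection schemes just described can be sketched in a few lines of Python (the bitstring population and the one-count fitness are illustrative assumptions, not part of the FAQ):

```python
import random

def fitness_proportionate(population, fitness, k):
    """Roulette-wheel selection: pick k individuals with probability
    proportional to fitness (assumes non-negative fitness values)."""
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=k)

def tournament(population, fitness, k, size=2):
    """Tournament selection: each pick is the fittest of a random subgroup."""
    return [max(random.sample(population, size), key=fitness)
            for _ in range(k)]

# toy population of bitstrings, fitness = number of 1-bits
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
parents = tournament(pop, sum, k=20, size=3)
```

Raising the tournament `size` hardens the selection pressure; with `size=1` the scheme degenerates to uniform random picking.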

     Much EA research  has  assumed  that  the  two  processes  that  most
     contribute   to   evolution   are   crossover   and   fitness   based
     selection/reproduction.    Evolution,   by   definition,   absolutely
     requires  diversity in order to work.  In nature, an important source
     of diversity is MUTATION.  In an EA, a large amount of  diversity  is
     usually  introduced at the start of the algorithm, by randomising the
     GENEs  in  the  POPULATION.   The  importance  of   mutation,   which
     introduces   further   diversity  while  the  algorithm  is  running,
     therefore continues to be a matter of debate. Some refer to it  as  a
     background  operator, simply replacing some of the original diversity
     which may have been  lost,  while  others  view  it  as  playing  the
     dominant role in the evolutionary process.

     It cannot be stressed too strongly that an EVOLUTIONARY ALGORITHM (as
     a SIMULATION of a genetic process) is  not  a  random  search  for  a
     solution  to  a  problem (highly fit individual).  EAs use stochastic
     processes, but the  result  is  distinctly  non-random  (better  than
     random).
PSEUDO CODE
     Algorithm EA is

          // start with an initial time
          t := 0;

          // initialize a usually random population of individuals
          initpopulation P (t);

          // evaluate fitness of all initial individuals in population
          evaluate P (t);

          // test for termination criterion (time, fitness, etc.)
          while not done do

               // increase the time counter
               t := t + 1;

               // select sub-population for offspring production
               P' := selectparents P (t);

               // recombine the "genes" of selected parents
               recombine P' (t);

               // perturb the mated population stochastically
               mutate P' (t);

               // evaluate its new fitness
               evaluate P' (t);

               // select the survivors from actual fitness
               P := survive P,P' (t);
          od
     end EA.
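The pseudo code above maps almost line for line onto a minimal runnable sketch (Python; the ONEMAX bitstring problem, the operator choices, and all parameter values are illustrative assumptions):

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 50

def fitness(ind):                      # toy "environment": count the 1-bits
    return sum(ind)

def init_population():
    return [[random.randint(0, 1) for _ in range(GENOME_LEN)]
            for _ in range(POP_SIZE)]

def select_parents(pop):               # binary tournament selection
    return [max(random.sample(pop, 2), key=fitness) for _ in pop]

def recombine(parents):                # one-point crossover on pairs
    offspring = []
    for a, b in zip(parents[::2], parents[1::2]):
        cut = random.randrange(1, GENOME_LEN)
        offspring += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
    return offspring

def mutate(pop, rate=0.01):            # bit-flip mutation
    return [[g ^ 1 if random.random() < rate else g for g in ind]
            for ind in pop]

def survive(old, new):                 # keep the POP_SIZE fittest overall
    return sorted(old + new, key=fitness, reverse=True)[:POP_SIZE]

population = init_population()
for t in range(GENERATIONS):
    offspring = mutate(recombine(select_parents(population)))
    population = survive(population, offspring)

best = max(population, key=fitness)
```

With these placeholder operators the loop reliably climbs toward the all-ones string; swapping in a real encoding and fitness function is all that changes.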
OP | Posted on 2004-2-27 01:41:22
Subject: Q1.2: What's Evolutionary Programming (EP)?

  Introduction
     EVOLUTIONARY  PROGRAMMING, originally conceived by Lawrence J.  Fogel
     in 1960, is a stochastic OPTIMIZATION  strategy  similar  to  GENETIC
     ALGORITHMs,  but  instead  places  emphasis on the behavioral linkage
     between PARENTs and their OFFSPRING, rather than seeking  to  emulate
     specific  GENETIC  OPERATORs  as  observed  in  nature.  Evolutionary
     programming is similar to  EVOLUTION  STRATEGIEs,  although  the  two
     approaches developed independently (see below).

     Like  both  ES  and  GAs,  EP is a useful method of optimization when
     other techniques such  as  gradient  descent  or  direct,  analytical
     discovery  are  not  possible.  Combinatoric and real-valued FUNCTION
     OPTIMIZATION in which the optimization surface or  FITNESS  landscape
     is  "rugged",  possessing  many  locally  optimal solutions, are well
     suited for evolutionary programming.

  History
     The 1966 book, "Artificial Intelligence Through Simulated  Evolution"
     by  Fogel,  Owens  and  Walsh  is  the  landmark  publication  for EP
     applications, although  many  other  papers  appear  earlier  in  the
     literature.   In  the  book,  finite  state  automata were evolved to
     predict symbol strings  generated  from  Markov  processes  and  non-
     stationary  time  series.  Such evolutionary prediction was motivated
     by a  recognition  that  prediction  is  a  keystone  to  intelligent
     behavior  (defined  in  terms  of  adaptive  behavior,  in  that  the
     intelligent  organism  must  anticipate  events  in  order  to  adapt
     behavior in light of a goal).

     In  1992, the First Annual Conference on evolutionary programming was
     held in La Jolla, CA.  Further conferences have  been  held  annually
     (See  Q12).   The  conferences  attract  a diverse group of academic,
     commercial and military researchers engaged in  both  developing  the
     theory  of  the  EP  technique  and in applying EP to a wide range of
     optimization problems, both in engineering and biology.

     Rather  than  list  and  analyze  the  sources  in  detail,   several
     fundamental  sources  are  listed  below  which  should serve as good
     pointers to the entire body of work in the field.

  The Process
     For EP, like GAs, there is an underlying assumption  that  a  fitness
     landscape  can be characterized in terms of variables, and that there
     is an optimum solution (or multiple such optima) in  terms  of  those
     variables.  For example, if one were trying to find the shortest path
     in a Traveling Salesman Problem, each solution would be a path.   The
     length  of the path could be expressed as a number, which would serve
     as the solution's fitness.  The fitness landscape  for  this  problem
     could  be  characterized  as  a hypersurface proportional to the path
     lengths in a space of possible paths.  The goal would be to find  the
     globally  shortest  path  in that space, or more practically, to find
     very short tours very quickly.

     The basic EP method involves 3 steps (Repeat until  a  threshold  for
     iteration is exceeded or an adequate solution is obtained):

     (1)  Choose  an  initial POPULATION of trial solutions at random. The
          number of solutions in a population is highly  relevant  to  the
          speed  of optimization, but no definite answers are available as
          to how many solutions are appropriate (other than  >1)  and  how
          many solutions are just wasteful.

     (2)  Each  solution  is  replicated  into  a new population.  Each of
           these  offspring  solutions   is   mutated   according   to   a
          distribution  of  MUTATION  types, ranging from minor to extreme
          with a continuum of mutation types  between.   The  severity  of
          mutation is judged on the basis of the functional change imposed
          on the parents.

     (3)  Each offspring solution is assessed by  computing  its  fitness.
          Typically,  a  stochastic  tournament  is  held  to  determine N
          solutions to  be  retained  for  the  population  of  solutions,
          although   this  is  occasionally  performed  deterministically.
          There is  no  requirement  that  the  population  size  be  held
          constant, however, nor that only a single offspring be generated
          from each parent.

     It should be pointed out that EP typically does not use any CROSSOVER
     as a genetic operator.

  EP and GAs
     There are two important ways in which EP differs from GAs.

     First,  there is no constraint on the representation.  The typical GA
     approach involves encoding the  problem  solutions  as  a  string  of
     representative tokens, the GENOME.  In EP, the representation follows
     from the problem.  A neural network can be represented  in  the  same
     manner  as  it  is  implemented,  for  example,  because the mutation
     operation does not demand a linear encoding.  (In this  case,  for  a
      fixed topology, real-valued weights could be coded directly as their
     real values and mutation operates by perturbing a weight vector  with
     a   zero  mean  multivariate  Gaussian  perturbation.   For  variable
     topologies, the architecture is also perturbed, often  using  Poisson
     distributed additions and deletions.)

     Second, the mutation operation simply changes aspects of the solution
     according  to  a  statistical  distribution   which   weights   minor
     variations  in  the  behavior of the offspring as highly probable and
     substantial  variations  as  increasingly  unlikely.   Further,   the
     severity  of  mutations  is  often  reduced  as the global optimum is
     approached.  There is a certain tautology here: if the global optimum
     is not already known, how can the spread of the mutation operation be
     damped as the solutions approach it?  Several  techniques  have  been
     proposed  and  implemented  which  address  this difficulty, the most
     widely studied being the "Meta-Evolutionary" technique in  which  the
     variance  of  the  mutation  distribution is subject to mutation by a
     fixed variance mutation operator and evolves along with the solution.
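The self-adaptive idea behind the "Meta-Evolutionary" technique can be illustrated as follows: each individual carries its own mutation spread, which is itself perturbed by a fixed-variance operator before being used (a sketch; the log-normal update and the `TAU` value are common choices assumed here, not prescribed by the text):

```python
import math
import random

TAU = 0.5   # learning rate for the strategy parameter (an assumption)

def meta_mutate(x, sigma):
    """Mutate a (solution, stepsize) pair.

    The stepsize sigma evolves log-normally, then perturbs the solution
    with a zero-mean Gaussian of that spread, so the variance of the
    mutation distribution evolves along with the solution."""
    new_sigma = sigma * math.exp(TAU * random.gauss(0.0, 1.0))
    new_x = [xi + random.gauss(0.0, new_sigma) for xi in x]
    return new_x, new_sigma

child, child_sigma = meta_mutate([1.0, -2.0, 0.5], sigma=0.3)
```

Since `new_sigma` multiplies by `exp(...)`, it stays positive and a doubling or halving of the stepsize are equally probable.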

  EP and ES
     The  first  communication  between  the  evolutionary programming and
     EVOLUTION STRATEGY groups occurred in early 1992, just prior  to  the
     first  annual  EP  conference.  Despite their independent development
     over 30 years, they share many  similarities.   When  implemented  to
     solve  real-valued  function  optimization  problems,  both typically
     operate on the real values themselves (rather than any coding of  the
     real values as is often done in GAs). Multivariate zero mean Gaussian
     mutations are applied to each parent in a population and a  SELECTION
     mechanism  is  applied  to determine which solutions to remove (i.e.,
     "cull") from the population.  The similarities extend to the  use  of
     self-adaptive  methods  for  determining the appropriate mutations to
     use -- methods in which each parent  carries  not  only  a  potential
     solution  to the problem at hand, but also information on how it will
     distribute new trials (offspring). Most of the theoretical results on
     convergence  (both  asymptotic  and  velocity) developed for ES or EP
     also apply directly to the other.

     The main differences between ES and EP are:

     1.   Selection:  EP  typically  uses  stochastic  selection   via   a
          tournament.    Each  trial  solution  in  the  population  faces
          competition  against  a  preselected  number  of  opponents  and
          receives  a  "win"  if it is at least as good as its opponent in
          each encounter.  Selection then eliminates those solutions  with
          the  least  wins.   In contrast, ES typically uses deterministic
          selection in which the  worst  solutions  are  purged  from  the
          population based directly on their function evaluation.

     2.   RECOMBINATION: EP is an abstraction of EVOLUTION at the level of
          reproductive   populations   (i.e.,   SPECIES)   and   thus   no
          recombination    mechanisms    are    typically   used   because
          recombination does not occur between species (by definition: see
          Mayr's  biological  species  concept).   In  contrast,  ES is an
          abstraction of evolution at the level  of  INDIVIDUAL  behavior.
          When  self-adaptive  information  is incorporated this is purely
          genetic information (as opposed to  phenotypic)  and  thus  some
          forms   of  recombination  are  reasonable  and  many  forms  of
          recombination have  been  implemented  within  ES.   Again,  the
          effectiveness  of such operators depends on the problem at hand.

  References
     Some references which provide an excellent introduction (by no  means
     extensive) to the field, include:

     Artificial   Intelligence   Through   Simulated  Evolution  [Fogel66]
     (primary)

     Fogel DB (1995) "Evolutionary Computation: Toward a New Philosophy of
     Machine Intelligence," IEEE Press, Piscataway, NJ.

     Proceeding of the first [EP92], second [EP93] and third [EP94] Annual
     Conference on Evolutionary Programming (primary) (See Q12).

PSEUDO CODE
     Algorithm EP is

          // start with an initial time
          t := 0;

          // initialize a usually random population of individuals
          initpopulation P (t);

          // evaluate fitness of all initial individuals of population
          evaluate P (t);

          // test for termination criterion (time, fitness, etc.)
          while not done do

               // perturb the whole population stochastically
               P'(t) := mutate P (t);

               // evaluate its new fitness
               evaluate P' (t);

               // stochastically select the survivors from actual fitness
               P(t+1) := survive P(t),P'(t);

               // increase the time counter
               t := t + 1;

          od
     end EP.
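A concrete instance of the loop above, minimizing a toy sphere function with one offspring per parent and the stochastic tournament described earlier (all parameter values are illustrative assumptions):

```python
import random

DIM, POP, GENS, OPPONENTS = 3, 20, 200, 5

def f(x):                               # sphere function: minimize sum x_i^2
    return sum(xi * xi for xi in x)

def mutate(x):                          # zero-mean Gaussian perturbation
    return [xi + random.gauss(0.0, 0.1) for xi in x]

population = [[random.uniform(-2, 2) for _ in range(DIM)] for _ in range(POP)]
for t in range(GENS):
    offspring = [mutate(p) for p in population]    # one child per parent
    pool = population + offspring
    # stochastic tournament: each solution counts "wins" against
    # randomly drawn opponents; those with the most wins survive
    wins = [sum(f(c) <= f(random.choice(pool)) for _ in range(OPPONENTS))
            for c in pool]
    ranked = sorted(zip(wins, pool), key=lambda w: -w[0])
    population = [ind for _, ind in ranked[:POP]]

best = min(population, key=f)
```

Note that, in line with the text, no crossover appears anywhere: variation comes from mutation alone.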

     [Eds note: An extended version of this introduction is available from
     ENCORE (see Q15.3) in /FAQ/supplements/what-is-ep ]
OP | Posted on 2004-2-27 01:42:05
Subject: Q1.3: What's an Evolution Strategy (ES)?

     In  1963 two students at the Technical University of Berlin (TUB) met
     and were soon to collaborate  on  experiments  which  used  the  wind
     tunnel  of  the Institute of Flow Engineering.  During the search for
     the optimal shapes of bodies in a flow, which was then  a  matter  of
     laborious  intuitive  experimentation,  the  idea  was  conceived  of
     proceeding strategically.  However, attempts with the coordinate  and
     simple  gradient  strategies  (cf Q5) were unsuccessful.  Then one of
     the  students,  Ingo  Rechenberg,  now  Professor  of   Bionics   and
     Evolutionary  Engineering, hit upon the idea of trying random changes
     in the parameters  defining  the  shape,  following  the  example  of
     natural  MUTATIONs.   The  EVOLUTION  STRATEGY  was  born.   A  third
     student, Peter Bienert, joined them and started the  construction  of
     an  automatic  experimenter, which would work according to the simple
     rules of mutation  and  SELECTION.   The  second  student,  Hans-Paul
     Schwefel,  set  about  testing the efficiency of the new methods with
     the help of a Zuse Z23 computer; for there were plenty of  objections
     to these "random strategies."

     In spite of an occasional lack of financial support, the Evolutionary
     Engineering Group which had been formed held  firmly  together.  Ingo
     Rechenberg  received  his  doctorate  in  1970  (Rechenberg  73).  It
     contains the theory of the two  membered  EVOLUTION  strategy  and  a
     first proposal for a multimembered strategy which in the nomenclature
     introduced here is of the (m+1) type.   In the  same  year  financial
     support  from  the  Deutsche  Forschungsgemeinschaft  (DFG, Germany's
      National Science Foundation) enabled the work, which was  concluded,
     least  temporarily,  in 1974 with the thesis "Evolutionsstrategie und
     numerische Optimierung" (Schwefel 77).

     Thus,  EVOLUTION  STRATEGIEs  were  invented   to   solve   technical
     OPTIMIZATION  problems  (TOPs)  like  e.g.  constructing  an  optimal
     flashing nozzle, and until recently  ES  were  only  known  to  civil
     engineering  folks, as an alternative to standard solutions.  Usually
     no closed form analytical objective function is  available  for  TOPs
     and   hence,  no  applicable  optimization  method  exists,  but  the
     engineer's intuition.

     The first attempts to imitate principles of organic  evolution  on  a
     computer  still resembled the iterative optimization methods known up
     to that time (cf Q5):  In a two-membered  or  (1+1)  ES,  one  PARENT
     generates   one   OFFSPRING   per  GENERATION  by  applying  NORMALLY
     DISTRIBUTED mutations, i.e. smaller steps occur more likely than  big
     ones,  until  a child performs better than its ancestor and takes its
     place. Because of this  simple  structure,  theoretical  results  for
     STEPSIZE control and CONVERGENCE VELOCITY could be derived. The ratio
     between successful and all mutations should  come  to  1/5:  the  so-
     called  1/5  SUCCESS RULE was discovered. This first algorithm, using
     mutation only, has then been  enhanced  to  a  (m+1)  strategy  which
     incorporated  RECOMBINATION  due  to  several,  i.e.  m parents being
     available. The mutation scheme and  the  exogenous  stepsize  control
     were taken across unchanged from  (1+1) ESs.
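A (1+1) ES with the 1/5 success rule can be written in a few lines (a sketch minimizing a toy sphere function; the adaptation factors and the window length are illustrative choices in the spirit of Schwefel's recommendations):

```python
import random

def f(x):                       # objective: minimize the sphere function
    return sum(xi * xi for xi in x)

x = [random.uniform(-5, 5) for _ in range(5)]
sigma, successes, window = 1.0, 0, 50

for t in range(1, 5001):
    # one parent generates one offspring by a normally distributed mutation
    child = [xi + random.gauss(0.0, sigma) for xi in x]
    if f(child) < f(x):         # the child replaces the parent only if better
        x, successes = child, successes + 1
    if t % window == 0:         # 1/5 success rule: adapt the stepsize
        rate = successes / window
        sigma *= 1.22 if rate > 0.2 else 0.82
        successes = 0
```

If more than a fifth of recent mutations succeed, the stepsize grows (the search is too timid); otherwise it shrinks.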

     Schwefel  later  generalized these strategies to the multimembered ES
     now denoted by (m+l) and (m,l) which  imitates  the  following  basic
     principles  of  organic  evolution:  a  POPULATION,  leading  to  the
     possibility  of  recombination  with  random  mating,  mutation   and
     selection.  These  strategies  are  termed  PLUS  STRATEGY  and COMMA
     STRATEGY, respectively: in the plus case, the parental generation  is
     taken into account during selection, while in the comma case only the
      offspring undergoes selection, and the parents die off. m (usually a
      lowercase mu) denotes the population size, and l (usually a lowercase
      lambda) denotes the number of offspring generated per generation.  Or
     to  put  it  in  an  utterly  insignificant  and  hopelessly outdated
     language:

          (define (Evolution-strategy population)
            (if (terminate? population)
              population
              (evolution-strategy
                (select
                  (cond (plus-strategy?
                          (union (mutate
                                   (recombine population))
                                 population))
                        (comma-strategy?
                          (mutate
                            (recombine population))))))))

     However, dealing with ES is sometimes seen as "strong  tobacco,"  for
     it takes a decent amount of probability theory and applied STATISTICS
     to understand the inner workings of an ES, while it navigates through
     the  hyperspace  of  the  usually  n-dimensional  problem  space,  by
      throwing hyperellipses into the deep...

     Luckily, this medium doesn't allow for  much  mathematical  ballyhoo;
     the  author  therefore  has  to come up with a simple but brilliantly
     intriguing explanation to save the reader from falling asleep  during
     the rest of this section, so here we go:

     Imagine a black box. A large black box. As large as, say for example,
     a Coca-Cola vending machine. Now, [..] (to be continued)

     A single INDIVIDUAL of the ES' population consists of  the  following
     GENOTYPE representing a point in the SEARCH SPACE:

     OBJECT VARIABLES
          Real-valued  x_i  have to be tuned by recombination and mutation
          such that an objective  function  reaches  its  global  optimum.
          Referring   to   the  metaphor  mentioned  previously,  the  x_i
           represent the regulators of the alien Coca-Cola vending machine.

     STRATEGY VARIABLEs
          Real-valued  s_i  (usually denoted by a lowercase sigma) or mean
          stepsizes determine the mutability of the  x_i.  They  represent
          the STANDARD DEVIATION of a  (0, s_i) GAUSSIAN DISTRIBUTION (GD)
          being added to each x_i as  an  undirected  mutation.   With  an
          "expectancy  value"  of  0  the  parents  will produce offspring
          similar to themselves on  average.  In order to make a  doubling
          and  a  halving  of  a stepsize equally probable, the s_i mutate
           log-normally distributed,  i.e.  exp(GD),  from  generation  to
          generation.    These  stepsizes  hide  the  internal  model  the
          population has made of its ENVIRONMENT, i.e.  a  SELF-ADAPTATION
          of the stepsizes has replaced the exogenous control of the (1+1)
          ES.

          This concept works because selection  sooner  or  later  prefers
          those  individuals  having  built  a good model of the objective
          function, thus producing better offspring. Hence, learning takes
          place  on  two levels: (1) at the genotypic, i.e. the object and
          strategy variable level and (2) at the  phenotypic  level,  i.e.
          the FITNESS level.

          Depending  on  an  individual's  x_i,  the  resulting  objective
          function value f(x), where x denotes  the  vector  of  objective
          variables,  serves  as  the PHENOTYPE (fitness) in the selection
          step. In a plus strategy, the m best of  all  (m+l)  individuals
          survive to become the parents of the next generation.  Using the
          comma variant, selection takes place only among the l offspring.
          The   second   scheme  is  more  realistic  and  therefore  more
          successful, because no individual  may  survive  forever,  which
          could  at  least  theoretically  occur  using  the plus variant.
          Untypical for conventional optimization algorithms and lavish at
          first    sight,   a   comma   strategy   allowing   intermediate
          deterioration performs better! Only  by  forgetting  highly  fit
          individuals  can  a  permanent  adaptation of the stepsizes take
          place and avoid long stagnation phases due to misadapted  s_i's.
          This  means  that these individuals have built an internal model
          that is no longer appropriate for  further  progress,  and  thus
          should better be discarded.

          By   choosing  a  certain  ratio  m/l,  one  can  determine  the
          convergence property of the evolution strategy: If one  wants  a
           fast, but local convergence, one should choose a hard SELECTION
           ratio, e.g. (5,100), but when looking for the  global  optimum,
           one should favour a softer selection, e.g. (15,100).
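The comma strategy with self-adaptive stepsizes can be sketched as follows (a mutation-only (5,35)-ES on a toy sphere function; recombination is omitted for brevity, and the learning rate `TAU` and all other parameter values are illustrative assumptions):

```python
import math
import random

MU, LAM, DIM, GENS = 5, 35, 5, 250
TAU = 1.0 / math.sqrt(DIM)           # common learning-rate heuristic

def f(x):                            # objective: minimize the sphere function
    return sum(xi * xi for xi in x)

# an individual is a (object variables x, stepsize sigma) pair
parents = [([random.uniform(-3, 3) for _ in range(DIM)], 1.0)
           for _ in range(MU)]

for g in range(GENS):
    offspring = []
    for _ in range(LAM):
        x, sigma = random.choice(parents)              # random mating
        new_sigma = sigma * math.exp(TAU * random.gauss(0, 1))
        new_x = [xi + random.gauss(0, new_sigma) for xi in x]
        offspring.append((new_x, new_sigma))
    # comma selection: the parents die off; only the MU best
    # of the LAM offspring survive into the next generation
    parents = sorted(offspring, key=lambda ind: f(ind[0]))[:MU]

best_x, best_sigma = parents[0]
```

Because the parents are always discarded, a misadapted stepsize cannot survive forever, which is exactly the argument the text makes for the comma variant.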

          Self-adaptation  within  ESs  depends  on  the  following agents
          (Schwefel 87):

     Randomness: One cannot model mutation
          as a purely random process. This would  mean  that  a  child  is
          completely independent of its parents.

     Population size: The population has to be sufficiently large. Not
          only
          the current best should be allowed to reproduce, but  a  set  of
          good  individuals.   Biologists  have coined the term "requisite
          variety" to mean the genetic  variety  necessary  to  prevent  a
          SPECIES   from   becoming  poorer  and  poorer  genetically  and
          eventually dying out.

     COOPERATION:
          In order to exploit the effects of a population  (m  >  1),  the
          individuals should recombine their knowledge with that of others
          (cooperate)  because  one  cannot  expect   the   knowledge   to
          accumulate in the best individual only.
     Deterioration: In order to allow better internal models (stepsizes)
          to  provide  better  progress  in  the future, one should accept
          deterioration from one generation to the next. A  limited  life-
          span  in nature is not a sign of failure, but an important means
          of preventing a species from freezing genetically.

          ESs prove to be successful  when  compared  to  other  iterative
          methods  on a large number of test problems (Schwefel 77).  They
          are adaptable to nearly all sorts of problems  in  optimization,
          because  they  need  very  little information about the problem,
          especially no derivatives of the objective function. For a  list
          of  some  300  applications  of EAs, see the SyS-2/92 report (cf
          Q14).  ESs are capable of solving high dimensional,  multimodal,
          nonlinear   problems   subject   to   linear   and/or  nonlinear
          constraints.  The objective  function  can  also,  e.g.  be  the
          result of a SIMULATION, it does not have to be given in a closed
          form.  This also holds for the constraints which  may  represent
          the  outcome  of, e.g. a finite elements method (FEM).  ESs have
          been adapted to VECTOR OPTIMIZATION problems (Kursawe  92),  and
          they can also serve as a heuristic for NP-complete combinatorial
          problems like the TRAVELLING SALESMAN PROBLEM or problems with a
          noisy or changing response surface.

          References

           Kursawe, F. (1992) "Evolution strategies for vector
           optimization", Taipei, National Chiao Tung University, 187-193.

           Kursawe, F. (1994) "Evolution strategies: Simple models of
           natural processes?", Revue Internationale de Systemique, France
           (to appear).

          Rechenberg,    I.   (1973)   "Evolutionsstrategie:   Optimierung
          technischer Systeme nach Prinzipien der biologischen Evolution",
          Stuttgart: Fromman-Holzboog.

          Schwefel,    H.-P.    (1977)    "Numerische    Optimierung   von
          Computermodellen  mittels   der   Evolutionsstrategie",   Basel:
          Birkhaeuser.

          Schwefel,  H.-P.  (1987)  "Collective Phaenomena in Evolutionary
          Systems", 31st Annu.  Meet.  Inter'l  Soc.  for  General  System
          Research, Budapest, 1025-1033.

------------------------------
OP | Posted on 2004-2-27 01:42:53
Subject: Q1.4: What's a Classifier System (CFS)?

The name of the Game
     First,  a word on naming conventions is due, for no other paradigm of
     EC has undergone more changes  to  its  name  space  than  this  one.
     Initially,  Holland  called his cognitive models "Classifier Systems"
     abbrv. with CS, and sometimes CFS, as can be found in [GOLD89].

     Then Riolo came into play in 1986, when Holland added a reinforcement
     component to the overall design of a CFS that emphasized its ability
     to learn. So the word "learning" was prepended to the name, giving
     "Learning Classifier Systems" (abbrv. to LCS).  On October 6-9, 1992
     the "1st Inter'l Workshop on Learning Classifier Systems" took  place
     at  the  NASA  Johnson  Space Center, Houston, TX.  A summary of this
     "summit" of all leading researchers in LCS can  be  found  on  ENCORE
     (See Q15.3) in file CFS/papers/lcs92.ps.gz

     Today the story continues: LCSs are sometimes subsumed under a "new"
     machine learning paradigm called "Evolutionary Reinforcement
     Learning", or ERL for short, incorporating LCSs, "Q-Learning",
     devised by Watkins (1989), and a paradigm of the same name devised
     by Ackley and Littman [ALIFEIII].

     That latter statement is, admittedly, somewhat confusing, as Marco
     Dorigo has pointed out in a letter to the editors of this guide,
     since Q-Learning has no evolutionary component. So please let the
     Senior Editor explain: when I wrote this part of the guide, I just
     had in mind that Q-Learning would make a pretty good replacement
     for Holland's bucket-brigade algorithm, so I used this little
     speculation to see what would come of it; in early December 95,
     almost two years later, it finally caught Marco's attention. But
     meanwhile, I have been proven right: Wilson has suggested using
     Q-Learning in CLASSIFIER SYSTEMs (Wilson 1994), and Dorigo & Bersini
     (1994) have shown that Q-Learning and the bucket-brigade are truly
     equivalent concepts.

     We would therefore be allowed to call a CFS that uses Q-Learning for
     rule discovery, rather than a bucket-brigade, a Q-CFS, Q-LCS, or
     Q-CS; in any case the result would be subsumed under the term ERL,
     even though Q-Learning itself is not an EVOLUTIONARY ALGORITHM!

     Interestingly, Wilson called his system ZCS (apparently no "Q"
     inside), while Dorigo & Bersini called their system a D-Max-VSCS,  or
     "discounted  max  very  simple classifier system" (and if you know Q-
     Learning at least the D-Max part of the name will remind you of  that
     algorithm).   The  latter paper can be found on Encore (see Q15.3) in
     file CFS/papers/sab94.ps.gz

     And by the way, in [HOLLAND95] the term "classifier system" is not
     used anymore; you cannot find it in the index. It's gone!  Holland
     now stresses the adaptive component of his invention, and simply
     calls the resulting systems adaptive agents.  These agents are then
     implemented within the framework of his recent invention called ECHO.
     (See http://www.santafe.edu/projects/echo for more.)

     Alwyn  Barry's  LCS  Web has a great deal of information on LCS. See:
     http://www.csm.uwe.ac.uk/~ambarry/LCSWEB/

On Schema Processors and ANIMATS
     So, to get back to the question above, "What  are  CFSs?",  we  first
     might  answer,  "Well,  there  are  many interpretations of Holland's
     ideas...what do you like to know in particular?"  And then we'd start
     with  a recitation from [HOLLAND75], [HOLLAND92], and explain all the
     SCHEMA processors, the broadcast language, etc.  But we will take a
     more comprehensible and intuitive route to understanding what
     CLASSIFIER SYSTEMs are all about.

     The easiest road to explore the very nature of classifier systems is
     to take the animat (ANIMAl + ROBOT = ANIMAT) "lane" devised by Booker
     (1982) and later studied  extensively  by  Wilson  (1985),  who  also
     coined  the  term for this approach. Work continues on animats but is
     often  regarded  as  ARTIFICIAL   LIFE   rather   than   EVOLUTIONARY
     COMPUTATION.   This  thread  of  research has even its own conference
     series: "Simulation of Adaptive Behavior (SAB)" (cf  Q12),  including
     other   notions   from   machine  learning,  connectionist  learning,
     evolutionary robotics, etc.  [NB: the latter is obvious, if an animat
     lives  in  a  digital microcosm, it can be put into the real world by
     implantation   into   an   autonomous   robot   vehicle,   that   has
     sensors/detectors   (camera   eyes,  whiskers,  etc.)  and  effectors
     (wheels, robot arms, etc.); so  all  that's  needed  is  to  use  our
     algorithm  as  the  "brain"  of this vehicle, connecting the hardware
     parts with the software learning component.  For a fascinating  intro
     to the field see, e.g. Braitenberg (1984)]

     Classifier systems, however, are yet another offspring of John
     Holland's aforementioned book, and can be seen as one  of  the  early
     applications  of  GAs,  for  CFSs  use this EVOLUTIONARY ALGORITHM to
     adapt their behavior toward a changing ENVIRONMENT, as  is  explained
     below in greater detail.

     Holland  envisioned  a  cognitive  system  capable of classifying the
     goings on in its environment, and then reacting to  these  goings  on
     appropriately.  So  what is needed to build such a system? Obviously,
     we need (1) an environment; (2) receptors that tell our system  about
     the  goings  on;  (3)  effectors,  that let our system manipulate its
     environment; and (4) the system itself, conveniently a "black box" in
     this first approach, that has (2) and (3) attached to it, and "lives"
     in (1).

     In the animat approach, (1) usually is an artificially created
     digital world, e.g. in Booker's Gofer system, a 2-dimensional grid
     that contains "food" and "poison".  The Gofer itself walks across
     this grid and tries (a) to learn to distinguish between these two
     items, and (b) to survive well fed.

     Much the same holds for Wilson's animat, called "*".  Yes, it's just
     an asterisk, and a "Kafka-esque naming policy" is one of the sign posts
     of the whole field; e.g. the  first  implementation  by  Holland  and
     Reitmann  1978  was  called CS-1, (cognitive system 1); Smith's Poker
     player LS-1 (1980)  followed  this  "convention".  Riolo's  1988  LCS
     implementations  on  top  of  his CFS-C library (cf Q20), were dubbed
     FSW-1 (Finite State World 1), and LETSEQ-1 (LETter SEQuence predictor
     1).

     So from the latter paragraph we can conclude that "environment" can
     also mean something completely different (e.g. an infinite stream of
     letters, time series, etc.) than in the animat approach; but anyway,
     we'll stick with it, and go on.

     Imagine a very simple animat, e.g. a  simplified  model  of  a  frog.
     Now, we know that frogs live in (a) Muppet Shows, or (b) little
     ponds; so we choose the latter as our demo environment (1), and the
     former for a non-Kafka-esque name for our model: let's dub it
     "Kermit".

     Kermit has eyes, i.e. sensorial input detectors (2); hands and legs,
     i.e. environment-manipulating effectors (3); and is a spicy-fly-
     detecting-and-eating device, i.e. a frog (4); so we have all four
     pieces needed.

Inside the Black Box
     The most primitive "black box" we can think of is a computer.  It has
     inputs (2), and outputs (3), and a message passing system  inbetween,
     that  converts  (i.e.,  computes), certain input messages into output
     messages, according to a set of rules, usually called  the  "program"
     of that computer.  From the theory of computer science, we now borrow
     the simplest of all program  structures,  that  is  something  called
     "production system", or PS for short.  A PS was shown to be
     computationally complete by Post (1943), which is why it is sometimes
     called a "Post System", and later by Minsky (1967).  Although it
     merely consists of a set of if-then rules, it still resembles a
     full-fledged computer.

     We  now  term  a  single  "if-then" rule a "classifier", and choose a
     representation that makes it easy to manipulate these, for example by
     encoding  them  into  binary  strings.   We  then  term  the  set  of
     classifiers, a "classifier population", and immediately know  how  to
     breed  new  rules  for  our  system:  just  use  a GA to generate new
     rules/classifiers from the current POPULATION!

     All that's left are the messages floating through the black box.
     They should also be simple strings of zeroes and ones, and are kept
     in a data structure we call "the message list".

     With all this given, we can imagine the goings on  inside  the  black
     box as follows: The input interface (2) generates messages, i.e., 0/1
     strings, that are written on the message list.  Then  these  messages
     are  matched  against  the condition-part of all classifiers, to find
     out which actions are to be triggered.   The  message  list  is  then
     emptied,  and  the  encoded  actions,  themselves  just messages, are
     posted to the message list.  Then, the output  interface  (3)  checks
     the message list for messages concerning the effectors. And the cycle
     restarts.

     Note that it is possible in this set-up to have "internal messages",
     because the message list is not emptied after (3) has checked it;
     thus, the input interface messages are added to an initially
     non-empty list.  (cf. Algorithm CFS, LCS below)

     The general idea of the CFS is to start from scratch, i.e., from
     tabula rasa (without any knowledge), using a randomly generated
     classifier population, and to let the system learn its program by
     induction (cf. Holland et al. 1986). This reduces the input stream
     to recurrent input patterns, which must be repeated over and over
     again to enable the animat to classify its current situation/context
     and react to the goings on appropriately.

What does it need to be a frog?
     Let's  take a look at the behavior emitted by Kermit. It lives in its
     digital microwilderness where it moves around  randomly.   [NB:  This
     seemingly  "random"  behavior  is not that random at all; for more on
     instinctive, i.e., innate behavior  of  non-artificial  animals  see,
     e.g.  Tinbergen (1951)]

     Whenever a small flying object appears that has no stripes, Kermit
     should eat it, because it's very likely a spicy fly, or other flying
     insect.  If it has stripes, the insect is better left alone, because
     Kermit had better not bother with wasps, hornets, or bees.  If Kermit
     encounters a large, looming object, it immediately uses its effectors
     to jump away, as far as possible.

     So, part of these behavior patterns within the "pond world" (in AI
     sometimes called a "frame", from traditional knowledge representation
     techniques; Rich, 1983) can be expressed as a set of "if <condition>
     then <action>" rules, as follows:

          IF small, flying object to the left THEN send @
          IF small, flying object to the right THEN send %
          IF small, flying object centered THEN send $
          IF large, looming object THEN send !
          IF no large, looming object THEN send *
          IF * and @ THEN move head 15 degrees left
          IF * and % THEN move head 15 degrees right
          IF * and $ THEN move in direction head pointing
          IF ! THEN move rapidly away from direction head pointing

     Now, this set of rules has to be encoded for use within a CLASSIFIER
     SYSTEM.  A possible encoding of the above rule set in CFS-C (Riolo)
     classifier terminology is given below.  The condition part consists
     of two conditions, combined with a logical AND; thus both must be
     met to trigger the associated action.  This structure needs a NOT
     operation (so we get NAND, and we know from hardware design that we
     can build any computer solely with NANDs); in CFS-C this is denoted
     by the `~' prefix character (rule #5).

          IF             THEN
           0000,  00 00  00 00
           0000,  00 01  00 01
           0000,  00 10  00 10
           1111,  01 ##  11 11
          ~1111,  01 ##  10 00
           1000,  00 00  01 00
           1000,  00 01  01 01
           1000,  00 10  01 10
           1111,  ## ##  01 11

     Obviously, string `0000' denotes small, and `00' in the first part
     of the second column denotes flying.  The last two bits of column #2
     encode the direction of the approaching object, where `00' means
     left, `01' means right, etc.

     In rule #4 the "don't care" symbol `#' is used, which matches both
     `1' and `0'; i.e., the position of the large, looming object is
     completely arbitrary.  A simple fact that can save Kermit's
     (artificial) life.
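     The matching of `0', `1' and the don't-care `#' just described can be
     sketched in a few lines of Python. This is a hedged simplification,
     not Riolo's actual CFS-C: single-condition classifiers, no `~'
     negation, and the example bit patterns are hypothetical (only loosely
     following the Kermit table above).

```python
def matches(condition, message):
    """A condition over {0,1,#} matches a 0/1 message position-wise;
    '#' is the don't-care symbol and matches either bit."""
    return len(condition) == len(message) and all(
        c == '#' or c == m for c, m in zip(condition, message))

# Hypothetical single-condition classifiers: (condition, action message).
classifiers = [
    ("00000000", "0000"),   # small, flying object to the left
    ("111101##", "1111"),   # large, looming object, any position
]

def match_step(message_list, classifiers):
    """One simplified matching step of the major cycle: every message is
    compared against every condition; actions of all matches are posted."""
    return [action for condition, action in classifiers
            for message in message_list if matches(condition, message)]
```

     Note how the two `#' positions let a single rule cover every possible
     location of the looming object, exactly as in rule #4 above.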
[OP] Posted 2004-2-27 01:43:48
PSEUDO CODE (Non-Learning CFS)
     Algorithm CFS is

          // start with an initial time
          t := 0;

          // an initially empty message list
          initMessageList ML (t);

          // and a randomly generated population of classifiers
          initClassifierPopulation P (t);

          // test for cycle termination criterion (time, fitness, etc.)
          while not done do

               // increase the time counter
               t := t + 1;

               // 1. detectors check whether input messages are present
               ML := readDetectors (t);

               // 2. compare ML to the classifiers and save matches
               ML' := matchClassifiers ML,P (t);

               // 3. process new messages through output interface
               ML := sendEffectors ML' (t);
          od
     end CFS.

     To convert the previous, non-learning CFS into a learning CLASSIFIER
     SYSTEM (LCS), as proposed in Holland (1986), two steps are required:

     (1)   the major cycle has to be changed such that the  activation  of
           each  classifier depends on some additional parameter, that can
           be modified as a result of experience, i.e. reinforcement  from
           the ENVIRONMENT;

     (2)   and/or  change  the  contents  of  the  classifier  list, i.e.,
           generate  new  classifiers/rules,  by  removing,   adding,   or
           combining condition/action-parts of existing classifiers.

     The algorithm thus changes accordingly:

PSEUDO CODE (Learning CFS)
     Algorithm LCS is

          // start with an initial time
          t := 0;

          // an initially empty message list
          initMessageList ML (t);

          // and a randomly generated population of classifiers
          initClassifierPopulation P (t);

          // test for cycle termination criterion (time, fitness, etc.)
          while not done do

               // increase the time counter
               t := t + 1;

               // 1. detectors check whether input messages are present
               ML := readDetectors (t);

               // 2. compare ML to the classifiers and save matches
               ML' := matchClassifiers ML,P (t);

               // 3. highest bidding classifier(s) collected in ML' wins the
               // "race" and post the(ir) message(s)
               ML' := selectMatchingClassifiers ML',P (t);

               // 4. tax bidding classifiers, reduce their strength
               ML' := taxPostingClassifiers ML',P (t);

               // 5. effectors check new message list for output msgs
               ML := sendEffectors ML' (t);

               // 6. receive payoff from environment (REINFORCEMENT)
               C := receivePayoff (t);

               // 7. distribute payoff/credit to classifiers (e.g. BBA)
               P' := distributeCredit C,P (t);

               // 8. Eventually (depending on t), an EA (usually a GA) is
               // applied to the classifier population
               if criterion then
                    P := generateNewRules P' (t);
               else
                    P := P'
          od
     end LCS.
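     Step 7 above is usually Holland's bucket-brigade algorithm (BBA).
     As a heavily simplified, hedged sketch of its bookkeeping: along one
     chain of activated classifiers, each classifier pays its bid to its
     predecessor (the one whose message it matched), and the environment's
     payoff goes to the last classifier in the chain. The bid ratio and
     strength values below are made-up illustrative parameters, not
     Holland's.

```python
def bucket_brigade(strengths, chain, payoff, bid_ratio=0.1):
    """Simplified bucket brigade along one activation chain:
    each classifier pays its bid (a fixed fraction of its strength)
    to its predecessor; the last one receives the external payoff."""
    s = dict(strengths)
    for i, c in enumerate(chain):
        bid = bid_ratio * s[c]
        s[c] -= bid                   # the classifier pays its bid...
        if i > 0:
            s[chain[i - 1]] += bid    # ...to the classifier before it
    s[chain[-1]] += payoff            # environment rewards the last one
    return s

# Two classifiers fire in sequence; only the second sees the payoff,
# but c1 is reimbursed by c2's bid -- credit flows backwards over time.
s = bucket_brigade({"c1": 100.0, "c2": 100.0}, ["c1", "c2"], payoff=50.0)
```

     Repeated over many episodes, this backward flow of bids is what lets
     early "stage-setting" classifiers share in a payoff that only arrives
     at the end of a chain.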

What's the problem with CFSs?
     Just listing the currently known problems that come with CFSs would
     take several additional pages; therefore only some interesting papers
     dealing with unresolved riddles are listed here. Probably the best
     paper covering most of them is the aforementioned summary of the LCS
     Workshop:

     Smith,  R.E.  (1992) "A report on the first Inter'l Workshop on LCSs"
     avail. from ENCORE (See Q15.3) in file CFS/papers/lcs92.ps.gz

     Other noteworthy critiques on LCSs include:

     Wilson, S.W. (1987)  "Classifier  Systems  and  the  Animat  Problem"
     Machine Learning, 2.

     Wilson,  S.W.  (1988)  "Bid Competition and Specificity Reconsidered"
     Complex Systems, 2(5):705-723.

     Wilson, S.W. & Goldberg, D.E. (1989) "A critical review of classifier
     systems" [ICGA89], 244-255.

     Goldberg,  D.E., Horn, J. & Deb, K. (1992) "What makes a problem hard
     for a classifier system?"  (containing the Goldberg  citation  below)
     is    also    available    from    Encore   (See   Q15.3)   in   file
     CFS/papers/lcs92-2.ps.gz

     Dorigo, M. (1993) "Genetic and Non-genetic Operators in ALECSYS"
     Evolutionary Computation, 1(2):151-164.  The technical report on
     which the journal article is based is avail. from Encore (See Q15.3)
     in file CFS/papers/icsi92.ps.gz

     Smith,  R.E.  Forrest,  S.  &  Perelson,  A.S.  (1993) "Searching for
     Diverse,   Cooperative   POPULATIONs   with    Genetic    Algorithms"
     Evolutionary Computation, 1(2):127-149.

Conclusions?
     Generally speaking:  "There's much to do in CFS research!"

     No other notion of EC provides more space to explore, and if you are
     interested in a PhD in the field, you might want to take a closer
     look at CFS.  However, be warned! To quote Goldberg: "classifier
     systems are a quagmire---a glorious, wondrous, and inventing
     quagmire, but a quagmire nonetheless."

     References

     Booker, L.B. (1982) "Intelligent behavior as an adaptation to the
     task environment" PhD Dissertation, Univ. of Michigan, Logic of Computers
     Group, Ann Arbor, MI.

     Braitenberg, V. (1984) "Vehicles: Experiments in Synthetic
     Psychology" Cambridge, MA: MIT Press.

     Dorigo, M. & Bersini, H. (1994) "A Comparison of Q-Learning and
     Classifier Systems" Proceedings of From Animals to Animats, Third
     International Conference on SIMULATION of Adaptive Behavior (SAB94),
     Brighton, UK, D. Cliff, P. Husbands, J.-A. Meyer and S.W. Wilson
     (Eds.), MIT Press, 248-255.
     http://iridia.ulb.ac.be/dorigo/dorigo/conferences/IC.11-SAB94.ps.gz

     Holland,  J.H.  (1986)  "Escaping  Brittleness:  The possibilities of
     general-purpose learning algorithms applied  to  parallel  rule-based
     systems".  In:  R.S. Michalski, J.G. Carbonell & T.M. Mitchell (eds),
     Machine  Learning:  An  Artificial  Intelligence  approach,  Vol  II,
     593-623, Los Altos, CA: Morgan Kaufman.

     Holland,  J.H.,  et  al.  (1986)  "Induction: Processes of Inference,
     Learning, and Discovery", Cambridge, MA: MIT Press.

     Holland, J.H. (1992) "Adaptation in natural and artificial systems"
     Cambridge, MA: MIT Press.

     Holland, J.H. (1995) "Hidden Order: How adaptation builds complexity"
     Reading, MA: Addison-Wesley. [HOLLAND95]:

     Holland, J.H. & Reitman, J.S.  (1978)  "Cognitive  Systems  based  on
     Adaptive  Algorithms" In D.A. Waterman & F.Hayes-Roth, (eds) Pattern-
     directed inference systems. NY: Academic Press.

     Minsky,  M.L.   (1961)   "Steps   toward   Artificial   Intelligence"
     Proceedings  IRE, 49, 8-30. Reprinted in E.A. Feigenbaum & J. Feldman
     (eds) Computers and Thought, 406-450, NY: McGraw-Hill, 1963.

     Minsky, M.L.  (1967)  "Computation:  Finite  and  Infinite  Machines"
     Englewood Cliffs, NJ: Prentice-Hall.

     Post,  Emil L. (1943) "Formal reductions of the general combinatorial
     decision problem" American Journal of Mathematics, 65, 197-215.
     Rich, E. (1983) "Artificial Intelligence" NY: McGraw-Hill.

     Tinbergen, N. (1951) "The Study of Instinct" NY: Oxford Univ.  Press.

     Watkins,  C. (1989) "Learning from Delayed Rewards" PhD Dissertation,
     Department of Psychology, Cambridge Univ., UK.

     Wilson, S.W. (1985) "Knowledge growth in  an  artificial  animal"  in
     [ICGA85], 16-23.

     Wilson,  S.W.  (1994)  "ZCS:  a zeroth level classifier system" in EC
     2(1), 1-18.

------------------------------
[OP] Posted 2004-2-27 01:44:31
Subject: Q1.5: What's Genetic Programming (GP)?

     GENETIC PROGRAMMING is the extension of the genetic model of learning
     into  the space of programs. That is, the objects that constitute the
      POPULATION are not fixed-length character strings that encode
      possible solutions to the problem at hand; rather, they are
      programs that,
     when executed, "are" the candidate solutions to  the  problem.  These
     programs  are expressed in genetic programming as parse trees, rather
     than as lines of code.  Thus, for example, the simple program "a +  b
     * c" would be represented as:

                  +
                 / \
                a   *
                   / \
                  b   c

     or,  to  be  precise,  as suitable data structures linked together to
     achieve this effect. Because this is a very simple thing to do in the
     programming language Lisp, many GPers tend to use Lisp. However, this
     is simply an implementation detail. There are straightforward methods
     to implement GP using a non-Lisp programming environment.

     The  programs  in  the  population  are composed of elements from the
     FUNCTION SET and the TERMINAL SET, which are typically fixed sets  of
     symbols selected to be appropriate to the solution of problems in the
     domain of interest.

     In GP the CROSSOVER  operation  is  implemented  by  taking  randomly
     selected  subtrees in the INDIVIDUALs (selected according to FITNESS)
     and exchanging them.

     It should be pointed out that GP usually does not use any MUTATION as
     a GENETIC OPERATOR.
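      The subtree-exchanging CROSSOVER just described can be sketched by
      representing parse trees as nested Python lists (an illustrative
      assumption; classically GPers use Lisp S-expressions). The helper
      names below are made up for this example.

```python
import random

# A parse tree as nested lists: ['+', 'a', ['*', 'b', 'c']] is a + b * c.

def nodes(tree, path=()):
    """Enumerate (path, subtree) pairs; a path is a tuple of child
    indices. Index 0 (the function symbol) is never a crossover point."""
    yield path, tree
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from nodes(child, path + (i,))

def replace(tree, path, subtree):
    """Return a copy of tree with the node at path replaced by subtree."""
    if not path:
        return subtree
    copy = list(tree)
    copy[path[0]] = replace(tree[path[0]], path[1:], subtree)
    return copy

def crossover(a, b, rng=random):
    """GP crossover: pick a random subtree in each parent and swap them."""
    pa, sa = rng.choice(list(nodes(a)))
    pb, sb = rng.choice(list(nodes(b)))
    return replace(a, pa, sb), replace(b, pb, sa)
```

      Because the swapped pieces are whole subtrees, both offspring remain
      syntactically valid programs, which is the property that makes this
      operator work on program spaces at all.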

     More  information  is  available  in  the  GP mailing list FAQ.  (See
     Q15.2) and from http://smi-web.stanford.edu/people/koza/

------------------------------

     Copyright (c) 1993-2000 by J. Heitkoetter and D. Beasley, all  rights
     reserved.

     This  FAQ  may be posted to any USENET newsgroup, on-line service, or
     BBS as long as it  is  posted  in  its  entirety  and  includes  this
     copyright  statement.   This FAQ may not be distributed for financial
     gain.  This FAQ may not be  included  in  commercial  collections  or
     compilations without express permission from the author.
Posted 2004-2-27 01:45:49
Good stuff!
[OP] Posted 2004-2-27 01:53:09
Subject: Q2: What applications of EAs are there?

     In   principle,   EAs  can  compute  any  computable  function,  i.e.
     everything a normal digital computer can do.

      But EAs are especially badly suited for problems where efficient ways
      of solving them are already known (unless these problems are
      intended to serve as benchmarks).  Special purpose algorithms, i.e.
     algorithms  that  have  a  certain amount of problem domain knowledge
     hard coded into them, will usually outperform EAs,  so  there  is  no
     black  magic  in EC.  EAs should be used when there is no other known
     problem solving strategy, and  the  problem  domain  is  NP-complete.
     That's  where  EAs  come  into  play: heuristically finding solutions
     where all else fails.

     Following  is  an  incomplete   (sic!)    list   of   successful   EA
     applications:

ARTIFICIAL LIFE
     ARTIFICIAL  LIFE  (ALIFE)  systems  attempt  to  simulate the kind of
     behaviour exhibited by real, living  creatures.  Not  all  Artificial
     Life systems employ EVOLUTIONARY ALGORITHMs (see Q4.1).

     Framsticks

     Framsticks  is  a three-dimensional life SIMULATION project. Both the
     physical  structure  of  creatures  and  their  control  systems  are
     evolved.  Evolutionary  algorithms are used with SELECTION, CROSSOVER
     and MUTATION.  Finite element methods are used for  simulation.  Both
     spontaneous and directed EVOLUTIONs are possible.

     This  system  uses  the  standard  EA  framework  to evolve 3D agents
     equipped with neural networks. It has proved to be an attractive tool
     for  people who want to learn about the way evolutionary OPTIMIZATION
     techniques work.

     This is shareware, but all the evolutionary  features  are  available
     free.  The  project  is open, and developers can take part in it, and
     also conduct their own experiments (i.e.   using  their  own  GENETIC
     OPERATORs).   There  are  links  to  the scientific papers on the web
     page, as well as the detailed program documentation. The software  is
     quite general and can be used to study a range of problems, including
     coevolution of body and brain.

     For more details, see: http://www.frams.poznan.pl/

BIOCOMPUTING
     Biocomputing, or Bioinformatics, is the field of biology dedicated to
     the automatic analysis of experimental data (mostly sequencing data).
     Several  approaches  to  specific  biocomputing  problems  have  been
     described  that  involve  the  use of GA, GP and simulated annealing.
     General information about biocomputing (software,  databases,  misc.)
     can  be found on the server of the European Bioinformatics Institute:
     http://www.ebi.ac.uk/ebi_home.html ENCORE has  a  good  selection  of
     pointers  related  to  this subject.  VSCN provides a detailed online
     course       on        bioinformatics:        http://www.techfak.uni-
     bielefeld.de/bcd/Curric/welcome.html

     There  are  three  main  domains  to  which  GA  have been applied in
     Bioinformatics: protein folding, RNA folding, sequence alignment.

     Protein Folding

      Proteins are one of the essential components of any form of life.
      They are made of twenty different types of amino acids.  These
      amino acids are chained together to form the protein, which can
      contain from a few to several thousand residues. In most cases,
      the properties and the function of a protein are a result of its
      three-dimensional structure.  It seems that in many cases this
      structure is a direct consequence of the sequence. Unfortunately,
      it is still very difficult or impossible to deduce the
      three-dimensional structure knowing only the sequence.  A part of
      the VSCN on-line bioinformatics course is dedicated to the use of
      GAs in Protein Folding Prediction. It contains an extensive
      bibliography and a detailed presentation of the subject with LOTS
      of explanations and on-line papers. The URL is:
      http://www.techfak.uni-bielefeld.de/bcd/Curric/ProtEn/contents.html

     Koza  [KOZA92]  gives  one  example of GP applied to Protein Folding.
     Davis [DAVIS91] gives an example of DNA  conformation  prediction  (a
     closely related problem) in his Handbook of GAs.

     RNA Folding

      Describing the tertiary structure of an RNA molecule is about as
      hard as for a protein, but describing the intermediate (secondary)
      structure is somewhat easier, because RNA molecules use the same
      pairing rules as DNA (Watson and Crick base pairing).  There exist
      deterministic algorithms that, given a set of constraints (rules),
      compute the most stable structure, but: (a) their time and memory
      requirements increase quadratically or more with the length of the
      sequence, and (b) they require simplified rules.  A lot of effort
      has recently been put into applying GAs to this problem, and
      several papers can be found (on-line if your institute subscribes
      to these journals):

      A Genetic Algorithm Based Molecular Modelling Technique for RNA
      Stem-loop Structures, H. Ogata, Y. Akiyama and M. Kanehisa, Nucleic
      Acids Research, 1995, vol 23, 3, 419-426

      An Annealing Mutation Operator in the GA for RNA Folding, B.A.
      Shapiro and J.C. Wu, CABIOS, 1996, vol 12, 3, 171-180

      The Computer Simulation of RNA Folding Pathway Using a Genetic
      Algorithm, A.P. Gultyaev, F.D.H. van Batenburg and C.W.A. Pleij,
      Journal of Molecular Biology, 1995, vol 250, 37-51

     Simulated Annealing  has  also  been  applied  successfully  to  this
     problem:

      Description of RNA Folding by SA, M. Schmitz and G. Steger, Journal
      of Molecular Biology, 1995, 255, 245-266

      Sequence Alignments

     Sequence Alignment is another important  problem  of  Bioinformatics.
     The  aim  is to align together several related sequences (from two to
     hundreds) given a cost function.  For  the  most  widely   used  cost
     functions,  the  problem  has  been shown to be NP-complete.  Several
      attempts have been made using SA:

      Multiple Sequence Alignment Using SA, J. Kim, S. Pramanik and M.J.
      Chung, CABIOS, 1994, vol 10, 4, 419-426

      Multiple Sequence Alignment by Parallel SA, M. Ishikawa, T. Koya et
      al, CABIOS, 1993, vol 9, 3, 267-273

     SAM, software which uses Hidden Markov Models for  Multiple  Sequence
     Alignment,  can  use  SA to train the model. Several papers have been
     published on SAM.   The  software,  documentation  and  an  extensive
     bibliography           can           be           found           in:
     http://www.cse.ucsc.edu/research/compbio/sam.html

     More recently, various software using different  methods  like  Gibbs
     sampling or GAs has been developed:

     A Gibbs Sampling Strategy for Multiple Alignment C.E. Lawrence, S. F.
     Altschull and al, Science, October 1993, vol 262, 208-214

      SAGA: Sequence Alignment by Genetic Algorithm, C. Notredame and
      D.G. Higgins, Nucleic Acids Research, 1996, vol 24, 8, 1515-1524

      A beta release of SAGA (along with the paper) is available on the
      European Bioinformatics Institute anonymous FTP server:
      ftp.ebi.ac.uk/pub/software/unix/saga.tar.Z

CELLULAR PROGRAMMING: Evolution of Parallel Cellular Machines
     Nature  abounds  in systems involving the actions of simple, locally-
     interacting  components,  that  give  rise  to   coordinated   global
     behavior.   These collective systems have evolved by means of natural
     SELECTION  to  exhibit  striking  problem-solving  capacities,  while
     functioning  within a complex, dynamic ENVIRONMENT.  Employing simple
     yet versatile parallel cellular  models,  coupled  with  EVOLUTIONARY
     COMPUTATION  techniques,  cellular  programming  is  an  approach for
     constructing man-made systems that exhibit  characteristics  such  as
     those manifest by their natural counterparts.

     Parallel  cellular  machines  hold  potential both scientifically, as
     vehicles for studying phenomena of interest in areas such as  complex
     adaptive  systems  and  ARTIFICIAL  LIFE,  as  well  as  practically,
     enabling  the   construction   of   novel   systems,   endowed   with
     evolutionary,  reproductive, regenerative, and learning capabilities.

     Web site: http://lslwww.epfl.ch/~moshes/cp.html

     References:

     Sipper, M. (1997)  "Evolution  of  Parallel  Cellular  Machines:  The
     Cellular Programming Approach", Springer-Verlag, Heidelberg.

     Sipper,  M.  (1996)  "Co-evolving  Non-Uniform  Cellular  Automata to
     Perform Computations", Physica D, 92, 193-208.

     Sipper, M. and  Ruppin,  E.  (1997)  "Co-evolving  architectures  for
     cellular machines", Physica D, 99, 428-441.

     Sipper,  M.  and  Tomassini,  M.  (1996)  "Generating Parallel Random
     Number Generators By Cellular Programming", International Journal  of
     Modern Physics C, 7(2), 181-190.

     Sipper, M. (1997) "Evolving Uniform and Non-uniform Cellular Automata
     Networks", in Annual Reviews of Computational  Physics,  D.  Stauffer
     (ed).

Evolvable Hardware
     The  idea  of  evolving  machines, whose origins can be traced to the
     cybernetics movement  of  the  1940s  and  the  1950s,  has  recently
     resurged in the form of the nascent field of bio-inspired systems and
     evolvable hardware. The field draws on ideas  from  the  EVOLUTIONARY
     COMPUTATION   domain  as  well  as  on  novel  hardware  innovations.
     Recently, the term evolware has been used to describe  such  evolving
     ware,  with  current  implementations  centering  on  hardware, while
     raising the possibility of using other forms in the future,  such  as
     bioware.   The  inaugural  workshop, Towards Evolvable Hardware, took
     place  in  Lausanne,  in  October  1995,  followed   by   the   First
     International  Conference  on  Evolvable  Systems:  From  Biology  to
     Hardware (ICES96) held in Japan, in October 1996. Another major event
     in the field, ICES98, was held in Lausanne, Switzerland, in September
     1998.

     References:

     Sipper, M. et al (1997) "A Phylogenetic, Ontogenetic, and  Epigenetic
     View   of   Bio-Inspired  Hardware  Systems",  IEEE  Transactions  on
     Evolutionary Computation, 1(1).

     Sanchez,  E.  and  Tomassini,  M.  (eds)  (1996)  "Towards  Evolvable
     Hardware",  Springer-Verlag, Lecture Notes in Computer Science, 1062.

     Higuchi,  T.  et  al  (1997)  "Proceedings  of  the First International
     Conference  on Evolvable Systems: From Biology to Hardware (ICES96)",
     Springer-Verlag, Lecture Notes in Computer Science.

GAME PLAYING
     GAs can be used to  evolve  behaviors  for  playing  games.  Work  in
     evolutionary  GAME  THEORY  typically  surrounds  the  EVOLUTION of a
     POPULATION of players who meet randomly to play a game in which  they
     each  must  adopt  one  of  a limited number of moves. (Maynard-Smith
     1982).  Let's suppose it is just two moves,  X  and  Y.  The  players
     receive  a reward, analogous to Darwinian FITNESS, depending on which
     combination of moves occurs and which  move  they  adopted.  In  more
     complicated models there may be several players and several moves.

     The  players  iterate such a game a series of times, and then move on
     to a new partner. At the end of all such moves, the players will have
     a cumulative payoff, their fitness.  This fitness can then be used to
     generate a new population.

     The  real key in using a  GA  is  to  come  up  with  an  encoding  to
     represent  players' strategies, one that is amenable to CROSSOVER and
     to MUTATION.  One possibility is to suppose that at  each  iteration  a
     player adopts X with some probability (and Y  with  one  minus  that
     probability).  A player can thus be represented as a real number,  or
     as a bit-string suitably interpreted as a probability.
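     One way to realise such an encoding (an illustrative sketch, not
     taken from the literature cited here) is to read the bit-string as an
     integer and scale it into [0, 1] to obtain the probability of playing
     X:

```python
import random

def bits_to_prob(bits):
    """Interpret a bit-string as a probability in [0, 1]."""
    return int(bits, 2) / (2 ** len(bits) - 1)

def play(bits):
    """Adopt move X with the encoded probability, else Y."""
    return 'X' if random.random() < bits_to_prob(bits) else 'Y'

print(bits_to_prob("1100"))   # 12 / 15 = 0.8
```

     Because the genotype is an ordinary bit-string, standard crossover
     and mutation apply directly.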

     An  alternative  characterisation  is  to model the players as Finite
     State Machines, or Finite Automata (FA). These can be thought of as a
     simple  flow chart governing behaviour in the "next" play of the game
     depending upon previous plays. For example:

          100 Play X
          110 If opponent plays X go to 100
          120 Play Y
          130 If opponent plays X go to 100 else go to 120
     represents a strategy that does whatever its opponent did  last,  and
     begins  by  playing  X,  known as "Tit-For-Tat." (Axelrod 1982). Such
     machines can readily be encoded as bit-strings. Consider the encoding
     "1 0 1 0 0 1" to represent TFT.  The first three bits, "1 0 1",  are
     state 0. The first bit, "1", is interpreted as "Play X."  The  second
     bit, "0", is interpreted as "if opponent plays X go to state 0";  the
     third bit, "1", is interpreted as "if the opponent  plays  Y,  go  to
     state  1."   State 1 has a similar interpretation. Crossing over such
     bit-strings always yields valid strategies.
     SIMULATIONs of these machines playing the Prisoner's  Dilemma  have
     been undertaken (Axelrod 1987, Fogel 1993, Miller 1989).
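     The decoding just described can be sketched as follows; the state
     layout (move bit, next state if opponent plays X, next state if
     opponent plays Y) follows the text above, everything else is
     illustrative:

```python
def decode_fsm(bits):
    """Decode a 6-bit string into two states of
    (move, next_state_if_opp_X, next_state_if_opp_Y);
    move bit '1' means play X, '0' means play Y."""
    states = []
    for i in (0, 3):
        move = 'X' if bits[i] == '1' else 'Y'
        states.append((move, int(bits[i + 1]), int(bits[i + 2])))
    return states

def run(bits, opponent_moves):
    """Play the machine against a fixed sequence of opponent moves."""
    states, state, moves = decode_fsm(bits), 0, []
    for opp in opponent_moves:
        move, next_x, next_y = states[state]
        moves.append(move)
        state = next_x if opp == 'X' else next_y
    return moves

# "101001" is Tit-For-Tat: it opens with X, then copies the
# opponent's previous move.
print(run("101001", "XYXY"))   # -> ['X', 'X', 'Y', 'X']
```

     Since every 6-bit string decodes to some valid machine, crossover and
     mutation can never produce an illegal strategy.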

     Alternative   representations  of  game  players  include  CLASSIFIER
     SYSTEMs (Marimon, McGrattan and Sargent 1990, [GOLD89])  and  neural
     networks (Fogel and Harrald 1994), though not  necessarily  evolved
     with a GA ((Fogel 1993) and (Fogel  and  Harrald  1994)  use  an
     Evolutionary Program).  Chellapilla and Fogel (1999) have evolved a
     neural network which can play checkers (draughts)  at  near  expert
     level.

     Other methods of evolving a population can be found in Lindgren 1991,
     Glance and Huberman 1993 and elsewhere.

     A  GA  for  playing  the  game  "Mastermind"  has  been produced. See
     http://kal-el.ugr.es/mastermind
JOB-SHOP SCHEDULING
     The Job-Shop Scheduling  Problem  (JSSP)  is  a  very  difficult  NP-
     complete problem which, so far, seems best addressed by sophisticated
     branch and bound search techniques.   GA  researchers,  however,  are
     continuing  to  make  progress  on  it.   (Davis  85)  started off GA
     research on  the  JSSP,  (Whitley  89)  reports  on  using  the  edge
     RECOMBINATION operator (designed initially for the TSP) on JSSPs too.
     More recent work includes (Nakano 91),(Yamada & Nakano 92), (Fang  et
     al.  93).   The  latter  three  report increasingly better results on
     using GAs on fairly large benchmark JSSPs (from Muth & Thompson  63);
     none  consistently  outperforms branch & bound search yet, but  they
     well on the way. A crucial aspect  of  such  work  (as  with  any  GA
     application)  is  the  method  used to encode schedules. An important
     aspect of some of the recent work on this is that better results have
     been  obtained  by  rejecting the conventional wisdom of using binary
     representations  (as  in  (Nakano  91))  in  favor  of  more   direct
     encodings.  In  (Yamada  & Nakano 92), for example, a GENOME directly
     encodes operation completion times, while in (Fang et al. 93) genomes
     represent  implicit instructions for building a schedule. The success
     of these latter techniques, especially since their  applications  are
     very  important  in  industry, should eventually spawn advances in GA
     theory.
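     As one hypothetical illustration of such a direct encoding (in the
     spirit of, but not identical to, the cited papers), the genome can be
     a list of job indices, where the k-th occurrence of job j means
     "schedule job j's k-th operation as early as machine and job
     availability allow":

```python
def makespan(genome, jobs):
    """jobs[j] is job j's ordered list of (machine, duration) operations;
    returns the makespan of the schedule the genome decodes to."""
    n_machines = 1 + max(m for job in jobs for m, _ in job)
    machine_free = [0] * n_machines   # earliest free time per machine
    job_free = [0] * len(jobs)        # earliest time each job can continue
    next_op = [0] * len(jobs)         # next unscheduled operation per job
    for j in genome:
        machine, duration = jobs[j][next_op[j]]
        start = max(machine_free[machine], job_free[j])
        machine_free[machine] = job_free[j] = start + duration
        next_op[j] += 1
    return max(job_free)

# Two jobs, two machines: job 0 = m0 for 3 then m1 for 2;
# job 1 = m1 for 2 then m0 for 1.
jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 1)]]
print(makespan([0, 1, 0, 1], jobs))   # -> 5
```

     A decoder of this kind makes every genome correspond to some legal
     schedule, which is exactly what lets plain crossover and mutation be
     applied without repair operators.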

     Concerning the point of using GAs at all on hard job-shop  scheduling
     problems,  the  same  goes here as suggested above for `Timetabling':
     The  GA  approach  enables  relatively  arbitrary   constraints   and
     objectives  to  be incorporated painlessly into a single OPTIMIZATION
     method.  It  is  unlikely  that  GAs  will   outperform   specialized
     knowledge-based  and/or  conventional  OR-based  approaches  to  such
     problems in terms of raw solution quality,  however  GAs  offer  much
     greater  simplicity  and flexibility, and so, for example, may be the
     best method for quick high-quality solutions, rather than finding the
     best  possible  solution at any cost. Also, of course, hybrid methods
     will have a lot to offer, and GAs are far easier to parallelize  than
     typical knowledge-based/OR methods.

     Similar  to  the  JSSP  is  the  Open Shop Scheduling Problem (OSSP).
     (Fang et al. 93) reports an initial attempt at using  GAs  for  this.
     Ongoing  results  from  the same source shows reliable achievement of
     results within less than 0.23% of optimal on moderately  large  OSSPs
     (so  far,  up  to  20x20), including an improvement on the previously
     best known solution for a benchmark 10x10 OSSP. A simpler form of job
     shop  problem  is the Flow-Shop Sequencing problem; recent successful
     work on applying GAs to this includes (Reeves 93).

     Other scheduling problems

     In contrast  to  job  shop  scheduling  some  maintenance  scheduling
     problems  consider  which  activities  to  schedule  within a planned
     maintenance period, rather than seeking to minimise  the  total  time
     taken  by the activities. The constraints on which parts may be taken
     out of service for  maintenance  at  particular  times  may  be  very
     complex,  particularly as they will in general interact. Some initial
     work is given in (Langdon, 1995).

     References

     Davis, L.  (1985)  "Job-Shop  Scheduling  with  Genetic  Algorithms",
     [ICGA85], 136-140.

     Muth,  J.F. & Thompson, G.L. (1963) "Industrial Scheduling". Prentice
     Hall, Englewood Cliffs, NJ, 1963.

     Nakano, R.  (1991)  "Conventional  Genetic  Algorithms  for  Job-Shop
     Problems", [ICGA91], 474-479.

     Reeves,  C.R.  (1993)  "A Genetic Algorithm for Flowshop Sequencing",
     Coventry Polytechnic Working Paper, Coventry, UK.

     Whitley, D., Starkweather,  T.  &  D'Ann  Fuquay  (1989)  "Scheduling
     Problems  and  Traveling  Salesmen:  The  Genetic  Edge Recombination
     Operator", [ICGA89], 133-140.

     Fang, H.-L., Ross,  P.,  &  Corne  D.  (1993)  "A  Promising  Genetic
     Algorithm  Approach  to Job-Shop Scheduling, Rescheduling & Open-Shop
     Scheduling Problems", [ICGA93], 375-382.

     Yamada, T. & Nakano, R. (1992) "A  Genetic  Algorithm  Applicable  to
     Large-Scale Job-Shop Problems", [PPSN92], 281-290.

     Langdon,  W.B.  (1995)  "Scheduling  Planned  Maintenance of the (UK)
     National Grid", cs.ucl.ac.uk/genetic/papers/grid_aisb-95.ps

MANAGEMENT SCIENCES
     "Applications of EA in management science and closely related  fields
     like organizational ecology is a domain that has been covered by some
     EA researchers - with considerable bias towards scheduling  problems.
     Since  I believe that EA have considerable potential for applications
     outside  the  rather  narrow  domain  of   scheduling   and   related
     combinatorial  problems,  I  started  collecting references about the
     status quo of EA-applications in  management  science.   This  report
     intends  to  make  available  my findings to other researchers in the
     field. It is a short  overview  and  lists  some  230  references  to
     current as well as finished research projects.  [..]

     "At  the end of the paper, a questionnaire has been incorporated that
     may be used for this purpose. Other comments are also appreciated."

     --- from the Introduction of (Nissen 93)

     References

     Nissen, V. (1993) "Evolutionary Algorithms in Management Science:  An
     Overview  and List of References", Papers on Economics and Evolution,
     edited by the European Study Group for Evolutionary Economics.   This
     report     is     also     avail.     via     anon.      FTP     from
     ftp.gwdg.de/pub/msdos/reports/wi/earef.eps

     Boulding, K.E. (1991) "What is evolutionary economics?",  Journal  of
     Evolutionary Economics, 1, 9-17.

NON-LINEAR FILTERING
     New  connections  between GENETIC ALGORITHMs and Non Linear Filtering
     Theory have been established.  GAs  have  already  been  successfully
     applied  to  a  large  class of non-linear filtering problems such as
     RADAR / SONAR / GPS signal processing.  This relatively new branch of
     GA  application  has  also  led  to new results on the convergence of
     GAs: large deviations, fluctuations, etc.

     Some preprints and references on this topic are available in the  web
     page: http://www-sv.cict.fr/lsp/Delmoral/index.html

     The  new  results  also  point out some natural connections between
     genetic-type algorithms,  information  theory,  non-linear  filtering
     theory, and interacting and branching particle systems.

TIMETABLING
     This  has  been addressed quite successfully with GAs.  A very common
     manifestation of this kind of problem is the timetabling of exams  or
     classes in Universities, etc.

     The  first application of GAs to the timetabling problem was to build
     the schedule  of  the  teachers  in  an  Italian  high  school.   The
     research,  conducted at the Department of Electronics and Information
     of Politecnico di Milano, Italy, showed that a GA was as good as Tabu
     Search,  and  better  than  simulated  annealing,  at finding teacher
     schedules satisfying a number of  hard  and  soft  constraints.   The
     software package developed is now in current use in some high schools
     in Milano. (Colorni et al 1990)

     At  the  Department  of  Artificial   Intelligence,   University   of
     Edinburgh, timetabling the MSc exams is now done using a GA (Corne et
     al. 93, Fang 92). An example  of  the  use  of  GAs  for  timetabling
     classes is (Abramson & Abela 1991).

     In  the  exam  timetabling  case,  the  FITNESS function for a GENOME
     representing a timetable involves computing degrees of punishment for
     various  problems  with  the timetable, such as clashes, instances of
     students having to take  consecutive  exams,  instances  of  students
     having  (eg)  three  or  more  exams  in one day, the degree to which
     heavily-subscribed exams occur late in  the  timetable  (which  makes
     marking harder), overall length of timetable, etc. The modular nature
     of the fitness function holds the key to the main potential  strength
     using  GAs  for  this  sort of thing as opposed to using conventional
     search and/or constraint programming methods. The  power  of  the  GA
     approach  is  the  ease  with  which it can handle arbitrary kinds of
     constraints and  objectives;  all  such  things  can  be  handled  as
     weighted  components of the fitness function, making it easy to adapt
     the GA to the  particular  requirements  of  a  very  wide  range  of
     possible overall objectives.  Very few other timetabling methods, for
     example, deal with such objectives at all, which shows how  difficult
     it  is  (without  GAs)  to  graft  the  capacity  to handle arbitrary
     objectives onto the basic "find  shortest-length  timetable  with  no
     clashes"  requirement.  The  proper  way  to  weight/handle different
     objectives in the fitness function in  relation  to  the  general  GA
     dynamics remains, however, an important research problem!
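     A weighted-penalty fitness of the kind described might be sketched as
     follows; the particular penalty terms and weights here are
     illustrative assumptions, not components of the cited systems:

```python
def timetable_penalty(slot_of, student_exams, weights):
    """slot_of maps exam -> time slot; student_exams maps each student
    to the exams they take.  Lower penalty = fitter timetable."""
    clashes = consecutive = 0
    for exams in student_exams.values():
        slots = sorted(slot_of[e] for e in exams)
        for a, b in zip(slots, slots[1:]):
            if a == b:
                clashes += 1          # two exams in the same slot
            elif b - a == 1:
                consecutive += 1      # back-to-back exams
    return (weights['clash'] * clashes
            + weights['consecutive'] * consecutive)

slot_of = {'maths': 0, 'physics': 0, 'chemistry': 1}
student_exams = {'ann': ['maths', 'physics'],
                 'bob': ['physics', 'chemistry']}
print(timetable_penalty(slot_of, student_exams,
                        {'clash': 10, 'consecutive': 1}))   # -> 11
```

     Adding a new constraint is then just a matter of adding another
     weighted term to the sum, which is the flexibility argued for above.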

     GAs thus offer a combination of simplicity, flexibility & speed which
     competes very favorably with other approaches, but  are  unlikely  to
     outperform   knowledge-based  (etc)  methods  if  the  best  possible
     solution is required at any cost. Even then,  however,  hybridisation
     may yield the best of both worlds; also, the ease (if the hardware is
     available!)  of implementing GAs in parallel enhances the possibility
     of  using  them for good, fast solutions to very hard timetabling and
     similar problems.

     References

     Abramson & Abela (1991) "A Parallel Genetic Algorithm for Solving the
     School  Timetabling  Problem",  Technical  Report,  Division of I.T.,
     C.S.I.R.O,  April  1991.   (Division   of   Information   Technology,
     C.S.I.R.O.,  c/o  Dept.  of  Communication  & Electronic Engineering,
     Royal Melbourne Institute of  Technology,  PO  BOX  2476V,  Melbourne
     3001, Australia)

     Colorni  A.,  M. Dorigo & V. Maniezzo (1990).  Genetic Algorithms And
     Highly Constrained Problems: The Time-Table Case. Proceedings of  the
     First International Workshop on Parallel Problem Solving from Nature,
     Dortmund, Germany, Lecture Notes in Computer Science  496,  Springer-
     Verlag,                                                        55-59.
     http://iridia.ulb.ac.be/dorigo/dorigo/conferences/IC.01-PPSN1.ps.gz

     Colorni A., M. Dorigo & V. Maniezzo (1990).   Genetic  Algorithms:  A
     New  Approach  to  the Time-Table Problem. NATO ASI Series, Vol.F 82,
     COMBINATORIAL OPTIMIZATION,  (Ed.  M.Akguel  and  others),  Springer-
     Verlag,                                                      235-239.
     http://iridia.ulb.ac.be/dorigo/dorigo/conferences/IC.02-NATOASI90.ps.gz

     Colorni  A.,  M. Dorigo & V. Maniezzo (1990).  A Genetic Algorithm to
     Solve  the  Timetable  Problem.    Technical   Report   No.   90-060,
     Politecnico               di              Milano,              Italy.
     http://iridia.ulb.ac.be/dorigo/dorigo/tec.reps/TR.01-TTP.ps.gz

     Corne, D., Fang, H.-L. & Mellish, C. (1993) "Solving the Modular Exam
     Scheduling  Problem  with  Genetic  Algorithms".   Proc. of 6th Int'l
     Conf.  on  Industrial  and  Engineering  Applications  of  Artificial
     Intelligence & Expert Systems, ISAI.

     Fang,   H.-L.   (1992)   "Investigating   GAs  for  scheduling",  MSc
     Dissertation,   University   of   Edinburgh   Dept.   of   Artificial
     Intelligence, Edinburgh, UK.
------------------------------