The GENETIC Algorithms FAQ (part 3)
Reposted by itmwk, 2004-02-27

Subject: Q3: Who is concerned with EAs?

     EVOLUTIONARY COMPUTATION attracts researchers and people of quite
     dissimilar disciplines, i.e. EC is an interdisciplinary research
     field:

Computer scientists
     Want  to  find  out  about the properties of sub-symbolic information
     processing with EAs and about learning,  i.e.   adaptive  systems  in
     general.

     They also build the hardware necessary to enable future EAs
     (precursors are already beginning to emerge) to tackle huge
     real-world problems; the term "massively parallel computation"
     [HILLIS92] springs to mind.

Engineers
     Of many kinds want to exploit the capabilities of EAs in many
     areas to solve their applications, esp. OPTIMIZATION problems.

Roboticists
     Want  to  build  MOBOTs (MOBile ROBOTs, i.e. R2D2's and #5's cousins)
     that navigate through uncertain ENVIRONMENTs, without using  built-in
     "maps".   The  MOBOTS  thus  have to adapt to their surroundings, and
     learn what they can do "move-through-door" and what they can't "move-
     through-wall" on their own by "trial-and-error".

Cognitive scientists
     Might view CFS as a possible apparatus to describe models of thinking
     and cognitive systems.

Physicists
     Use EC hardware, e.g. Hillis' (Thinking Machine  Corp.'s)  Connection
     Machine  to  model  real  world  problems  which include thousands of
     variables, that run "naturally" in parallel, and thus can be modelled
     more  easily  and  esp.   "faster"  on  a parallel machine, than on a
     serial "C" one.

Biologists
     Are finding EAs useful when it comes to  protein  folding  and  other
     such bio-computational problems (see Q2).

     EAs  can  also  be used to model the behaviour of real POPULATIONs of
     organisms.  Some biologists are hostile to modeling,  but  an  entire
     community  of  Population  Biologists  arose  with  the 'evolutionary
     synthesis' of the 1930's created by the polymaths R.A. Fisher, J.B.S.
     Haldane, and S. Wright.  Wright's model of SELECTION in small
     populations (thereby avoiding local optima) is of current interest
     to both biologists and ECers -- populations are naturally parallel.

     A  good  exposition  of  current  population  Biology  modeling is J.
     Maynard Smith's text Evolutionary Genetics.  Richard Dawkins'
     Selfish Gene and Extended Phenotype are unparalleled (sic!) prose
     expositions
     of  evolutionary  processes.   Rob  Collins'  papers  are   excellent
     parallel  GA  models of evolutionary processes (available in [ICGA91]
     and by FTP from ftp.cognet.ucla.edu/pub/alife/papers/ ).

     As fundamental motivation, consider Fisher's comment:  "No  practical
     biologist  interested  in  (e.g.) sexual REPRODUCTION would be led to
     work out the detailed consequences experienced  by  organisms  having
     three  or more sexes; yet what else should [s/]he do if [s/]he wishes
     to understand why the sexes are, in fact, always two?"  (Three
     sexes would make for even weirder grammar, [s/]he said...)

Chemists
     And  in particular biochemists and molecular chemists, are interested
     in problems such as the conformational analysis of molecular clusters
     and  related  problems in molecular sciences.  The application of GAs
     to molecular systems has opened an interesting area of  research  and
     the number of chemists involved in it increases day-by-day.

     Some typical research topics include:

     o  protein folding;
     o  conformational analysis and energy minimization;
     o  docking algorithms for drug-design;
     o  solvent site prediction in macromolecules.

     Several papers have been published in journals such as Journal of
     Computational Chemistry and Journal of Computer-Aided Design.

     Some interesting WWW sites related to  the  applications  of  GAs  to
     chemistry (or molecular science in general) include:

     o  http://garage.cps.msu.edu/projects/biochem/biochem.html about
        GAs in biochemistry (water site prediction, drug-design and
        protein folding);
     o  http://www.tc.cornell.edu/Edu/SPUR/SPUR94/Main/John.html about
        the application of GAs to the search of conformational energy
        minima;
     o  http://cmp.ameslab.gov/cmp/CMP_Theory/gsa/gen2.html about how,
        by using a GA in combination with a Tight-binding model, David
        Deaven and Kai-Ming Ho found fullerene cages (including C60)
        starting from random coordinates.
     See also Q2 for applications in biocomputing.

Philosophers
     and some other really curious people may also be interested in EC for
     various reasons.

------------------------------

Subject: Q4: How many EAs exist? Which?

The All Stars
     There  are  currently  3  main  paradigms  in  EA  research:  GENETIC
     ALGORITHMs,   EVOLUTIONARY  PROGRAMMING,  and  EVOLUTION  STRATEGIEs.
     CLASSIFIER SYSTEMs and GENETIC PROGRAMMING are OFFSPRING  of  the  GA
     community.   Besides  this  leading  crop,  there  are numerous other
     different approaches, alongside hybrid experiments, i.e. there exist
     pieces of software residing in some researchers' computers, that have
     been described in papers in conference proceedings, and may someday
     prove  useful  on certain tasks. To stay in EA slang, we should think
     of these evolving strands as BUILDING BLOCKs,  that  when  recombined
     someday,  will  produce  new  offspring  and  give  birth  to  new EA
     paradigm(s).

     One such interesting offspring is the Memetic Algorithm.  This  is  a
     hybrid  evolutionary  algorithm,  which  makes  use  of  local search
     operators.               For               details,               see
     http://www.densis.fee.unicamp.br/~moscato/memetic_home.html
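
     As an illustration, one memetic generation can be sketched in a few
     lines of Python.  This is a hypothetical minimal version, not
     Moscato's implementation: the function names, the sphere test
     function, and all parameter values below are made up for the sketch.

```python
import random

def local_search(x, f, step=0.1, iters=20):
    """Greedy hill-climbing refinement of a single solution."""
    best = list(x)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in best]
        if f(cand) < f(best):                 # minimization
            best = cand
    return best

def memetic_step(pop, f):
    """One generation: mutate each parent, refine the child by local
    search (the 'memetic' part), and keep the better of the two."""
    new_pop = []
    for x in pop:
        child = [xi + random.gauss(0, 0.5) for xi in x]   # mutation
        child = local_search(child, f)                    # local search
        new_pop.append(min([x, child], key=f))            # selection
    return new_pop

# toy usage: minimize the sphere function in three dimensions
sphere = lambda v: sum(vi * vi for vi in v)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(30):
    pop = memetic_step(pop, sphere)
```

     The defining design choice is the inner local_search call: a plain
     EA would go straight from mutation to selection.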

Promising Rookies
     As far as "solving complex function and COMBINATORIAL OPTIMIZATION
     tasks" is concerned, Davis' work on real-valued representations and
     adaptive operators should be mentioned (Davis 89), as should
     Whitley's GENITOR system, incorporating ranking and a "steady
     state" mechanism (Whitley 89), Goldberg's "messy GAs", involving
     adaptive representations (Goldberg 91), and Eshelman's CHC
     algorithm (Eshelman 91).  For real FUNCTION OPTIMIZATION,
     Differential EVOLUTION seems hard to beat in terms of convergence
     speed as well as simplicity: with just three control variables,
     tuning is particularly easy to do.
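
     For illustration, the classic DE/rand/1/bin scheme can be sketched
     as below.  This is a minimal sketch, not reference code; the values
     NP=20, F=0.8, CR=0.9 are merely common illustrative choices for the
     three control variables.

```python
import random

def differential_evolution(f, bounds, NP=20, F=0.8, CR=0.9, gens=100):
    """Minimal DE/rand/1/bin: the three control variables are the
    population size NP, the differential weight F, and the crossover
    rate CR."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    for _ in range(gens):
        for i in range(NP):
            # pick three distinct members other than the current one
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = random.randrange(dim)     # force at least one new gene
            trial = [a[j] + F * (b[j] - c[j])
                     if (random.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):         # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

# usage: minimize the two-dimensional sphere function
best = differential_evolution(lambda v: sum(vi * vi for vi in v),
                              [(-5, 5), (-5, 5)])
```

     Note the simplicity the text praises: apart from the stopping
     criterion, NP, F and CR are the only knobs.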

     For  "the  design  of  robust  learning  systems",  i.e.  the   field
     characterized  by  CFS,  Holland's (1986) CLASSIFIER SYSTEM, with its
     state-of-the-art implementation CFS-C  (Riolo  88),  we  should  note
     developments  in  SAMUEL  (Grefenstette  89), GABIL (De Jong & Spears
     91), and GIL (Janikow 91).

     References

     Davis,  L.  (1989)  "Adapting  operator  probabilities   in   genetic
     algorithms", [ICGA89], 60-69.

     De  Jong  K.A.  &  Spears  W. (1991) "Learning concept classification
     rules using genetic algorithms". Proc. 12th IJCAI,  651-656,  Sydney,
     Australia: Morgan Kaufmann.

     Dorigo  M.  &  E.  Sirtori (1991)."ALECSYS: A Parallel Laboratory for
     Learning Classifier Systems". Proceedings of the Fourth International
     Conference  on  Genetic  Algorithms, San Diego, California, R.K.Belew
     and L.B.Booker (Eds.), Morgan Kaufmann, 296-302.

     Dorigo M. (1995). "ALECSYS and the AutonoMouse: Learning to Control a
     Real  Robot by Distributed Classifier Systems". Machine Learning, 19,
     3, 209-240.

     Eshelman, L.J. et al. (1991) "Preventing premature convergence in
     genetic algorithms by preventing incest", [ICGA91], 115-122.

     Goldberg,  D. et al. (1991) "Don't worry, be messy", [ICGA91], 24-30.

     Grefenstette, J.J. (1989) "A system for learning  control  strategies
     with genetic algorithms", [ICGA89], 183-190.

     Holland,  J.H.  (1986)  "Escaping  brittleness:  The possibilities of
     general-purpose learning algorithms applied  to  parallel  rule-based
     systems".   In R. Michalski, J. Carbonell, T. Mitchell (eds), Machine
     Learning: An Artificial  Intelligence  Approach.  Los  Altos:  Morgan
     Kaufmann.

     Janikow   C.  (1991)  "Inductive  learning  of  decision  rules  from
     attribute-based examples:  A  knowledge-intensive  Genetic  Algorithm
     approach". TR91-030, The University of North Carolina at Chapel Hill,
     Dept. of Computer Science, Chapel Hill, NC.

     Riolo,  R.L.  (1988)  "CFS-C:  A  package   of   domain   independent
     subroutines  for  implementing classifier systems in arbitrary, user-
     defined  environments".   Logic  of  computers  group,  Division   of
     computer science and engineering, University of Michigan.

     Whitley,  D.  et  al.  (1989)  "The  GENITOR  algorithm and selection
     pressure: why rank-based allocation of reproductive trials is  best",
     [ICGA89], 116-121.
Subject: Q4.1: What about Alife systems, like Tierra and VENUS?

     None  of  these  are EVOLUTIONARY ALGORITHMs, but all of them use the
     evolutionary metaphor as their "playing field".

Tierra
     Synthetic organisms have been created based on a computer metaphor of
     organic  life in which CPU time is the ``energy'' resource and memory
     is the ``material'' resource.  Memory is organized into informational
     patterns  that  exploit  CPU  time  for  self-replication.   MUTATION
     generates new forms, and EVOLUTION proceeds by natural  SELECTION  as
     different GENOTYPEs compete for CPU time and memory space.

     Observation  of  nature  shows that evolution by natural selection is
     capable of both OPTIMIZATION and creativity.   Artificial  models  of
     evolution  have  demonstrated the optimizing ability of evolution, as
     exemplified by the field of GENETIC ALGORITHMs.  The creative aspects
     of evolution have been more elusive to model.  The difficulty derives
     in part from a tendency of models  to  specify  the  meaning  of  the
     ``genome''  of  the  evolving  entities, precluding new meanings from
     emerging.  I will present a natural model of evolution  demonstrating
     both  optimization  and  creativity,  in which the GENOME consists of
     sequences of executable machine code.

     From a single rudimentary ancestral ``creature'', very quickly  there
     evolve  parasites,  which  are  not  able  to  replicate in isolation
     because they lack a large portion  of  the  genome.   However,  these
     parasites  search  for the missing information, and if they locate it
     in a nearby creature, parasitize the information from the neighboring
     genome, thereby effecting their own replication.

     In  some  runs,  hosts  evolve immunity to attack by parasites.  When
     immune hosts appear, they often increase  in  frequency,  devastating
     the  parasite POPULATIONs.  In some runs where the community comes to
     be dominated by immune hosts, parasites evolve that are resistant  to
     immunity.

     Hosts  sometimes  evolve  a  response  to  parasites that goes beyond
     immunity,  to  actual  (facultative)  hyper-parasitism.   The  hyper-
     parasite  deceives  the  parasite  causing the parasite to devote its
     energetic resources to replication of the hyper-parasite genome.
     This  drives the parasites to extinction.  Evolving in the absence of
     parasites,  hyper-parasites  completely   dominate   the   community,
     resulting  in  a relatively uniform community characterized by a high
     degree   of   relationship   between   INDIVIDUALs.    Under    these
     circumstances,  sociality evolves, in the form of creatures which can
     only replicate in aggregations.

     The cooperative behavior of the  social  hyper-parasites  makes  them
     vulnerable to a new class of parasites.  These cheaters, hyper-hyper-
     parasites, insert themselves between cooperating social  individuals,
     deceiving the social creatures, causing them to replicate the genomes
     of the cheaters.

     The only genetic change imposed on the simulator is random bit  flips
     in  the  machine  code  of the creatures.  However, it turns out that
     parasites  are  very  sloppy  replicators.   They  cause  significant
     RECOMBINATION  and  rearrangement  of  the genomes.  This spontaneous
     sexuality is a powerful force for evolutionary change in the  system.

     One  of the most interesting aspects of this instance of life is that
     the bulk of the evolution  is  based  on  adaptation  to  the  biotic
     ENVIRONMENT rather than the physical environment.  It is co-evolution
     that drives the system.

     --- "Tierra announcement" by Tom Ray (1991)
  How to get Tierra?
     Tierra is available (source and executables, for Unix and NT) from
     alife.santafe.edu/pub/SOFTWARE/Tierra .

     Related work

     David Bennett <dmb@pfxcorp.com> reported in March 2000: Much new work
     has   been   done    in    Tierra    since    1993.     Thomas    Ray
     <tray@mail.nhn.ou.edu>  is  now  working in Japan.  I have been using
     another similar system called Avida.  It has some advantages,  and  a
     significant  body  of  research  results.  The  contact  for Avida is
     <avida@krl.caltech.edu>.

     References

     Ray, T. S. (1991)  "Is it alive, or is it GA?" in [ICGA91], 527--534.

     Ray,  T.  S.  (1991)   "An  approach  to  the  synthesis of life." in
     [ALIFEII], 371--408.

     Ray, T. S.  (1991)  "Population dynamics of digital organisms." in
     [ALIFEII].

     Ray,   T.   S.    (1991)   "Evolution  and  optimization  of  digital
     organisms."  Scientific Excellence in Supercomputing:  The  IBM  1990
     Contest Prize Papers, Eds. Keith R. Billingsley, Ed Derohanes, Hilton
     Brown, III.  Athens, GA, 30602, The Baldwin Press, The University  of
     Georgia.

     Ray,  T.  S.   (1992) "Evolution, ecology and optimization of digital
     organisms."  Santa Fe Institute working paper 92-08-042.

     Ray, T. S.  "Evolution, complexity, entropy, and artificial
     reality."  Submitted to Physica D.

     Ray, T. S.  (1993) "An evolutionary approach to synthetic biology:
     Zen and the art of creating life."  Artificial Life 1(1).

VENUS
     Steen Rasmussen's (et al.) VENUS I+II "coreworlds"  as  described  in
     [ALIFEII]  and  [LEVY92],  are  inspired by A.K. Dewdney's well-known
     article (Dewdney 1984). Dewdney proposed a game called  "Core  Wars",
     in  which hackers create computer programs that battle for control of
     a computer's "core" memory (Strack 93).  Since computer programs  are
     just  patterns  of  information, a successful program in core wars is
     one that replicates its pattern within the memory, so that eventually
     most  of  the  memory  contains  its  pattern rather than that of the
     competing program.

     VENUS is a modification of Core Wars in which the computer programs
     can mutate; thus the pseudo-assembler code creatures of VENUS evolve
     steadily.  Furthermore, each memory location is endowed with
     "resources" which, like sunshine, are added at a steady rate.  A
     program must have sufficient resources in the regions  of  memory  it
     occupies  in  order  to  execute.   The input of resources determines
     whether the VENUS ecosystem is a "jungle" or a "desert."   In  jungle
     ENVIRONMENTs,  Rasmussen  et al. observe the spontaneous emergence of
     primitive "copy/split" organisms starting  from  (structured)  random
     initial conditions.

     --- [ALIFEII], p.821

     Dewdney,  A.K.  (1984) "Computer Recreations: In the Game called Core
     War Hostile Programs Engage in a Battle of Bits", Sci. Amer.  250(5),
     14-22.
     Farmer  &  Belin  (1992)  "Artificial  Life:  The  Coming Evolution",
     [ALIFEII], 815-840.

     Rasmussen, et al. (1990) "The Coreworld: Emergence and  Evolution  of
     Cooperative  Structures  in  a Computational Chemistry", [FORREST90],
     111-134.

     Rasmussen,  et  al.  (1992)  "Dynamics   of   Programmable   Matter",
     [ALIFEII], 211-254.

     Strack    (1993)    "Core    War   Frequently   Asked   Questions   (
     rec.games.corewar    FAQ)"    Avail.    by    anon.      FTP     from
     rtfm.mit.edu/pub/usenet/news.answers/games/corewar-faq.Z
PolyWorld
     Larry  Yaeger's  PolyWorld as described in [ALIFEIII] and [LEVY92] is
     available          via          anonymous          FTP           from
     alife.santafe.edu/pub/SOFTWARE/Polyworld/

     "The  subdirectories in this "polyworld" area contain the source code
     for the PolyWorld ecological simulator, designed and written by Larry
     Yaeger, and Copyright 1990, 1991, 1992 by Apple Computer.

     PostScript  versions  of  my ARTIFICIAL LIFE III technical paper have
     now been added to the directory.  These should be directly  printable
     from most machines.  Because some unix systems' "lpr" commands cannot
     handle very large files (ours at least), I have split the paper  into
     Yaeger.ALife3.1.ps and Yaeger.ALife3.2.ps.  These files can be ftp-ed
     in "ascii" mode.  For unix users  I  have  also  included  compressed
     versions  of  both these files (indicated by the .Z suffix), but have
     left the uncompressed versions around for people connecting from non-
     unix  systems.   I  have  not  generated  PostScript  versions of the
     images, because they are color and the resulting files are  much  too
     large  to  store,  retrieve,  or  print.   Accordingly, though I have
     removed a Word-formatted version of the textual  body  of  the  paper
     that  used  to  be  here, I have left a Word-formatted version of the
     color images.  If you wish to acquire it, you will need  to  use  the
     binary transfer mode to move it to first your unix host and then to a
     Macintosh (unless Word on a PC can read it - I don't know),  and  you
     may  need to do something nasty like use ResEdit to set the file type
     and creator to match those of a standard Word document (Type =  WDBN,
     Creator = MSWD).  [..]"

     --- from the README by Larry Yaeger <larryy@apple.com>

General Alife repositories?
     Also, all of the following FTP sites carry ALIFE related info:

     ftp.cognet.ucla.edu/pub/alife/
     life.anu.edu.au/pub/complex_systems/alife/
     ftp.cogs.susx.ac.uk/pub/reports/csrp/
     xyz.lanl.gov/nlin-sys/
     alife.santafe.edu/pub/

------------------------------

Subject: Q5: What about all this Optimization stuff?

     Just think of an OPTIMIZATION problem as a black box.  A large black
     box.  As large as, for example, a Coca-Cola vending machine.  Now, we
     don't know anything about the inner workings of this box, but we see
     that there are some regulators to play with, and of course we know
     that we want to have a bottle of the real thing...

     Putting this everyday problem into a mathematical model,  we  proceed
     as follows:

     (1) we  label all the regulators with x and a number starting from 1;
         the result is a vector x, i.e.  (x_1,...,x_n),  where  n  is  the
         number of visible regulators.

     (2) we must find an objective function, in this case it's obvious, we
         want to get k bottles of the real thing, where k is equal  to  1.
          [some might want a "greater or equal" here, but we restricted
          ourselves to the visible regulators (we all know that sometimes
          a "kick in the right place" gets us more than 1, but I have no
          idea how to put this mathematically...)]

     (3) thus, in the language some mathematicians  prefer  to  speak  in:
         f(x)  =  k  =  1. So, what we have here is a maximization problem
         presented in a form we know from some  boring  calculus  lessons,
          and we also know that there are at least a dozen utterly
         uninteresting techniques to solve problems presented this  way...
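
     In code, the black box of steps (1) to (3) is simply a function we
     may evaluate but not inspect.  A toy sketch follows; the machine's
     hidden mechanism below is, of course, entirely made up.

```python
def vending_machine(x):
    """The black box: maps regulator settings x = (x_1, ..., x_n) to the
    number k of bottles dispensed.  The optimizer may call this function
    but cannot look inside.  (The 'secret' internals are illustrative.)"""
    secret = (3.0, 7.0, 1.0)
    hit = all(abs(xi - si) < 0.5 for xi, si in zip(x, secret))
    return 1 if hit else 0

# the optimizer's view: choose a setting x, evaluate f(x), observe k
assert vending_machine((3.1, 6.9, 1.0)) == 1   # the real thing
assert vending_machine((0.0, 0.0, 0.0)) == 0   # nothing happens
```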

What can we do in order to solve this problem?
     We  can  either try to gain more knowledge or exploit what we already
     know about the interior of the black box. If the  objective  function
     turns  out  to  be smooth and differentiable, analytical methods will
     produce the exact solution.

     If this turns out to be impossible, we  might  resort  to  the  brute
     force  method  of  enumerating the entire SEARCH SPACE.  But with the
     number of possibilities growing exponentially in  n,  the  number  of
     dimensions  (inputs),  this  method  becomes infeasible even for low-
     dimensional spaces.
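
     The exponential growth is easy to make concrete.  Suppose we try
     only a coarse grid of 10 settings per regulator (an illustrative
     choice); the evaluation count is then 10^n:

```python
settings_per_regulator = 10          # coarse grid per input
for n in (2, 6, 12, 20):
    evaluations = settings_per_regulator ** n
    print(f"n = {n:2d} regulators -> {evaluations:.0e} evaluations")

# at one microsecond per evaluation, n = 20 already needs
# 10**20 / 10**6 = 10**14 seconds -- over three million years
```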

     Consequently, mathematicians  have  developed  theories  for  certain
     kinds  of  problems  leading  to specialized OPTIMIZATION procedures.
     These  algorithms  perform  well  if  the  black  box  fulfils  their
     respective  prerequisites.   For example, Dantzig's simplex algorithm
     (Dantzig 66) probably  represents  the  best  known  multidimensional
     method capable of efficiently finding the global optimum of a linear,
     hence convex, objective function in a search space limited by  linear
     constraints.   (A  USENET  FAQ on linear programming is maintained by
     Professor   Robert   Fourer   <4er@iems.nwu.edu>   (and   "nonlinear-
     programming-faq")  that  is  posted monthly to sci.op-research and is
     mostly interesting to read.  It is also  available  from  http://www-
     unix.mcs.anl.gov/otc/Guide/faq/linear-programming-faq.html )

     Gradient  strategies  are  no longer tied to these linear worlds, but
     they smooth their world by exploiting the objective function's  first
     partial  derivatives  one  has to supply in advance. Therefore, these
     algorithms rely on a locally linear internal model of the black  box.

     Newton   strategies   additionally   require   the   second   partial
     derivatives, thus building a quadratic internal model.  Quasi-Newton,
     conjugate  gradient  and  variable metric strategies approximate this
     information during the search.

     The deterministic  strategies  mentioned  so  far  cannot  cope  with
     deteriorations,  so  the search will stop if anticipated improvements
     no longer occur. In a multimodal ENVIRONMENT  these  algorithms  move
     "uphill"  from their respective starting points. Hence, they can only
     converge to the next local optimum.
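
     This behaviour is easy to reproduce: the sketch below runs plain
     gradient ascent on a hypothetical bimodal function and converges to
     whichever peak lies "uphill" of its starting point.  The function,
     step size and iteration count are illustrative choices, not from
     the text.

```python
def f(x):
    """A bimodal objective: a local peak near x = -1.09 and the
    global peak near x = 1.84."""
    return -x**4 + x**3 + 4 * x**2

def df(x):
    """First derivative, which a gradient strategy must be given."""
    return -4 * x**3 + 3 * x**2 + 8 * x

def gradient_ascent(x, step=0.01, iters=2000):
    """Move 'uphill' along the derivative until improvements stop."""
    for _ in range(iters):
        x += step * df(x)
    return x

peak_from_left  = gradient_ascent(-2.0)   # stops at the local peak
peak_from_right = gradient_ascent(0.5)    # happens to find the global one
```

     Which optimum is found depends entirely on the starting point; the
     search never crosses the valley between the two peaks.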

     Newton-Raphson-methods might even diverge if  a  discrepancy  between
     their  internal assumptions and reality occurs.  But of course, these
     methods turn out to  be  superior  if  a  given  task  matches  their
     requirements.  Not relying on derivatives, polyeder strategy, pattern
     search and rotating coordinate search should also be  mentioned  here
     because  they  represent  robust  non-linear  optimization algorithms
     (Schwefel 81).

     Dealing with technical optimization problems, one will rarely be able
     to write down the objective function in a closed form.  We often need
     a SIMULATION model in order to grasp reality.  In general, one cannot
     even   expect   these   models   to  behave  smoothly.  Consequently,
     derivatives do not exist. That is why  optimization  algorithms  that
     can  successfully  deal  with  black  box-type  situations  have been
     developed. The increasing applicability is of course paid  for  by  a
     loss  of  "convergence  velocity,"  compared  to algorithms specially
     designed for the given problem.  Furthermore, the guarantee  to  find
     the global optimum no longer exists!
But why turn to nature when looking for more powerful algorithms?
     In the attempt to create tools for various purposes, mankind has
     copied, more often instinctively than ingeniously, solutions
     invented by nature.  Nowadays, one can prove in some cases that
     certain forms
     or structures are not only well adapted to their ENVIRONMENT but have
     even reached the optimum (Rosen 67). This is due to the fact that the
     laws of nature have remained  stable  during  the  last  3.5  billion
     years.  For  instance,  at branching points the measured ratio of the
     diameters in a system of blood-vessels comes close to the theoretical
     optimum  provided  by  the laws of fluid dynamics (2^-1/3).  This, of
     course, only represents a  limited,  engineering  point  of  view  on
     nature. In general, nature performs adaptation, not optimization.

     The idea to imitate basic principles of natural processes for optimum
     seeking procedures emerged more than three decades  ago  (cf  Q10.3).
     Although  these  algorithms  have  proven  to  be  robust  and direct
     OPTIMIZATION tools, it is only in the last five years that they  have
     caught  the researchers' attention. This is due to the fact that many
     people still look at organic EVOLUTION as a giantsized game of  dice,
     thus  ignoring  the  fact  that  this  model of evolution cannot have
     worked: a human germ-cell comprises approximately 50,000 GENEs,  each
     of  which  consists  of about 300 triplets of nucleic bases. Although
     the four existing bases only encode 20 different amino acids,
     20^15,000,000, i.e. circa 10^19,500,000 different GENOTYPEs had to
     be tested in only circa 10^17 seconds, the age of our planet.  So,
     simply
     rolling  the  dice  could  not have produced the diversity of today's
     complex living systems.
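
     The arithmetic behind these figures checks out: 15,000,000 is the
     50,000 genes times 300 triplets from the text, and the base-10
     logarithm of 20 is about 1.301.

```python
import math

genes, triplets_per_gene = 50_000, 300
positions = genes * triplets_per_gene            # 15,000,000 codons
amino_acids = 20

# number of distinct genotypes, expressed as a power of ten:
log10_genotypes = positions * math.log10(amino_acids)
# -> roughly 19,515,000, i.e. the text's "circa 10^19,500,000"

# age of the planet in seconds, as used in the text: ~10^17.
# Even at one genotype per second, only 10^17 of the
# ~10^19,500,000 candidates could ever have been tested.
```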

     Accordingly,  taking  random  samples   from   the   high-dimensional
     parameter  space  of an objective function in order to hit the global
     optimum must fail (Monte-Carlo search). But  by  looking  at  organic
     evolution  as  a  cumulative,  highly  parallel  sieving process, the
     results of which pass on slightly modified into the next  sieve,  the
     amazing   diversity   and  efficiency  on  earth  no  longer  appears
     miraculous. When building a model, the point is to isolate  the  main
     mechanisms  which  have  led  to  today's  world  and which have been
     subjected to evolution themselves.  Inevitably, nature  has  come  up
     with  a  mechanism  allowing  INDIVIDUALs  of one SPECIES to exchange
     parts of their genetic information (RECOMBINATION or CROSSOVER), thus
     being able to meet changing environmental conditions in a better way.
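
     In GA terms, this exchange mechanism is modelled by the crossover
     operator.  A minimal one-point crossover on bit-string genomes (the
     common textbook form, not any particular system's implementation)
     looks like:

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Exchange the tails of two equal-length genomes beyond a random
    cut point, so each offspring carries parts of both parents."""
    assert len(parent_a) == len(parent_b)
    cut = random.randrange(1, len(parent_a))   # cut strictly inside
    return (parent_a[:cut] + parent_b[cut:],
            parent_b[:cut] + parent_a[cut:])

child1, child2 = one_point_crossover("00000000", "11111111")
# each child mixes a prefix of one parent with the suffix of the other
```

     No genetic material is created or lost: the two children together
     contain exactly the genes of the two parents, recombined.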

     Dantzig, G.B.  (1966)  "Lineare  Programmierung  und  Erweiterungen",
     Berlin: Springer. (Linear programming and extensions)

     Kursawe,  F.  (1994) " Evolution strategies: Simple models of natural
     processes?", Revue Internationale de Systemique, France (to  appear).

     Rosen, R. (1967) "Optimality Principles in Biology", London:
     Butterworth.

     Schwefel, H.-P. (1981) "Numerical Optimization of  Computer  Models",
     Chichester: Wiley.

------------------------------

     Copyright  (c) 1993-2000 by J. Heitkoetter and D. Beasley, all rights
     reserved.

     This FAQ may be posted to any USENET newsgroup, on-line  service,  or
     BBS  as  long  as  it  is  posted  in  its entirety and includes this
     copyright statement.  This FAQ may not be distributed  for  financial
     gain.   This  FAQ  may  not  be included in commercial collections or
     compilations without express permission from the author.

End of ai-faq/genetic/part3