All very good points, Chris.

On Thu, Sep 5, 2013 at 10:27 AM, Chris Warburton
<chriswa...@googlemail.com> wrote:

> David Barbour <dmbarb...@gmail.com> writes:
>
> > I agree we can gain some inspiration from life: genetic programming,
> > neural networks, the development of robust systems in terms of reactive
> > cycles, a focus on adaptive rather than abstractive computation.
> >
> > But it's easy to forget that life had millions or billions of years
> > to get where it's at, that it has burned through materials, and that
> > it fails to recognize the awesomeness of many of the really cool
> > 'programs' it has created (like Wolfgang Amadeus Mozart ;).
>
> Artificial neural networks and genetic programming are often grouped
> together, e.g. as "nature-inspired optimisation", but it's important to
> keep in mind that their natural counterparts work on very different
> timescales. Neural networks can take a person's lifetime to become
> proficient at some task, but genetics can take a planet's lifetime ;)
> (of course, there has been a lot of overlap as brains are the product of
> evolution and organisms must compete in a world full of brains).
>
> > A lot of logic must be encoded in the heuristic to evaluate some
> > programs as better than others. It can be difficult to recognize
> > value that one did not anticipate finding. It can be difficult to
> > recognize how a particular mutation might evolve into something
> > great, especially if it causes problems in the short term. The
> > search space is unbelievably large, and it can take a long time to
> > examine it.
>
> There is interesting work going on in "artificial curiosity", where the
> regular reward/fitness/reinforcement signal is treated as "external",
> but there is also an "internal" reward, usually based on finding new
> patterns and learning how to predict/compress them. In theory this
> rewards a system for learning more about its domain, regardless of
> whether it leads to an immediate increase in the given fitness function.
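>
> A rough Python sketch of the shape of that (purely illustrative: the
> zlib-based "novelty" bonus below is a crude stand-in for real
> compression progress, and the weighting is arbitrary):
>
>     import zlib
>
>     def curiosity_bonus(history: bytes, observation: bytes) -> float:
>         # Approximate novelty as the extra compressed size the new
>         # observation adds on top of what has already been seen: a
>         # pattern we can already compress/predict adds little.
>         before = len(zlib.compress(history))
>         after = len(zlib.compress(history + observation))
>         return float(after - before)
>
>     def total_reward(external: float, history: bytes,
>                      observation: bytes, weight: float = 0.1) -> float:
>         # The given ("external") fitness plus a weighted "internal"
>         # reward for encountering something new.
>         return external + weight * curiosity_bonus(history, observation)
>
>     # A repeated observation earns a smaller bonus than a novel one.
>     seen = b"abcabcabc" * 10
>     print(total_reward(1.0, seen, b"abcabcabc"))
>     print(total_reward(1.0, seen, b"xyzquvwrs"))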
>
> There are some less drastic departures from GP, like Fitness Uniform
> Optimisation, which values population diversity rather than high
> fitness: we only need one fit individual; the rest can explore.
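>
> The selection rule itself is tiny. A sketch of the Fitness Uniform
> Selection Scheme in Python (individuals can be anything the fitness
> function accepts; everything else here is illustrative):
>
>     import random
>
>     def fitness_uniform_select(population, fitness):
>         # Sample a fitness *level* uniformly between the worst and best
>         # in the population, then return the individual closest to it.
>         # Unlike "pick the fittest", this keeps low- and mid-fitness
>         # individuals in play, preserving diversity.
>         values = [fitness(x) for x in population]
>         target = random.uniform(min(values), max(values))
>         i = min(range(len(population)), key=lambda j: abs(values[j] - target))
>         return population[i]
>
>     # Toy usage: integers whose fitness is their own value.
>     population = [1, 2, 3, 50, 51, 100]
>     print([fitness_uniform_select(population, lambda x: x) for _ in range(5)])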
>
> Bayesian Exploration is also related: it addresses the
> exploration/exploitation problem explicitly by assuming that a more-fit
> solution exists and choosing the next candidate with the highest
> expected fitness (this is known as 'optimism'); a toy version is
> sketched below.
>
> These algorithms attempt to value unique/novel solutions, which may
> contribute to solving 'deceptive' problems, where high-fitness solutions
> may be surrounded by low-fitness ones.
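>
> A toy version of the optimistic rule (my own simplification, not any
> particular paper's formulation; the exploration constant c is
> arbitrary):
>
>     import math
>     import random
>
>     def optimistic_choice(stats, total_trials, c=1.4):
>         # Pick the candidate with the best *optimistic* estimate:
>         # observed mean fitness plus a bonus that shrinks as evidence
>         # accumulates. Untried candidates are assumed to be better.
>         def score(candidate):
>             n, mean = stats[candidate]
>             if n == 0:
>                 return float("inf")
>             return mean + c * math.sqrt(math.log(total_trials + 1) / n)
>         return max(stats, key=score)
>
>     # Toy usage: three candidates with hidden true fitnesses and noisy
>     # evaluations; 'b' should attract the most evaluations over time.
>     true_fitness = {"a": 0.3, "b": 0.6, "c": 0.5}
>     stats = {k: (0, 0.0) for k in true_fitness}
>     for t in range(200):
>         pick = optimistic_choice(stats, t)
>         observed = true_fitness[pick] + random.gauss(0, 0.1)
>         n, mean = stats[pick]
>         stats[pick] = (n + 1, mean + (observed - mean) / (n + 1))
>     print(stats)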
>
> > It isn't a matter of life being 'inefficient'. It's that, if we want
> > to use this 'genetic programming' technique that life used to create
> > cool things like Mozart, we need to be vastly more efficient than
> > life at searching the spaces, developing value, recognizing how small
> > things might contribute to a greater whole and thus should be
> > preserved. In practice, this will often require very special-purpose
> > applications - e.g. "genetic programming for the procedural
> > generation of cities in a video game" might use a completely
> > different set of primitives than "genetic programming for the facial
> > structures and preferred behaviors/habits of NPCs" (and it still
> > wouldn't be easy to decide whether a particular habit contributes
> > value).
>
> You're dead right, but at the same time these kinds of situations make
> me instinctively want to go up a level and solve the meta-problem. If I
> were programming Java, I'd want a geneticProgrammingFactory ;)
>
> > Machine code - by which I mean x86 code and similar - would be a
> > terribly inefficient way to obtain value using genetic programming.
> > It is far too fragile (breaks easily under minor mutations), too
> > fine-grained (resulting in a much bigger search space), and far too
> > difficult to evaluate.
>
> True. 'Optimisation' is often seen as the quest to get closer to machine
> code, when actually there are potentially bigger gains to be had by
> working at a level where we know enough about our code to eliminate lots
> of it. For example, all of the "fusion" work going on in Haskell, or even
> something as everyday as constant folding. Whilst humans can scoff that
> 'real' programmers would have written their assembly with all of these
> optimisations already applied, it's far more likely that auto-generated
> code will be full of such high-level optimisation opportunities. For
> example, we could evolve programs using an interpreter until they reach
> a desired fitness, then compile the best solution with a
> highly-aggressive optimising compiler for use in production.
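>
> As a sketch of that workflow in Python (the mutate/fitness functions
> and the (1+n) strategy below are stand-ins; the point is only the
> overall shape: evolve cheaply, then hand the winner to a compiler):
>
>     import random
>
>     def evolve(population, fitness, mutate, target, generations=1000):
>         # Search with cheap interpreter-style evaluation; stop once the
>         # best individual reaches the target fitness.
>         for _ in range(generations):
>             best = max(population, key=fitness)
>             if fitness(best) >= target:
>                 break
>             population = [best] + [mutate(best)
>                                    for _ in range(len(population) - 1)]
>         return max(population, key=fitness)
>
>     # Toy problem: evolve a bit string towards all-ones. The winner is
>     # where an aggressive optimising compiler would then be invoked.
>     fitness = sum
>     mutate = lambda bits: [b ^ (random.random() < 0.05) for b in bits]
>     population = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]
>     winner = evolve(population, fitness, mutate, target=32)
>     print("compile for production:", winner)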
>
> > Though, we could potentially create a virtual-machine code suitable for
> > genetic programming.
>
> This will probably be the best option for most online adaptation, where
> the system continues to learn over the course of its life. The search
> must use high-level code to be efficient, but compiling every candidate
> when most will only be run once usually won't be worth it.
>
> The counter-examples are problems where evaluation takes long enough to
> offset the initial compilation cost, and problems where compiler
> optimisations can have a significant effect on fitness (e.g. heavy
> time-dependence).
>
> It is also possible to have a hybrid, where we compile/optimise
> solutions when they remain the fittest for some length of time. This is
> similar to Hutter Search, which is a hypothetical online learner with
> one process that runs the current-best solution and another process
> which tries to find a better solution, replacing the first process if it
> does so.
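>
> A minimal rendering of that hybrid in Python (only the
> replace-when-better rule matters; the names, the toy fitness, and the
> use of a thread are illustrative, and a real system would
> compile/optimise the promoted solution):
>
>     import random
>     import threading
>     import time
>
>     class Hybrid:
>         def __init__(self, solution, fitness):
>             self.solution, self.fitness = solution, fitness
>             self.lock = threading.Lock()
>
>         def run(self, x):
>             # Serve requests with the current-best (incumbent) solution.
>             with self.lock:
>                 return self.solution(x)
>
>         def search(self, candidates, seconds=1.0):
>             # Background process: keep looking for something fitter and
>             # promote it if found (in practice, compile/optimise here).
>             deadline = time.time() + seconds
>             while time.time() < deadline:
>                 candidate = random.choice(candidates)
>                 if self.fitness(candidate) > self.fitness(self.solution):
>                     with self.lock:
>                         self.solution = candidate
>
>     # Toy usage: "solutions" are functions scored on how close they get
>     # to doubling their input.
>     fitness = lambda f: -abs(f(21) - 42)
>     hybrid = Hybrid(lambda x: x + 1, fitness)
>     searcher = threading.Thread(
>         target=hybrid.search,
>         args=([lambda x: x * 2, lambda x: x + 10, lambda x: x - 3],))
>     searcher.start()
>     searcher.join()
>     print(hybrid.run(21))   # 42 once the better solution was promoted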
>
> Cheers,
> Chris
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
