Au contraire, I suspect that because biological organisms grow via the
same sorts of processes as the biological environment in which they
live, the organisms' minds are built with **a lot** of implicit bias
that is useful for surviving in that environment...

Some have argued that this kind of bias is **all you need** for evolution...
see "Evolution without Selection" by A. Lima de Faria.  I think that is
wrong, but it's interesting that there's enough evidence to even try to
make the argument...

ben g

On Tue, Oct 28, 2008 at 2:37 PM, Ed Porter <[EMAIL PROTECTED]> wrote:

> It appears to me that the assumptions about initial priors used by a
> self-learning AGI, or by an evolutionary line of AGIs, could be quite minimal.
>
> My understanding is that once a probability estimate starts receiving
> random samples from the underlying distribution, the effect of the
> original prior is rapidly washed out, unless the prior is a rather
> extreme one.  Such problematic priors would get selected against
> quickly by evolution.  Evolution would tend to tune for the priors most
> appropriate for the success of subsequent generations (either for
> further computing in the same system, if it is capable of enough
> change, or in descendant systems).  Probably the best priors would
> generally be ones that could be trained moderately rapidly by data.
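>
> For instance, here is a minimal sketch in Python (the coin bias of 0.7
> and the particular Beta priors are purely illustrative assumptions)
> showing how estimates that start from very different priors converge
> once data arrives:
>
>     import random
>
>     random.seed(0)
>     true_p = 0.7                      # unknown bias of the "environment"
>     priors = {"uniform":   (1, 1),    # Beta(1,1): no preference
>               "skeptical": (1, 20),   # strongly expects a low rate
>               "confident": (20, 1)}   # strongly expects a high rate
>
>     counts = {name: list(ab) for name, ab in priors.items()}
>     for n in range(1, 1001):
>         x = 1 if random.random() < true_p else 0
>         for name in counts:
>             counts[name][0] += x
>             counts[name][1] += 1 - x
>         if n in (10, 100, 1000):
>             ests = {name: a / (a + b) for name, (a, b) in counts.items()}
>             print(n, {k: round(v, 3) for k, v in ests.items()})
>
> After a few hundred samples all three estimates sit near 0.7, whichever
> prior they started from.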
>
> So it seems an evolutionary system or line of systems could initially
> learn priors without assuming anything about them beyond a random
> initial choice.  Over time and over multiple generations it might
> develop hereditary priors, and perhaps even different hereditary priors
> for the parts of its network connected to different inputs, outputs, or
> internal controls.
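>
> As a toy illustration of priors themselves being selected (the fitness
> rule, population size, and mutation scheme below are arbitrary
> assumptions, not a model of any particular system):
>
>     import math
>     import random
>
>     random.seed(1)
>     true_p = 0.8
>     # a population of candidate Beta priors (a, b), picked at random
>     pop = [(random.uniform(0.1, 30), random.uniform(0.1, 30))
>            for _ in range(20)]
>
>     def fitness(prior, trials=50):
>         # log-score a prior by how well it predicts fresh data while learning
>         a, b = prior
>         score = 0.0
>         for _ in range(trials):
>             x = 1 if random.random() < true_p else 0
>             p_pred = a / (a + b)
>             score += math.log(p_pred if x else 1 - p_pred)
>             a, b = a + x, b + (1 - x)
>         return score
>
>     for gen in range(30):
>         pop.sort(key=fitness, reverse=True)
>         survivors = pop[:10]
>         # offspring are mutated copies of the survivors
>         pop = survivors + [(max(0.1, a + random.gauss(0, 1)),
>                             max(0.1, b + random.gauss(0, 1)))
>                            for a, b in survivors]
>
>     a, b = pop[0]
>     print("evolved prior mean:", round(a / (a + b), 2))  # typically near 0.8
>
> Over the generations the surviving priors tend to encode the statistics
> of the environment they were selected in.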
>
> The use of priors in an AGI could be greatly improved by having a
> gen/comp hierarchy in which the priors for a given concept's models
> could be inherited from the priors of sets of models for similar
> concepts, and in which the appropriate set of priors could change with
> context.  It would also seem that the notion of a prior could be
> improved by blending information from episodic and probabilistic
> models.
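>
> A rough sketch of that kind of inheritance (the concept names, the
> feature counts, and the pseudo-count strength are made-up assumptions):
> a new concept starts from a prior pooled from similar concepts rather
> than from a flat one:
>
>     # observed (has_tail, no_tail) counts for concepts similar to a new one
>     similar = {"dog": (45, 5), "wolf": (38, 2), "fox": (30, 4)}
>
>     pool_a = sum(a for a, b in similar.values())
>     pool_b = sum(b for a, b in similar.values())
>     strength = 10.0   # how much weight the inherited prior carries
>     prior_a = strength * pool_a / (pool_a + pool_b)
>     prior_b = strength * pool_b / (pool_a + pool_b)
>
>     # a brand-new concept ("dingo") with only three observations of its own
>     obs_yes, obs_no = 2, 1
>     mean = (prior_a + obs_yes) / (prior_a + prior_b + obs_yes + obs_no)
>     print(round(mean, 2))   # pulled toward the category prior, not just 2/3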
>
> It would appear that, in almost any generally intelligent system, the
> ability to approximate reality well enough for evolutionary success,
> using the most efficient representations available, would be greatly
> preferred by evolution, because it would let a system model more of its
> environment sufficiently well with whatever modeling capacity it
> currently has.
>
> So, although a completely accurate description of virtually anything
> may not have much use for Occam's Razor, a practically useful
> representation often will.  It seems to me that Occam's Razor is
> oriented more toward deriving meaningful generalizations than toward
> exact descriptions of anything.
>
> Furthermore, it would seem to me that a simpler set of preconditions is
> generally more probable than a more complex one, because it requires
> less coincidence.  It would seem to me that this would hold under most
> random sets of priors for the probabilities of the possible component
> sets involved, combined with Occam's Razor-type selection.
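>
> A crude way to see the "less coincidence" point (the independence
> assumption and the 0.5 per-condition probability are purely
> illustrative):
>
>     # if each independent precondition holds with probability p, a
>     # hypothesis needing k of them to coincide has prior probability p**k
>     p = 0.5
>     for k in (1, 2, 5, 10):
>         print(k, "preconditions ->", p ** k)
>
> Each extra required coincidence multiplies the prior probability down,
> which is the sense in which the simpler set of preconditions wins.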
>
> These are the musings of an untrained mind, since I have not spent much
> time studying philosophy, because such a high percentage of it was so
> obviously stupid (such as what was commonly said when I was young, that
> you can't have intelligence without language), and my understanding of
> math is much less than that of many on this list.  But nonetheless I
> think much of what I have said above is true.
>
> I think its gist is not totally dissimilar to what Abram has said.
>
> Ed Porter
>
>
>
>
> -----Original Message-----
> From: Pei Wang [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, October 28, 2008 3:05 PM
> To: agi@v2.listbox.com
> Subject: Re: [agi] Occam's Razor and its abuse
>
>
> Abram,
>
> I agree with your basic idea in the following, though I usually put it
> in a different form.
>
> Pei
>
> On Tue, Oct 28, 2008 at 2:52 PM, Abram Demski <[EMAIL PROTECTED]>
> wrote:
> > Ben,
> >
> > You assert that Pei is forced to make an assumption about the
> > regularity of the world to justify adaptation.  Pei could also make a
> > different argument.  He could try to show that *if* a strategy exists
> > that can be implemented given the finite resources, NARS will
> > eventually find it.  Thus, adaptation is justified on a sort of "we
> > might as well try" basis.  (The proof would involve showing that NARS
> > searches the space of finite-state machines that can be implemented
> > with the resources at hand, and is more likely to stay for longer
> > periods of time in configurations that give more reward, such that
> > NARS would eventually settle on a configuration if that configuration
> > consistently gave the highest reward.)
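> >
> > (A minimal sketch of that kind of dynamic in Python -- the three
> > configurations, their rewards, and the switching rule are invented
> > for illustration, and this is of course not NARS itself:)
> >
> >     import random
> >
> >     random.seed(2)
> >     rewards = {"A": 0.2, "B": 0.5, "C": 0.9}   # hypothetical configs
> >     current = "A"
> >     time_in = {c: 0 for c in rewards}
> >
> >     for step in range(10000):
> >         time_in[current] += 1
> >         r = rewards[current] + random.gauss(0, 0.1)  # noisy reward
> >         # higher reward -> less likely to abandon this configuration
> >         if random.random() > r:
> >             current = random.choice(list(rewards))
> >
> >     print(time_in)   # most time is spent in the best configuration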
> >
> > So, some form of learning can take place with no assumptions.  The
> > problem is that the search space is exponential in the resources
> > available, so there is some resource level at which the system would
> > perform best (because the amount of resources matches the problem),
> > while giving the system more resources would actually hurt performance
> > (because it would then have to search an unnecessarily large space).
> > So, in this sense, the system's behavior seems counterintuitive: it
> > does not seem to be taking advantage of the increased resources.
> >
> > I'm not claiming NARS would have that problem, of course.... just that
> > a theoretical no-assumption learner would.
> >
> > --Abram
> >
> > On Tue, Oct 28, 2008 at 2:12 PM, Ben Goertzel <[EMAIL PROTECTED]>
> > wrote:
> >>
> >>
> >> On Tue, Oct 28, 2008 at 10:00 AM, Pei Wang <[EMAIL PROTECTED]>
> >> wrote:
> >>>
> >>> Ben,
> >>>
> >>> Thanks. So the other people now see that I'm not attacking a straw
> >>> man.
> >>>
> >>> My solution to Hume's problem, as embedded in the
> >>> experience-grounded semantics, is to assume no predictability, but
> >>> to justify induction as adaptation. However, it is a separate topic
> >>> which I've explained in my other publications.
> >>
> >> Right, but justifying induction as adaptation only works if the
> >> environment is assumed to have certain regularities which can be
> >> adapted to.  In a random environment, adaptation won't work.  So,
> >> still, to justify induction as adaptation you have to make *some*
> >> assumptions about the world.
> >>
> >> The Occam prior gives one such assumption: that (to give just one
> >> form) sets of observations in the world tend to be producible by
> >> short computer programs.
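> >>
> >> (To make that concrete with a toy sketch -- the hypothesis names and
> >> bit-lengths below are invented purely for illustration -- a prior
> >> that weights each hypothesis by 2**(-program_length) concentrates its
> >> mass on the short programs that remain consistent with the data:)
> >>
> >>     # hypothetical hypotheses: (name, program length in bits, fits data?)
> >>     hypotheses = [("all zeros", 5, True),
> >>                   ("alternating", 8, True),
> >>                   ("lookup table of the data", 40, True),
> >>                   ("all ones", 5, False)]
> >>
> >>     weights = {name: 2.0 ** -length
> >>                for name, length, ok in hypotheses if ok}
> >>     total = sum(weights.values())
> >>     for name, w in weights.items():
> >>         print(name, round(w / total, 4))   # mass goes to short programs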
> >>
> >> For adaptation to successfully carry out induction, *some* property
> >> vaguely comparable to this must hold, and I'm not sure whether you
> >> have articulated which one you assume, or whether you leave this open.
> >>
> >> In effect, you implicitly assume something like an Occam prior,
> >> because you're saying that  a system with finite resources can
> >> successfully adapt to the world ... which means that sets of
> >> observations in the world *must* be approximately summarizable via
> >> subprograms that can be executed within this system.
> >>
> >> So I argue that, even though it's not your preferred way to think
> >> about it, your own approach to AI theory and practice implicitly
> >> assumes some variant of the Occam prior holds in the real world.
> >>>
> >>>
> >>> Here I just want to point out that the original and basic meaning of
> >>> Occam's Razor and those two common (mis)usages of it are not
> >>> necessarily the same. I fully agree with the former, but not the
> >>> latter, and I haven't seen any convincing justification of the
> >>> latter.  Instead, they are often taken for granted, under the name of
> >>> Occam's Razor.
> >>
> >> I agree that the notion of an Occam prior is a significant conceptual
> >> step beyond the original "Occam's Razor" precept enunciated long ago.
> >>
> >> Also, I note that, for those who posit the Occam prior as a **prior
> >> assumption**, there is not supposed to be any convincing
> >> justification for it.  The idea is simply that one must make *some*
> >> assumption (explicitly or implicitly) if one wants to do induction,
> >> and this is the assumption that some people choose to make.
> >>
> >> -- Ben G
> >>
> >>
> >>
> >
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects."  -- Robert Heinlein



