Ben,

I never claimed that NARS is based on no assumptions (call them
biases, if you prefer), only on "truths". It surely is based on
assumptions, and many of them are my own beliefs and intuitions,
which I cannot expect other people to accept any time soon.

However, that does not mean that all assumptions are equally
acceptable, or that as soon as something is called an "assumption",
its author is released from the duty of justifying it.

Going back to the original topic: since "the simplicity/complexity of
a description is correlated with its prior probability" is a core
assumption of certain research paradigms, it should be justified.
Calling it "Occam's Razor", so as to suggest that it is self-evident,
is not the proper way to do that job. This is all I want to argue in
this discussion.

Pei

On Wed, Oct 29, 2008 at 12:10 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> But, NARS as an overall software system will perform more effectively
> (i.e., learn more rapidly) in some environments than in others, for a
> variety of reasons.  There are many biases built into the NARS
> architecture in various ways ... it's just not easy to spell out what
> they are, because the NARS system was not explicitly designed based on
> that sort of thinking...
>
> The same is true of every other complex AGI architecture...
>
> ben g
>
>
> On Wed, Oct 29, 2008 at 12:07 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
>>
>> Ed,
>>
>> When NARS extrapolates its past experience to the present and the
>> future, it is indeed based on the assumption that its future
>> experience will be similar to its past experience (otherwise any
>> prediction would be equally valid). However, it does not assume that
>> the world can be captured by any specific mathematical model, such
>> as a Turing machine or a probability distribution defined on a
>> propositional space.
>>
>> Concretely speaking, when a statement S has been tested N times and
>> found true M times and false N-M times, NARS's "expectation value"
>> for it to be true in the next test is E(S) = (M+0.5)/(N+1) [if there
>> is no other relevant knowledge], and the system will use this value
>> to decide whether to accept a bet on S. However, neither the system
>> nor its designer assumes that there is a "true probability" for S to
>> occur, of which the above expectation is an approximation. Nor is it
>> assumed that E(S) will converge as the testing of S continues.
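
A minimal sketch of that expectation rule in Python; the function names
and the bet-decision threshold are illustrative assumptions, not part of
Pei's specification:

    def expectation(m, n):
        """NARS-style expectation for a statement tested n times and
        found true m times: E(S) = (m + 0.5) / (n + 1). With no
        evidence (n = 0) this is 0.5 -- maximal ignorance, not an
        estimate of any 'true probability'."""
        return (m + 0.5) / (n + 1)

    def accept_bet(m, n, payoff_ratio):
        """Illustrative decision rule (an assumption, not NARS's own):
        accept a bet on S when the expectation exceeds the break-even
        probability for a payoff of payoff_ratio to 1."""
        return expectation(m, n) > 1.0 / (1.0 + payoff_ratio)

    print(expectation(7, 10))      # S true 7 of 10 times -> 7.5/11 ~ 0.68
    print(accept_bet(7, 10, 1.0))  # even-money bet -> True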
>>
>> Pei
>>
>>
>> On Wed, Oct 29, 2008 at 11:33 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
>> > Pei,
>> >
>> > My understanding is that when you reason from data, you often want
>> > the ability to extrapolate, which requires some sort of assumption
>> > about the type of mathematical model to be used.  How do you deal
>> > with that in NARS?
>> >
>> > Ed Porter
>> >
>> > -----Original Message-----
>> > From: Pei Wang [mailto:[EMAIL PROTECTED]
>> > Sent: Tuesday, October 28, 2008 9:40 PM
>> > To: agi@v2.listbox.com
>> > Subject: Re: [agi] Occam's Razor and its abuse
>> >
>> >
>> > Ed,
>> >
>> > Since NARS doesn't follow the Bayesian approach, there are no
>> > initial priors to be assumed. If we use a more general term, such
>> > as "initial knowledge" or "innate beliefs", then yes, you can add
>> > them into the system, which will improve the system's performance.
>> > However, they are optional. In NARS, all object-level (i.e., not
>> > meta-level) innate beliefs can be learned by the system afterward.
>> >
>> > Pei
>> >
>> > On Tue, Oct 28, 2008 at 5:37 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
>> >> It appears to me that the assumptions about initial priors used by
>> >> a self-learning AGI or an evolutionary line of AGIs could be quite
>> >> minimal.
>> >>
>> >> My understanding is that once a probability distribution starts
>> >> receiving random samples from its distribution, the effect of the
>> >> original prior is rapidly washed out, unless the prior is a rather
>> >> unusual one.  Such rare, problematic priors would get selected
>> >> against quickly by evolution.  Evolution would tend to tune for the
>> >> priors most appropriate for the success of subsequent generations
>> >> (either for continued computing in the same system, if it is capable
>> >> of enough change, or in descendant systems).  Probably the best
>> >> priors would generally be ones that could be trained moderately
>> >> rapidly by data.
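
A small sketch of the "prior washes out" point, using a Beta-Binomial
update; the particular priors, sample size, and true rate below are
illustrative choices, not taken from Ed's post:

    import random

    def posterior_mean(alpha, beta, successes, failures):
        """Beta(alpha, beta) prior updated by binary data; returns the
        posterior mean of the success probability."""
        return (alpha + successes) / (alpha + beta + successes + failures)

    random.seed(0)
    true_p = 0.7
    samples = [random.random() < true_p for _ in range(1000)]
    s = sum(samples)
    f = len(samples) - s

    # Quite different priors end up with nearly the same posterior mean
    # once enough data has arrived; only an extreme prior (e.g.
    # Beta(10000, 1)) would still dominate at this sample size.
    for a, b in [(1, 1), (10, 1), (1, 10)]:
        print((a, b), round(posterior_mean(a, b, s, f), 3))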
>> >>
>> >> So it seems an evolutionary system or line could initially learn
>> >> priors without any assumptions about them other than an initially
>> >> random choice of priors. Over time and multiple generations it
>> >> might develop hereditary priors, and perhaps even different
>> >> hereditary priors for parts of its network connected to different
>> >> inputs, outputs or internal controls.
>> >>
>> >> The use of priors in an AGI could be greatly improved by having a
>> >> gen/comp hierarchy in which the priors for a given concept's models
>> >> could be inherited from the priors of sets of models for similar
>> >> concepts, and in which the appropriate set of priors could change
>> >> contextually.  It would also seem that the notion of a prior could
>> >> be improved by blending information from episodic and probabilistic
>> >> models.
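
One simple way to read the "inherited priors" idea is partial pooling,
sketched below; the shrinkage form and the parameter names are
hypothetical illustrations, not Ed's proposal:

    def pooled_estimate(concept_obs, similar_concept_mean, prior_strength):
        """Partial pooling: a new concept's estimate starts at the mean
        observed over similar concepts (its inherited prior) and moves
        toward its own data as evidence accumulates."""
        n = len(concept_obs)
        total = sum(concept_obs)
        return (prior_strength * similar_concept_mean + total) / (prior_strength + n)

    # A concept with only 3 observations leans heavily on the inherited
    # prior; one with 300 observations is dominated by its own data.
    print(pooled_estimate([1, 0, 1], similar_concept_mean=0.2, prior_strength=10))
    print(pooled_estimate([1] * 200 + [0] * 100, 0.2, 10))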
>> >>
>> >> It would appear that in almost any generally intelligent system,
>> >> being able to approximate reality, in a manner sufficient for
>> >> evolutionary success, with the most efficient representations would
>> >> be a characteristic greatly preferred by evolution, because it
>> >> would allow systems to model more of their environment sufficiently
>> >> well for evolutionary success with whatever modeling capacity they
>> >> currently have.
>> >>
>> >> So, although a completely accurate description of virtually
>> >> anything may not find much use for Occam's Razor, a practically
>> >> useful representation often will.  It seems to me that Occam's
>> >> Razor is oriented more toward deriving meaningful generalizations
>> >> than toward exact descriptions of anything.
>> >>
>> >> Furthermore, it would seem to me that a simpler set of preconditions
>> >> is generally more probable than a more complex one, because it
>> >> requires less coincidence.  It would seem to me this would be true
>> >> under most random sets of priors for the probabilities of the
>> >> possible sets of components involved, and under Occam's Razor type
>> >> selection.
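
A toy illustration of the "less coincidence" point; the independence
assumption and the 0.1 figure are purely illustrative:

    # If each independent precondition holds with probability 0.1, a
    # hypothesis requiring k of them to hold jointly has probability
    # 0.1**k, so every extra component costs an order of magnitude.
    for k in range(1, 5):
        print(k, 0.1 ** k)   # roughly 0.1, 0.01, 0.001, 0.0001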
>> >>
>> >> These are the musings of an untrained mind, since I have not spent
>> >> much time studying philosophy, because such a high percentage of it
>> >> was so obviously stupid (such as what was commonly said when I was
>> >> young, that you can't have intelligence without language), and my
>> >> understanding of math is much less than that of many on this list.
>> >> But nonetheless I think much of what I have said above is true.
>> >>
>> >> I think its gist is not totally dissimilar to what Abram has said.
>> >>
>> >> Ed Porter
>> >>
>> >>
>> >>
>> >>
>> >> -----Original Message-----
>> >> From: Pei Wang [mailto:[EMAIL PROTECTED]
>> >> Sent: Tuesday, October 28, 2008 3:05 PM
>> >> To: agi@v2.listbox.com
>> >> Subject: Re: [agi] Occam's Razor and its abuse
>> >>
>> >>
>> >> Abram,
>> >>
>> >> I agree with your basic idea in the following, though I usually put
>> >> it in a different form.
>> >>
>> >> Pei
>> >>
>> >> On Tue, Oct 28, 2008 at 2:52 PM, Abram Demski <[EMAIL PROTECTED]>
>> >> wrote:
>> >>> Ben,
>> >>>
>> >>> You assert that Pei is forced to make an assumption about the
>> >>> regularity of the world to justify adaptation. Pei could also make
>> >>> a different argument. He could try to show that *if* a strategy
>> >>> exists that can be implemented given the finite resources, NARS
>> >>> will eventually find it. Thus, adaptation is justified on a sort of
>> >>> "we might as well try" basis. (The proof would involve showing that
>> >>> NARS searches the space of finite-state machines that can be
>> >>> implemented with the resources at hand, and is more likely to stay
>> >>> for longer periods of time in configurations that give more reward,
>> >>> such that NARS would eventually settle on a configuration if that
>> >>> configuration consistently gave the highest reward.)
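
A toy rendering of the search Abram sketches in parentheses; the
configuration set, reward values, and leave-probability rule are
hypothetical assumptions, not a description of NARS:

    import random

    def assumption_free_search(configs, reward, steps=10000, seed=0):
        """Wander over a finite set of implementable configurations.
        The chance of leaving the current configuration shrinks as its
        reward grows, so the walk spends ever longer stretches in
        consistently rewarding configurations."""
        rng = random.Random(seed)
        current = rng.choice(configs)
        time_in = {c: 0 for c in configs}
        for _ in range(steps):
            time_in[current] += 1
            if rng.random() > reward(current):   # low reward -> likely to move
                current = rng.choice(configs)
        return time_in

    # Three toy configurations; "C" is consistently the most rewarding,
    # so most of the time ends up being spent there.
    rewards = {"A": 0.2, "B": 0.5, "C": 0.9}
    print(assumption_free_search(list(rewards), lambda c: rewards[c]))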
>> >>>
>> >>> So, some form of learning can take place with no assumptions. The
>> >>> problem is that the search space is exponential in the resources
>> >>> available, so there is some maximum point where the system would
>> >>> perform best (because the amount of resources matches the problem),
>> >>> but giving the system more resources would hurt performance
>> >>> (because the system then searches an unnecessarily large space).
>> >>> So, in this sense, the system's behavior seems counterintuitive --
>> >>> it does not seem to be taking advantage of the increased resources.
>> >>>
>> >>> I'm not claiming NARS would have that problem, of course.... just
>> >>> that a theoretical no-assumption learner would.
>> >>>
>> >>> --Abram
>> >>>
>> >>> On Tue, Oct 28, 2008 at 2:12 PM, Ben Goertzel <[EMAIL PROTECTED]>
>> >>> wrote:
>> >>>>
>> >>>>
>> >>>> On Tue, Oct 28, 2008 at 10:00 AM, Pei Wang <[EMAIL PROTECTED]>
>> >>>> wrote:
>> >>>>>
>> >>>>> Ben,
>> >>>>>
>> >>>>> Thanks. So the other people now see that I'm not attacking a straw
>> >>>>> man.
>> >>>>>
>> >>>>> My solution to Hume's problem, as embedded in the
>> >>>>> experience-grounded semantics, is to assume no predictability, but
>> >>>>> to justify induction as adaptation. However, it is a separate topic
>> >>>>> which I've explained in my other publications.
>> >>>>
>> >>>> Right, but justifying induction as adaptation only works if the
>> >>>> environment is assumed to have certain regularities which can be
>> >>>> adapted to.  In a random environment, adaptation won't work.  So,
>> >>>> still, to justify induction as adaptation you have to make *some*
>> >>>> assumptions about the world.
>> >>>>
>> >>>> The Occam prior gives one such assumption: that (to give just one
>> >>>> form) sets of observations in the world tend to be producible by
>> >>>> short computer programs.
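
A toy rendering of that form of the Occam prior; here "description
length" is just the length of a bit string standing in for a program, a
deliberate simplification of the Solomonoff-style construction:

    from math import fsum

    def occam_prior(hypotheses):
        """Assign each hypothesis the unnormalized weight 2**(-length),
        so shorter descriptions receive exponentially more prior mass,
        then normalize over the given candidates."""
        weights = {h: 2.0 ** -len(h) for h in hypotheses}
        total = fsum(weights.values())
        return {h: w / total for h, w in weights.items()}

    # Bit-string "programs" standing in for candidate explanations of
    # the same observations: the shortest gets most of the prior mass.
    print(occam_prior(["01", "0110", "01101001"]))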
>> >>>>
>> >>>> For adaptation to successfully carry out induction, *some* vaguely
>> >>>> comparable property to this must hold, and I'm not sure if you have
>> >>>> articulated which one you assume, or if you leave this open.
>> >>>>
>> >>>> In effect, you implicitly assume something like an Occam prior,
>> >>>> because you're saying that  a system with finite resources can
>> >>>> successfully adapt to the world ... which means that sets of
>> >>>> observations in the world *must* be approximately summarizable via
>> >>>> subprograms that can be executed within this system.
>> >>>>
>> >>>> So I argue that, even though it's not your preferred way to think
>> >>>> about it, your own approach to AI theory and practice implicitly
>> >>>> assumes some variant of the Occam prior holds in the real world.
>> >>>>>
>> >>>>>
>> >>>>> Here I just want to point out that the original and basic meaning
>> >>>>> of Occam's Razor and those two common (mis)usages of it are not
>> >>>>> necessarily the same. I fully agree with the former, but not the
>> >>>>> latter, and I haven't seen any convincing justification of the
>> >>>>> latter. Instead, they are often taken for granted, under the name
>> >>>>> of Occam's Razor.
>> >>>>
>> >>>> I agree that the notion of an Occam prior is a significant
>> >>>> conceptual step beyond the original "Occam's Razor" precept
>> >>>> enunciated long ago.
>> >>>>
>> >>>> Also, I note that, for those who posit the Occam prior as a **prior
>> >>>> assumption**, there is not supposed to be any convincing
>> >>>> justification for it.  The idea is simply that one must make *some*
>> >>>> assumption (explicitly or implicitly) if one wants to do induction,
>> >>>> and this is the assumption that some people choose to make.
>> >>>>
>> >>>> -- Ben G
>> >>>>
>> >>>>
>> >>>>
>> >>>
>> >>>
>> >>>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> >
>> >
>> >
>>
>>
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "A human being should be able to change a diaper, plan an invasion, butcher
> a hog, conn a ship, design a building, write a sonnet, balance accounts,
> build a wall, set a bone, comfort the dying, take orders, give orders,
> cooperate, act alone, solve equations, analyze a new problem, pitch manure,
> program a computer, cook a tasty meal, fight efficiently, die gallantly.
> Specialization is for insects."  -- Robert Heinlein
>
>

