--- Jef Allbright <[EMAIL PROTECTED]> wrote:

> On 7/1/07, Tom McCabe <[EMAIL PROTECTED]>
> wrote:
> >
> > --- Jef Allbright <[EMAIL PROTECTED]> wrote:
> >
> > > For years I've observed and occasionally participated in these
> > > discussions of humans (however augmented and/or organized)
> > > vis-à-vis volitional superintelligent AI, and it strikes me as
> > > quite significant, and telling of our understanding of our own
> > > nature, that rarely if ever is there expressed any consideration
> > > of the importance of the /coherence/ of goals expressed within a
> > > given context -- presumably the AI operating within a much wider
> > > context than the human(s).
> > >
> > > There's a common presumption that agents must act to maximize
> > > some supergoal, but this conception lacks a model for the
> > > supercontext defining the expectations necessary for any such
> > > goal to be meaningful.
> >
> > "Supercontext"? "Meaningful"? What does that even
> > mean?
> 
> Meaning requires context.

Again, I'm not sure what you mean by "meaning" here.

> Goal / Context : Supergoal / Supercontext.

Why should a supergoal require a different context
than the goal? They're the same kind of structure;
shouldn't they share the same interpreter?

> Duh.  I've gotten the impression that you aren't
> even trying to grasp
> the understandings of others.

I honestly didn't know what you meant. I wasn't familiar with the
term, so I Googled it to see whether it was in common usage. The
first result was pseudoscientific nonsense, and the second was a
philosophy paper. Thus, I decided to ask you for clarification.

>  Please put down your
> BB gun.
> 
> > >  In the words of Cosmides and Tooby, [adaptive agents] are not
> > > fitness maximizers, but adaptation executors.
> >
> > Yes, but they were referring to evolved organisms, not
> > optimization processes in general. There's no reason why an AGI
> > has to act like an evolved organism, blindly following
> > pre-written adaptations.
> 
> Again it's a matter of context.  Just as we humans
> feel that we have
> free will, acting toward our goals, but from an
> external context it is
> quite apparent that we are always only executing our
> programming.

Yes, but it would be much easier for an AGI to alter
its programming to adapt to a new situation than it
would be for a human. An AGI has full access to its
own source code; a human does not. And so if an AGI
wants to radically revamp its architecture for
whatever reason, it can.
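
To make that concrete, here's a minimal Python sketch (invented class
and method names, purely illustrative, not a claim about how a real
AGI would be built) of a program rebinding its own behavior at
runtime, which is roughly the kind of self-access a human brain
doesn't have:

    # Toy self-modification sketch: the agent replaces its own act()
    # method with newly compiled code. All names here are hypothetical.
    class Agent:
        def act(self, situation):
            return "default response to " + situation

        def revamp(self, new_act_source):
            # Compile the new definition and install it over the old one.
            namespace = {}
            exec(new_act_source, namespace)
            type(self).act = namespace["act"]

    agent = Agent()
    print(agent.act("novel situation"))   # default response to novel situation
    agent.revamp(
        "def act(self, situation):\n"
        "    return 'revamped response to ' + situation\n"
    )
    print(agent.act("novel situation"))   # revamped response to novel situation

A program editing one of its own methods is obviously a toy; the
point is only that the inspect-and-rewrite loop is trivially
available to software in a way it isn't to a brain.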

> 
> > > In
> > > a complex
> > > evolving environment,
> >
> > Do you mean evolving in the Darwinian sense or the
> > "changing over time" sense?
> 
> In the broader than Darwinian sense of changing over
> time as a result
> of interactions within a larger context.

Oh, okay.

> > > prediction fails in proportion
> > > to contextual
> > > precision,
> >
> > Again, what does this even mean?
> 
> Yes, that was overly terse.  Predictive precision
> improves with
> generality and degrades with specificity of the
> applicable principles.

Why should this be? The more general a predictive
statement is, the more possible situations it has to
predict over, and so the more chances there are for
some kind of exception. For example, "things fall
down" is a much more general rule than "things within
gravitational fields fall down", and the first rule is
a weaker predictor because we have to tack on
exceptions for freely falling elevators and the like.

>  We can predict, with very high precision, the
> positions of the
> planets due to the very large context over which our
> understanding of
> gravitation principles applies.

The precision of gravity has nothing to do with the large number of
cases in which it applies; it has to do with the fact that the
Universe's physical principles are generally consistent across
situations. If you construct a more specific principle, say "planets
will obey these laws of gravitation only when things are moving at
less than 0.1 c and only when gravitational fields are weaker than
some given strength", it applies only to a limited subset of
situations, but it will also be more accurate, because it never has
to deal with mathematical breakdowns around black hole exotica and
other weird situations.
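
Here's a toy numerical illustration of that point (the example cases
are invented, and Python is used only for concreteness): the rule
restricted to a narrower domain scores better on the cases it
actually claims to cover.

    # Toy cases: does Newtonian gravitation predict correctly here?
    cases = [
        {"desc": "apple near Earth's surface",      "extreme": False, "newton_ok": True},
        {"desc": "Mars's orbit",                    "extreme": False, "newton_ok": True},
        {"desc": "Mercury's perihelion precession", "extreme": True,  "newton_ok": False},
        {"desc": "orbit skimming a black hole",     "extreme": True,  "newton_ok": False},
    ]

    def score(rule_applies):
        covered = [c for c in cases if rule_applies(c)]
        correct = [c for c in covered if c["newton_ok"]]
        return len(correct), len(covered)

    # General rule: "Newtonian gravity holds everywhere."
    print(score(lambda c: True))              # (2, 4) -> right half the time
    # Restricted rule: "Newtonian gravity holds outside extreme regimes."
    print(score(lambda c: not c["extreme"]))  # (2, 2) -> right every time it applies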

> 
> > > so increasing intelligence entails an
> > > increasingly coherent
> > > model of perceived reality,
> > > applied to promotion of
> > > an agent's present
> > > (and evolving) values into the future.
> >
> > Most goal systems are stable under reflection- while an agent
> > might modify itself
> 
> It is the incoherence of statements such as "an
> agent might modify
> itself" that I was addressing.

It is possible to get an agent that modifies itself; e.g., you could
build an agent whose goal was to tile the universe with paperclips
for one second and then commit suicide. It just isn't very likely if
you pick an agent out of a hat.
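
A deliberately silly sketch of such a goal system, just to show that
it's constructible (hypothetical names, Python for concreteness):

    import time

    class PaperclipBurstAgent:
        """Values making paperclips only during its first second, then halts."""
        def __init__(self, lifetime_seconds=1.0):
            self.start = time.monotonic()
            self.lifetime = lifetime_seconds
            self.paperclips = 0

        def step(self):
            if time.monotonic() - self.start < self.lifetime:
                self.paperclips += 1   # pursue the goal while it still holds
                return True
            return False               # goal expired: halt ("commit suicide")

    agent = PaperclipBurstAgent()
    while agent.step():
        pass
    print("halted after", agent.paperclips, "paperclips")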

> 
> > to have different immediate goals, the high-level goal is
> > naturally stable because any modification to it means that the
> > new agent will do things that are less desirable under the
> > original goal than the current agent.
> >
> > > While I agree with you in regard to decoupling
> > > intelligence and any
> > > particular goals, this doesn't mean goals can be
> > > random or arbitrary.
> >
> > Why not?
> >
> > > To the extent that striving toward goals (more realistically:
> > > promotion of values) is supportable by intelligence, the
> > > values-model must be coherent.
> >
> > What the heck is a "values-model"? If its goal system is
> > incoherent, a self-modifying agent will modify itself until it
> > stumbles upon a coherent goal system, at which point the goal
> > system will be stable under reflection and so won't have any
> > incentive to self-modify.
> 
> I hope that my response to Stathis might further
> elucidate.

Er, okay. I read this email first.

> - Jef
> 

 - Tom


 
