Re: [agi] Re: Games for AIs

2002-12-13 Thread Jonathan Standley
Gary Miller wrote:

> People like myself who have pursued the experience and been given small
> tastes of success will tell you unequivocally that if it is not
> endorphins being released, then something even more powerful is at work
> within the brain.

I think it has been fairly well established that endorphins are
involved in these "flow" states; my contention is that conscious sensation
is a result of the change in neural activity patterns caused by
neurotransmitters and other factors, not of the neurotransmitters themselves.
IMO this is important because it generalizes consciousness as a property of
complex dynamic systems such as the brain.


> The interesting thing is that while in this state you perceive the
> intellect as being greatly heightened, with thoughts flowing at an
> extremely accelerated pace and the sense of one's self or being separate
> from everything else is eliminated or greatly diminished.  Mystics who
> devote their lives to the self-inducement of this state are not
> necessarily doing so for just philosophic or religious reasons.  The
> sense of clarity and pleasure experienced during the state may be very
> addictive and may be the basis for the revelatory experiences that
> inspired all modern day religions.  In many cases the experiences are so
> strong that a single experience has been known to cause people to
> completely change the direction of their lives.

I've experienced this state before; it is very powerful...

> While it is difficult to separate the scientific literature from the
> large body of new age and religious hyperbole, there may be an overdrive
> gear that can be triggered in the mind by practice of meditative
> biofeedback.
>
> Should an FAI have a MetaGoal to maximize its own perceived pleasure?
> Since the FAI will need a mechanism to prioritize its internal goal
> states, the external trigger for such a state could be used to
> reprioritize the FAI's goal states, at least during early development,
> to induce it to follow positive modes of thought and stay out of areas
> such as obsessive-compulsive behavior, antisocial behavior, paranoia,
> megalomania, and other states associated with mental illness.

Current research into mental illness does indeed suggest that such disorders
are the result of faulty internal mechanisms which, in a "normal" person,
keep the mind on an even keel.  The Discovery Channel ran a program about
OCD not long ago that profiled a team of researchers who are testing that
hypothesis.


J Standley
http://users.rcn.com/standley/AI/AI.htm

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



RE: [agi] Re: Games for AIs

2002-12-13 Thread Gary Miller
On December 12th Jonathan Standley said

> On a practical note, if the above hypothesis is correct, it would be
> relatively easy to identify the signature patterns of different
> emotions (via PET or fMRI) and emotionally "program" an AI's reward
> structure to ensure that it behaves itself

I would be particularly interested in seeing how different states of
consciousness, i.e. deep meditation and the state referred to as
enlightenment, are reflected at the brain level.

Enlightenment, pursued by many, is characterized as a higher state of
consciousness and is sometimes referred to in the psychological literature
as a peak experience, though I am not sure the two are completely
synonymous.

People like myself who have pursued the experience and been given small
tastes of success will tell you unequivocally that if it is not endorphins
being released, then something even more powerful is at work within the
brain.

The interesting thing is that while in this state you perceive the
intellect as being greatly heightened, with thoughts flowing at an
extremely accelerated pace and the sense of one's self or being separate
from everything else is eliminated or greatly diminished.  Mystics who
devote their lives to the self-inducement of this state are not
necessarily doing so for just philosophic or religious reasons.  The
sense of clarity and pleasure experienced during the state may be very
addictive and may be the basis for the revelatory experiences that
inspired all modern day religions.  In many cases the experiences are so
strong that a single experience has been known to cause people to
completely change the direction of their lives.

While it is difficult to separate the scientific literature from the
large body of new age and religious hyperbole, there may be an overdrive
gear that can be triggered in the mind by practice of meditative
biofeedback.

Should an FAI have a MetaGoal to maximize its own perceived pleasure?
Since the FAI will need a mechanism to prioritize its internal goal
states, the external trigger for such a state could be used to reprioritize
the FAI's goal states, at least during early development, to induce it to
follow positive modes of thought and stay out of areas such as
obsessive-compulsive behavior, antisocial behavior, paranoia, megalomania,
and other states associated with mental illness.  By monitoring the FAI's
long-term goal stack it should be possible to watch the FAI's basic life
philosophy evolve.
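The reprioritization mechanism Gary describes can be sketched as a toy goal stack whose priorities get bumped by an external reward signal. Everything below (class name, goal names, numbers) is an illustrative assumption, not any actual FAI design:

```python
import heapq

class GoalStack:
    """Toy priority queue of goals; an external reward reprioritizes them."""
    def __init__(self):
        self._heap = []  # entries are (negated priority, goal name)

    def add(self, goal, priority):
        heapq.heappush(self._heap, (-priority, goal))

    def reward(self, goal, boost):
        """External 'pleasure' trigger: raise the priority of one goal."""
        self._heap = [(-(-p + boost), g) if g == goal else (p, g)
                      for p, g in self._heap]
        heapq.heapify(self._heap)

    def top(self):
        """Currently highest-priority goal."""
        return self._heap[0][1]

gs = GoalStack()
gs.add("obsessive self-inspection", 5)
gs.add("cooperative behavior", 3)
gs.reward("cooperative behavior", 4)  # trainer endorses this mode of thought
print(gs.top())  # cooperative behavior now outranks the pathological goal
```

Monitoring the "long-term goal stack" in this picture is just reading off the queue's ordering over time.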





Re: [agi] Re: Games for AIs

2002-12-12 Thread Jonathan Standley

Alan,

> [motivation problem].
>
> No, human euphoria is much more than simple neural reinforcement. It is
> a result of special neurochemicals such as dopamine that are released
> when the midbrain is happy about something.

You're right.  I really should have thought that post out a little more
before writing it.
When I said we could remove chemistry from the equation, what I was getting
at was that the presence of endorphins (or cocaine, or any other
pleasure-generating chemical) results in changes in the behavior and
activity of the affected neurons, and what we 'feel' is the shift in
activity patterns.  Without getting too off-topic or philosophical, I was
trying to universalize the phenomenon of "feeling" emotions by saying that
it is not the chemical activity itself we feel.  If one were to stimulate a
given cluster of neurons in a manner that caused them to act exactly as if
they were being influenced by endorphins, I think the subject would 'feel'
exactly the same sensation as if the neurons were 'naturally'
stimulated
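The pattern-not-chemical claim can be illustrated with a toy model: a fixed neuron "cluster" whose firing pattern is a function of total drive only, so a chemical gain change and an equivalent direct stimulation are indistinguishable downstream. The cluster, weights, and `sensation` function are made-up illustrations, not real neurophysiology:

```python
import math

# Fixed toy "cluster": four neurons with hand-picked input weights.
WEIGHTS = [0.3, -0.7, 1.1, 0.5]
BASELINE_INPUT = 1.0

def activity(inp, gain=1.0):
    """Firing pattern of the cluster: depends only on total drive,
    not on how that drive was produced."""
    return [math.tanh(gain * w * inp) for w in WEIGHTS]

def sensation(pattern):
    """'Feeling' here is any function of the pattern alone."""
    return tuple(round(x, 9) for x in pattern)

# Route 1: an endorphin-like modulator raises every neuron's gain.
chemical = activity(BASELINE_INPUT, gain=1.5)
# Route 2: direct stimulation scales the input to produce the same drive.
electrical = activity(BASELINE_INPUT * 1.5, gain=1.0)

# Identical patterns -> identical 'sensation', whatever the route.
print(sensation(chemical) == sensation(electrical))  # True
```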

> You see, the cortex has no opinion about anything whatsoever. It is
> merely a computational matrix. It receives its programming from exactly
> two sources: external stimuli and the midbrain/brainstem (though
> special areas of the cortex are dedicated to doing some of the
> high-level work required by the emotional circuits).
>
> In the brainstem there are special neural networks that generate
> special kinds of "decisions" that I will call "opinions". ;)
>
> When this circuit likes something it gets all happy and sends excitatory
> signals... When it is unhappy it sends inhibitory signals. A particular
> disorder that I have (and many other people have) is depression, where
> excessive inhibitory signals are generated

I have moderate depression w/ an associated sleep disorder; it's one of the
things that originally got me interested in neurology and cog. science.

> I'm still reading and hopefully I'll have some ideas about emotional
> "qualia" and the like.

I'm looking at your website as I write this, you have some fascinating ideas
on there...

J Standley
http://users.rcn.com/standley/AI/AI.htm




Re: [agi] Re: Games for AIs

2002-12-12 Thread Alan Grimes
[motivation problem].

No, human euphoria is much more than simple neural reinforcement. It is
a result of special neurochemicals such as dopamine that are released
when the midbrain is happy about something.

You see, the cortex has no opinion about anything whatsoever. It is
merely a computational matrix. It receives its programming from exactly
two sources: external stimuli and the midbrain/brainstem (though
special areas of the cortex are dedicated to doing some of the
high-level work required by the emotional circuits).

In the brainstem there are special neural networks that generate
special kinds of "decisions" that I will call "opinions". ;)

When this circuit likes something it gets all happy and sends excitatory
signals... When it is unhappy it sends inhibitory signals. A particular
disorder that I have (and many other people have) is depression, where
excessive inhibitory signals are generated.

I'm still reading and hopefully I'll have some ideas about emotional
"qualia" and the like.

-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/




Re: [agi] Re: Games for AIs

2002-12-12 Thread Damien Sullivan
On Thu, Dec 12, 2002 at 01:10:27PM -0500, Michael Roy Ames wrote:

> The idea of putting a baby AI in a simulated world where it might learn
> cognitive skills is appealing.  But I suspect that it will take a huge
> number of iterations for the baby AI to learn the needed lessons in that
> situation.  I think it will be faster to give more constrained and

For calibration, look at how long it takes human babies, with their onboard
superdupercomputers, to learn anything.  Especially if you're not so much of
a Chomskyan, and believe Piaget's developmental track has more to do with
the brain figuring out patterns in the world than with an innate
developmental program... if the brain is figuring out language and physics
(and walking) through advanced statistics and iterated recalibration, well,
it's taking quite a while.  Our piddly little AIs now have their work cut
out for them.

-xx- Damien X-) 




Re: [agi] Re: Games for AIs

2002-12-12 Thread Jonathan Standley
"The idea of putting a baby AI in a simulated world where it might learn
cognitive skills is appealing.  But I suspect that it will take a huge
number of iterations for the baby AI to learn the needed lessons in that
situation"

This is definitely a serious consideration  - one way to overcome this might
be the inclusion of innate behaviors that steer the new mind towards
activities/actions that engender cognitive and emotional development.
Babies instinctively look at faces, reach for objects, and track moving
things w/ their eyes and eventually head and neck.

An AI's innate behaviors could have a built-in reward structure where, for
example, successfully tracking a ball rolled across the simulated "floor"
would reinforce the neural network patterns that produced the desired
behavior.
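A minimal sketch of such a built-in reward structure, assuming a toy two-action agent and a made-up reinforcement rule (nothing here reflects any real architecture):

```python
import random

random.seed(1)

# Toy innate-behavior loop: the agent picks a gaze direction; successfully
# tracking the ball reinforces whichever connection produced that choice.
weights = {"look_left": 0.5, "look_right": 0.5}

def track(ball_side):
    total = sum(weights.values())
    choice = random.choices(list(weights),
                            [w / total for w in weights.values()])[0]
    reward = 1.0 if choice == f"look_{ball_side}" else 0.0
    # Simple reinforcement: pull the chosen connection toward the reward.
    weights[choice] += 0.1 * (reward - weights[choice])
    return reward

for _ in range(200):
    track("right")  # the ball always rolls to the right in this toy world

print(weights["look_right"] > weights["look_left"])  # True: behavior reinforced
```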

On a related note, what is the nature of pleasure (reward)?  Is it simply
the sensation that occurs because of the neural activity/reorganization
that occurs when needs are fulfilled or tasks are completed successfully?
If so, does pleasure correlate with increases in neural efficiency?
Neurons and the networks they make up require a certain amount of
reinforcement to maintain normal functioning (this is a fact, though I wish
I had a reference handy to back up that assertion :).  I'm guessing that
pleasure is caused when reinforcement levels rise above their recent
average.  This would account for the facts that a) practising or doing
something you like is pleasurable, b) pleasure is relative to circumstance,
and c) all forms of pleasure seem to be built upon the same core sensation.
IMO this is important because it takes chemical effects out of the emotion
equation, i.e. chemicals cause pleasure by activating existing
reinforcement mechanisms.  If I'm right, emotions are (at their most basic
level) nothing but patterns in the activity of a neural-network-type
system, which we "feel" because we 'are' the system's activity, not the
system itself...
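The "reinforcement above recent average" hypothesis is easy to sketch; the window size and numbers below are arbitrary illustrations:

```python
from collections import deque

class PleasureModel:
    """Toy version of the hypothesis: 'pleasure' is reinforcement received
    above its own recent running average, so it is relative to circumstance."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def feel(self, reinforcement):
        baseline = (sum(self.history) / len(self.history)
                    if self.history else 0.0)
        self.history.append(reinforcement)
        # Positive -> pleasure, negative -> displeasure.
        return reinforcement - baseline

pm = PleasureModel()
print(pm.feel(1.0))   # 1.0 : a novel reward feels strongly pleasurable
print(pm.feel(1.0))   # 0.0 : the same reward, repeated, stops feeling good
print(pm.feel(2.0))   # 1.0 : only an increase over the baseline registers
```

Note how the second call returns zero: habituation falls out of the baseline for free, which is the "relative to circumstance" property.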

On a practical note, if the above hypothesis is correct, it would be
relatively easy to identify the signature patterns of different emotions
(via PET or fMRI) and emotionally "program" an AI's reward structure to
ensure that it behaves itself.

J Standley
http://users.rcn.com/standley/AI/AI.htm
updated today! see:
http://users.rcn.com/standley/AI/Neural%20Processing.htm
http://users.rcn.com/standley/AI/ISL.htm




[agi] Re: Games for AIs

2002-12-12 Thread Michael Roy Ames
Tony,

Thanks for sharing your ideas (sorry for the erroneous naming of
shape-world).  We seem to agree that the lessons (for an AI) would need to
start off very simple, gradually building up mental tools & techniques by
using many different games to develop slightly different aspects of
cognition.

The idea of putting a baby AI in a simulated world where it might learn
cognitive skills is appealing.  But I suspect that it will take a huge
number of iterations for the baby AI to learn the needed lessons in that
situation.  I think it will be faster to give more constrained and
structured learning first, then when the AI is capable of understanding the
'game world' and the 'game rules' and the 'game interface' it could play
"Video games" with the intention of *discovering* how it all works together.
This is often what humans find interesting about video games: the discovery
aspect.  And this would be a valuable new skill to develop: given this
World, these Rules and this Interface, discover A, B, C, where A, B, C
could be many different things.  E.g.: reach the highest level possible;
stay alive the longest; rack up the most points; rack up the fewest points
without dying...  But this kind of exercise is only going to be useful for
mental development once the AI has very significant capabilities.  I might
almost say that, by the time video-game-playing becomes beneficial to
mental development, the AI would be largely self-directed.

"Hey! Alan!  Wanna play Duke Nukem?"
"Okay Michael.  But I'm going to win this time."
"Oh, really?  How do you know?"
"Well, its a bit complicated to explain.  Why don't I just show you?"
(gulp)

Michael Roy  Ames


Tony Lofthouse wrote:
> Michael,
>
> You wrote:
>
>> Tony Lofthouse: I've heard you are working on the shape-world
>> interface.  Have you considered what games we might play in it?
>> Ideas?
>
> To clarify this point. I am currently developing a 2D input capability
> for Novamente. It is a very crude form of vision that allows the
> presentation of (x, y) time series to the system. This should not be
> confused with the shape-world interface mentioned above. Whilst one
> may lead to the other, shape-world is not the current focus.
>
> Having said this I do have a couple of comments relating to AI games.
> Those of you who have had the opportunity to raise children will no
> doubt be well aware of the fact that children don't play TLoZ (or
> contemporary equivalent) until well into their childhood.
>
> There are many stages of learning before a child is capable of this
> level of sophistication. One of the first games that young children
> play is the categorisation game, i.e. What shape is this?, what
> colour is this?, how many sides?, etc. I would expect to use the 2D
> world and Shape-world subsequently for the same purpose. This is
> followed by the comparison game, i.e. is this big?, is this small?,
> which is bigger?, etc. Then you have the counting game (sort of
> obvious). The relationship game, i.e. above, below, inside, outside.
> There are lots of these types of games!
>
> Then you move on to the reasoning game, i.e. what comes next?, what is
> missing?, what is the odd one out?, etc.
>
> Now the child is ready to combine learning from these different games
> and moves on to story telling both listening to them and then telling
> them.
>
> Then there are several more years of honing these key skills whilst
> increasing the level of world knowledge and social understanding.
>
> Finally the child is ready to play TLoZ!
>
> So as you can see I think there is a lot to do before you get to play
> TLoZ with your baby AGI. That is the purpose of 2d World and then
> Shape-World.
>
> T
>
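Tony's sequence of games reads as an ordered curriculum where mastering each stage gates the next. A rough sketch, with stage names taken from the post and everything else illustrative:

```python
# Illustrative ordering of the games-as-curriculum; each stage gates the next.
curriculum = [
    ("categorisation", ["what shape?", "what colour?", "how many sides?"]),
    ("comparison",     ["is this big?", "which is bigger?"]),
    ("counting",       ["how many?"]),
    ("relationship",   ["above", "below", "inside", "outside"]),
    ("reasoning",      ["what comes next?", "what is missing?", "odd one out?"]),
    ("story telling",  ["listening", "telling"]),
]

def next_stage(mastered):
    """Return the first stage the learner has not yet mastered."""
    for stage, _games in curriculum:
        if stage not in mastered:
            return stage
    return "TLoZ"  # finally ready for The Legend of Zelda

print(next_stage({"categorisation", "comparison"}))  # counting
print(next_stage({s for s, _ in curriculum}))        # TLoZ
```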

