Re: [agi] Re: Games for AIs

2002-12-13 Thread Jonathan Standley
Gary Miller wrote:

> People who have pursued the experience, such as myself, and have been
> given small tastes of success will tell you unequivocally that if it is
> not endorphins that are being released, then there is something even more
> powerful at work within the brain.

I think it has been fairly well established that endorphins are
involved in these "flow" states; my contention is that conscious sensation
is a result of the change in neural activity patterns caused by
neurotransmitters and other factors, not of the neurotransmitters themselves.
IMO this is important because it generalizes consciousness as a property of
complex dynamic systems such as the brain.
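
To make that contention concrete, here is a toy sketch (purely illustrative,
not a model of consciousness or of real neurochemistry): a hypothetical
"modulator level" enters only as a gain parameter on a small random recurrent
network, and what differs between the two runs is the pattern of activity the
gain induces, not anything about the modulator itself. All names and numbers
below are my own assumptions.

# Toy illustration: the "neuromodulator" appears only as a gain g,
# yet what distinguishes the runs is the activity pattern g induces.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                        # number of toy "neurons"
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))   # random recurrent weights

def run(gain, steps=200):
    """Iterate a simple rate network; 'gain' stands in for modulator level."""
    x = rng.normal(0, 0.1, N)                 # small random initial state
    trace = []
    for _ in range(steps):
        x = np.tanh(gain * W @ x)
        trace.append(x.copy())
    return np.array(trace)

low  = run(gain=0.8)    # weak modulation: activity typically dies away
high = run(gain=1.6)    # strong modulation: rich, sustained dynamics

print("mean |activity|, low gain :", np.abs(low[-50:]).mean())
print("mean |activity|, high gain:", np.abs(high[-50:]).mean())

The point of the sketch is only that the observable difference lives in the
dynamics, which is consistent with treating "flow" states as a systems-level
property rather than a chemical one.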


> The interesting thing is that while in this state you perceive the
> intellect as being greatly heightened, with thoughts flowing at an
> extremely accelerated pace, and the sense of one's self, or of being separate
> from everything else, is eliminated or greatly diminished.  Mystics who
> devote their lives to the self-inducement of this state are not
> necessarily doing so for just philosophic or religious reasons.  The
> sense of clarity and pleasure experienced during the state may be very
> addictive and may be the basis for the revelatory experiences that
> inspired all modern day religions.  In many cases the experiences are so
> strong that a single experience has been known to cause people to
> completely change the direction of their lives.

I've experienced this state before, it is very powerful...

> While it is difficult to separate the scientific literature from the
> large body of new age and religious hyperbole, there may be an overdrive
> gear that can be triggered in the mind by practice of meditative
> biofeedback.
>
> Should a FAI have a MetaGoal to maximize its own perceived pleasure?
> Since the FAI will need a mechanism to prioritize its internal goal
> states, the external trigger for such a state could be used to reprioritize
> the FAI's goal states, at least during early development, to induce it to
> follow positive modes of thought and stay out of areas such as
> obsessive-compulsive behavior, antisocial behavior, paranoia, megalomania,
> and other states associated with mental illness.

Current research into mental illness does indeed suggest that such disorders
are the result of faulty internal mechanisms that, in a "normal" person, keep
the mind on an even keel.  The Discovery Channel aired a program about OCD
not long ago that profiled a team of researchers who are testing that
hypothesis.


J Standley
http://users.rcn.com/standley/AI/AI.htm




Re: [agi] Games and Funding

2002-12-13 Thread Shane Legg
> An example: the computer/console entertainment industry (estimated at
> over 20 billion USD) exceeds the movie theatre industry and will soon
> eclipse the movie rental industry as well. The impressive growth rate

I have also read estimates that predict that this will grow to
about $100 billion USD over the next ten years driven in a large
part by rapidly growing Asian markets for computer/console games.

If even just a tiny fraction of this went into developing better game
AI technology, that could still be a significant amount of money.
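
As a rough back-of-envelope check on "a tiny fraction", using only the
figures quoted above (which are themselves estimates):

# Even small fractions of the projected market are large in absolute terms.
market_2012 = 100e9                      # ~USD 100 billion estimate quoted above
for fraction in (0.001, 0.005, 0.01):    # 0.1%, 0.5%, 1%
    print(f"{fraction:.1%} of $100B = ${fraction * market_2012 / 1e6:,.0f} million per year")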

Shane



[agi] Games and Funding

2002-12-13 Thread A M

Hello everyone,

I have been busy the past few weeks, hence the lack of input, but I will
chime in on the funding front. First, I should introduce myself to those
who don't know me (most of you). My name is Abdul Malik and I work for 1000
Planets Inc. (www.1000planets.com), a space development company; we are
trying to develop the infrastructure necessary to enable the development of
space, i.e. launchers and the like. One area of great benefit and importance
(one of NASA's great hopes for cheaper, faster, better space missions) is
A.I., and for us AGI. Now on to the body of the e-mail.

It is fairly obvious that conventional funding for AGI or AGI-related R&D
is virtually non-existent. Most funds available for A.I. research hail from
segments of the government, i.e. the NSF, the military, etc. Moreover, such
groups (with the exception of the military, which raises ethical and moral
concerns) are more sympathetic to narrow-based A.I. research. Narrow-based
A.I. is not as grand as broad-based A.I., but it is also cheaper, relatively
less challenging, more predictable, and more definable than broad-based
A.I. Incremental, stable, and specific development is considered highly
advantageous by all would-be sponsors of A.I.

Why invest millions in A.I. development when no one knows what "true"/
"real" intelligence is, let alone what architecture/working definition/attempt
will lead to it? Hence the proliferation of narrow-A.I. in academia (where
the funding authorities are the ultimate masters), business (for obvious
monetary reasons), and government (better uses for tax dollars).

Now, some have suggested that for certain applications broad-based A.I. will
eventually become necessary, and the A.I. community will then be free to
pursue the old dream with more resources, more will, and more hope.

It's a nice dream, but it will likely remain a dream. Simply put,
narrow-A.I. meets and will continue to meet the needs of most applications.
Applications that require more AGI-type capabilities are few and far between
and can potentially be solved jointly by conventional information systems,
narrow-A.I., and another intelligence: humans. After all, do you really
require a voice-processing system to have creativity?

It has also been suggested that AGI-level A.I. components can "compete" with 
narrow-A.I. The underlying assumptions of this approach are that AGI-level
A.I. can be reduced to some definable components and that such components can
be utilized in a predictable, definable manner. Pretty tenuous.

All of this assumes, of course, that AGIs can be "developed competitively"
with narrow-A.I. That is to say: AGIs can be developed within identical
budgetary and resource constraints. Also highly tenuous.

I believe the realistic option, barring visionary, deep-pocketed
investors/philanthropists, is self-funding. Moreover, I believe that the
most efficient self-funding path is to enlist the general population as
funders.

Industry is more supportive of conservative, immediate-payoff, predictable
A.I. efforts, which currently can be met only by narrow-A.I., i.e. the
ever-present expert systems. Moreover, any attempt to create AGI-level A.I.
would naturally result in very restrictive arrangements with the developers:
"real"/"true" artificial intelligence would be considered a competitive
advantage to be closely guarded.

A relevant example: 1000 Planets Inc. approached a number of insurance
providers for the satellite industry approximately one year ago in an effort
to attract potential investors for an Orbital Maneuvering Vehicle (a "space
tug boat" with spacecraft servicing capability). Such a spacecraft would
reduce satellite transportation costs in Earth orbit while reducing risk and
enabling spacecraft repair and upgrading. However, after the initial interest
we were bluntly told by one insurer that they would not provide funding, as
they felt that they would be helping their competitors. In a "cut-throat"
industry where a single satellite loss can cost millions, the insurers were
concerned that by helping to reduce collective risks they were potentially
helping their competitors. The other possible way we could have secured
their funding would have revolved around our becoming a strategic partner, an
"in-house" project, or a subsidiary. Inevitably, such a course would fail due
to perceived costs, i.e. technological risk, money, etc.

The general populace is more supportive of A.I. development efforts -
including AGI - than either industry or government. Some apprehensions do
exist in regard to AGI; however, they will eventually need to be addressed
regardless of the funding avenues utilized. In addition, the general
populace collectively surpasses the traditional funders of A.I. in sheer funds.

An example: the computer/console entertainment industry (estimated at over
20 billion USD) exceeds the movie theatre industry and will soon eclipse the
movie rental industry as well.

RE: [agi] Re: Games for AIs

2002-12-13 Thread Gary Miller
On December 12th Jonathan Standley said:

<< On a practical note, if the above hypothesis is correct, it would be
<< relatively easy to identify the signature patterns of different
<< emotions (via PET or fMRI) and emotionally "program" an AI's reward
<< structure to ensure that it behaves itself.

I would be particularly interested in seeing how different states of
consciousness, i.e. deep meditation and the state referred to as
enlightenment, are reflected at the brain level.

Enlightenment, pursued by many, is characterized as a higher state of
consciousness and is sometimes referred to in the psychological literature
as a peak experience, but I am not sure whether the two are completely
synonymous.

People who have pursued the experience, such as myself, and have been
given small tastes of success will tell you unequivocally that if it is
not endorphins that are being released, then there is something even more
powerful at work within the brain.

The interesting thing is that while in this state you perceive the
intellect as being greatly heightened, with thoughts flowing at an
extremely accelerated pace, and the sense of one's self, or of being separate
from everything else, is eliminated or greatly diminished.  Mystics who
devote their lives to the self-inducement of this state are not
necessarily doing so for just philosophic or religious reasons.  The
sense of clarity and pleasure experienced during the state may be very
addictive and may be the basis for the revelatory experiences that
inspired all modern day religions.  In many cases the experiences are so
strong that a single experience has been known to cause people to
completely change the direction of their lives.

While it is difficult to separate the scientific literature from the
large body of new age and religious hyperbole, there may be an overdrive
gear that can be triggered in the mind by practice of meditative
biofeedback.

Should a FAI have a MetaGoal to maximize its own perceived pleasure?
Since the FAI will need a mechanism to prioritize its internal goal
states, the external trigger for such a state could be used to reprioritize
the FAI's goal states, at least during early development, to induce it to
follow positive modes of thought and stay out of areas such as
obsessive-compulsive behavior, antisocial behavior, paranoia, megalomania,
and other states associated with mental illness.  By monitoring the FAI's
long-term goal stack it should be possible to watch the FAI's basic life
philosophy evolve.
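
A minimal sketch of the bookkeeping such a mechanism might involve, under my
own assumptions (the goal names, reward constants, and "flagged" modes below
are hypothetical, not drawn from any existing FAI design): goals sit in a
priority structure, an external "pleasure" trigger boosts whichever goals are
active when it fires, flagged undesirable modes are damped, and the stack is
snapshotted so its evolution can be monitored.

# Illustrative only: a goal stack reprioritized by an external reward
# signal, with flagged (undesirable) goals damped and snapshots kept
# so the evolving priorities can be inspected over time.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    priority: float
    flagged: bool = False          # e.g. obsessive or antisocial modes to avoid

@dataclass
class GoalStack:
    goals: list = field(default_factory=list)
    history: list = field(default_factory=list)   # snapshots for monitoring

    def reward(self, active: set, signal: float) -> None:
        """External 'pleasure' trigger: boost active goals, damp flagged ones."""
        for g in self.goals:
            if g.name in active:
                g.priority += signal
            if g.flagged:
                g.priority *= 0.5
        self.goals.sort(key=lambda g: g.priority, reverse=True)
        self.history.append([(g.name, round(g.priority, 2)) for g in self.goals])

stack = GoalStack([
    Goal("learn_language", 1.0),
    Goal("help_user", 1.0),
    Goal("monopolize_resources", 1.0, flagged=True),
])
stack.reward(active={"help_user"}, signal=0.8)
stack.reward(active={"help_user", "learn_language"}, signal=0.5)
print(stack.history[-1])           # watch the "basic life philosophy" drift

Whether an externally writable reward channel is safe to expose at all is the
open question here; the sketch only shows how the long-term goal stack could
be logged and watched.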





[agi] Brain-Mind Cognition Theory: AI4U

2002-12-13 Thread Arthur T. Murray

2002 marks the publication of a joint textbook for neuroscience 
and artificial intelligence (AI).  The thirty-four chapters of 
"AI4U: Mind-1.1 Programmer's Manual" (ISBN 0-595-25922-7) by Arthur
T. Murray correspond with 34 functional mind-modules of the primitive but 
evolving artificial Mind.  A brain-mind diagram at the start of each chapter 
shows the function of an AI software module and its associative relationship 
within the surrounding mindgrid that simulates the human cerebral cortex. 
The AI4U book is the original publication of this work and is therefore
a primary source document for historians of neuroscience.  AI4U is now at
http://www.iuniverse.com/bookstore/book_detail.asp?isbn=0595259227 and at
http://search.barnesandnoble.com/bookSearch/isbnInquiry.asp?ISBN=0595259227.
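
For readers unfamiliar with this style of design, here is a generic sketch of
the "modules wired into a mindgrid" idea; the module names and links are
placeholders of my own and are not taken from AI4U or the Mind-1.1 code.

# Generic sketch of a modular mind: named modules plus associative links.
class MindModule:
    def __init__(self, name):
        self.name = name
        self.links = []                    # associative relationships

    def link(self, other):
        self.links.append(other)

class MindGrid:
    """A registry of mind-modules and their associative wiring."""
    def __init__(self):
        self.modules = {}

    def add(self, name):
        self.modules[name] = MindModule(name)

    def connect(self, a, b):
        self.modules[a].link(self.modules[b])
        self.modules[b].link(self.modules[a])

grid = MindGrid()
for name in ("sensory_input", "concept_memory", "thought", "motor_output"):
    grid.add(name)
grid.connect("sensory_input", "concept_memory")
grid.connect("concept_memory", "thought")
grid.connect("thought", "motor_output")

for m in grid.modules.values():
    print(m.name, "->", [n.name for n in m.links])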
