Re: [agi] AGI introduction

2007-06-26 Thread YKY (Yan King Yin)

Hi Pei,

I'm giving a presentation to CityU of Hong Kong next week, on AGI in general
and about my project.  Can I use your listing of representative AGIs in
one slide?

Also, if I spend 1 slide to talk about NARS, what phrases would you
recommend? ;)

Thanks a lot!
YKY


Re: [agi] AGI introduction

2007-06-26 Thread Pei Wang

On 6/26/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


Hi Pei,

I'm giving a presentation to CityU of Hong Kong next week, on AGI in general
and about my project.  Can I use your listing of representative AGIs in
one slide?


Sure --- it is already in the public domain.


Also, if I spend 1 slide to talk about NARS, what phrases would you
recommend? ;)


The first two sentences under NARS in the list.

Pei


Thanks a lot!
YKY 




Re: [agi] AGI introduction

2007-06-24 Thread Eliezer S. Yudkowsky

Pei Wang wrote:

Hi,

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.


This looks pretty good to me.  My compliments.

(And now the inevitable "however"...)

However, the distinction you intended between "capability" and
"principle" did not become clear to me until I looked at the very last
table, which classified AI architectures.  I was initially quite
surprised to see AIXI listed as "principle" and Cyc listed as
"capability".


I had read "capability" - "to solve hard problems" - as meaning the power
to optimize a utility function, like the sort of thing AIXI does to
its reward button, which when combined with the "unified" column would
designate an AI approach that derived every element by backward
chaining from the desired environmental impact.  But it looks like you
meant "capability" in the sense that the designers had a particular
hard AI subproblem in mind, like natural language.
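
For reference, the expectimax expression defining AIXI (following
Hutter's formulation; notation roughly his) is

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         (r_k + \cdots + r_m)
         \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

i.e., the next action maximizes expected total reward under a
Solomonoff-style mixture over all environment programs q (of length
\ell(q), run on the universal machine U) that are consistent with the
interaction history so far.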


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] AGI introduction

2007-06-24 Thread Pei Wang

Understood. The distinction isn't explained in the short introduction
at all, and that is why I linked to my paper
http://nars.wang.googlepages.com/wang.AI_Definitions.pdf , which
explains it in a semi-formal manner.

Pei

On 6/24/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:

Pei Wang wrote:
 Hi,

 I put a brief introduction to AGI at
 http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
 Overview followed by Representative AGI Projects.

This looks pretty good to me.  My compliments.

(And now the inevitable "however"...)

However, the distinction you intended between "capability" and
"principle" did not become clear to me until I looked at the very last
table, which classified AI architectures.  I was initially quite
surprised to see AIXI listed as "principle" and Cyc listed as
"capability".

I had read "capability" - "to solve hard problems" - as meaning the power
to optimize a utility function, like the sort of thing AIXI does to
its reward button, which when combined with the "unified" column would
designate an AI approach that derived every element by backward
chaining from the desired environmental impact.  But it looks like you
meant "capability" in the sense that the designers had a particular
hard AI subproblem in mind, like natural language.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence






Re: [agi] AGI introduction

2007-06-23 Thread Bo Morgan

Thanks for putting this together!  If I were to put myself into your 
theory of AI research, I would probably be roughly included in the 
Structure-AI and Capability-AI categories (better descriptions of the
brain, and computer programs that have more capabilities).

I haven't heard of a lot of these systems' current Capabilities.  A lot of 
them are pretty old--like SOAR and ACT-R.

I tried finding literature on the success of some of these architectures, 
but most of the available literature was in the "theory of theories of AI" 
category.  The SOAR literature, for example, is massive and mostly focused 
on small independent projects.

Are there large real-world problems that have been solved by these 
systems?  I would find Capability links very useful if they were added.

Bo

On Fri, 22 Jun 2007, Pei Wang wrote:

) Hi,
) 
) I put a brief introduction to AGI at
) http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
) Overview followed by Representative AGI Projects.
) 
) It is basically a bunch of links and quotations organized according to
) my opinion. Hopefully it can help some newcomers to get a big picture
) of the idea and the field.
) 
) Pei
) 



Re: [agi] AGI introduction

2007-06-23 Thread Pei Wang

On 6/23/07, Bo Morgan [EMAIL PROTECTED] wrote:


Thanks for putting this together!  If I were to put myself into your
theory of AI research, I would probably be roughly included in the
Structure-AI and Capability-AI categories (better descriptions of the
brain, and computer programs that have more capabilities).


It is a reasonable position, though in the long run you may have to
choose between the two, since they often conflict.


I haven't heard of a lot of these systems' current Capabilities.  A lot of
them are pretty old--like SOAR and ACT-R.


At the current stage, no AGI system has achieved remarkable
capability. In the list, the ones that have the most practical
applications are probably Cyc, SOAR, and ACT-R.


I tried finding literature on the success of some of these architectures,
but most of the available literature was in the "theory of theories of AI"
category.  The SOAR literature, for example, is massive and mostly focused
on small independent projects.


Soar and ACT-R, in their current form, are programming languages and
platforms, in the sense that whoever uses them is responsible for
writing models in them. Therefore, to say "Soar is general-purpose"
is like saying "Java is general-purpose" --- the system can be applied
in many domains, but each application is indeed a small independent
project. This is already very different from what Newell had in mind at
the beginning of Soar.
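
To make the platform-versus-model point concrete, here is a toy
forward-chaining production-rule interpreter (a hypothetical Python
sketch, not actual Soar or ACT-R syntax): the interpreter itself is
general-purpose, but every capability has to come from a hand-written
model.

# Toy production-system platform: a general-purpose interpreter plus a
# domain-specific model. Purely illustrative; not Soar/ACT-R syntax.

def run(rules, memory, max_cycles=10):
    """Fire the first rule whose condition matches, until quiescence."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(memory):
                action(memory)
                break
        else:
            break          # no rule matched: stop
    return memory

# A "model" for one tiny domain; another domain needs another model.
blocks_model = [
    (lambda m: "goal" in m and m["goal"] in m["table"],
     lambda m: (m.update(holding=m["goal"]), m.pop("goal"))),
]

print(run(blocks_model, {"table": {"A", "B"}, "goal": "A"}))
# -> {'table': {'A', 'B'}, 'holding': 'A'}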


Are there large real-world problems that have been solved by these
systems?  I would find Capability links very useful if they were added.


I don't think there is any such solution, though that is not the major
issue they face as AGI projects. As I analyzed in the paper on AI
definitions, they are not designed with Capability as the primary
goal.

Pei


Bo

On Fri, 22 Jun 2007, Pei Wang wrote:

) Hi,
)
) I put a brief introduction to AGI at
) http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
) Overview followed by Representative AGI Projects.
)
) It is basically a bunch of links and quotations organized according to
) my opinion. Hopefully it can help some newcomers to get a big picture
) of the idea and the field.
)
) Pei
)






Re: [agi] AGI introduction

2007-06-23 Thread Lukasz Stafiniak

On 6/22/07, Pei Wang [EMAIL PROTECTED] wrote:

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.


I think that the "hybrid" and "integrated" descriptions are useful,
especially when seeing AGI in the broader context of agent systems,
but they need to be further elaborated (I posted about
TouringMachines hoping to bring that up). For me, right now, they seem
almost co-extensive.
As for the meaning, to me, "hybrid" means integrated at the level of
engineering, and "integrative" means integrated (by synthesis rather
than by dominance) at the conceptual level.



Re: [agi] AGI introduction

2007-06-23 Thread Lukasz Stafiniak

On 6/23/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

I think that the "hybrid" and "integrated" descriptions are useful,
especially when seeing AGI in the broader context of agent systems,
but they need to be further elaborated (I posted about
TouringMachines hoping to bring that up). For me, right now, they seem
almost co-extensive.
As for the meaning, to me, "hybrid" means integrated at the level of
engineering, and "integrative" means integrated (by synthesis rather
than by dominance) at the conceptual level.


For example, the RL book shows how to integrate planning and reactive
reinforcement at the conceptual level.
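
(The integration referred to is presumably the Dyna-Q idea from Sutton
and Barto's book, where one loop interleaves reactive one-step
Q-learning with planning updates replayed from a learned model. A
minimal tabular sketch in Python, with every parameter value
illustrative:)

import random
from collections import defaultdict

# Dyna-Q sketch: "reactive" Q-learning updates from real experience,
# interleaved with "planning" updates replayed from a learned model.
ALPHA, GAMMA, N_PLANNING = 0.1, 0.95, 10

Q = defaultdict(float)   # Q[(state, action)] -> value estimate
model = {}               # model[(state, action)] -> (reward, next_state)

def best_value(state, actions):
    return max(Q[(state, a)] for a in actions)

def dyna_q_step(state, action, reward, next_state, actions):
    # Reactive part: ordinary one-step Q-learning on the real transition.
    target = reward + GAMMA * best_value(next_state, actions)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    # Remember the transition in a (here deterministic) world model.
    model[(state, action)] = (reward, next_state)
    # Planning part: replay simulated transitions drawn from the model.
    for _ in range(N_PLANNING):
        (s, a), (r, s2) = random.choice(list(model.items()))
        target = r + GAMMA * best_value(s2, actions)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

# e.g.: dyna_q_step("s0", "right", 1.0, "s1", actions=["left", "right"])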



Re: [agi] AGI introduction

2007-06-23 Thread Bo Morgan

On Sat, 23 Jun 2007, Pei Wang wrote:

) On 6/23/07, Bo Morgan [EMAIL PROTECTED] wrote:
)  
)  Thanks for putting this together!  If I were to put myself into your
)  theory of AI research, I would probably be roughly included in the
)  Structure-AI and Capability-AI categories (better descriptions of the
)  brain, and computer programs that have more capabilities).
) 
) It is a reasonable position, though in the long run you may have to
) choose between the two, since they often conflict.

For example, if one can mentally simulate a computation, it has an analog 
in the brain.  I just want to describe the brain in computer language, 
which will require much more advanced programming languages to just get 
computers to simulate things similar to what people can do mentally.

--

)  I haven't heard of a lot of these systems' current Capabilities.  A lot of
)  them are pretty old--like SOAR and ACT-R.
) 
) At the current stage, no AGI system has achieved remarkable
) capability. In the list, the ones have most practical applications are
) probably Cyc, SOAR, and ACT-R.

Well, they've been trying to find Capabilities. For example (I'm no ACT-R 
expert at all), I read a paper about how they are looking for 
correlations between their planner's stack size and fMRI BOLD signal 
voxels.  This would be a cool Capability in terms of Structural-AI if they 
were able to pull it off: a simple theory of planning, but slow progress 
toward Structural-AI.
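
(In outline, that kind of model-to-brain comparison just correlates the
model's trace against each voxel's time series. A hypothetical Python
sketch with synthetic data, not a reproduction of the paper's actual
method:)

import numpy as np

# Correlate a simulated planner's stack-depth trace with each voxel's
# BOLD time series. All data below is synthetic and illustrative.
rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 5000

stack_depth = rng.integers(0, 8, size=n_timepoints).astype(float)
bold = rng.standard_normal((n_timepoints, n_voxels))

# Pearson correlation of the trace against every voxel at once.
z_trace = (stack_depth - stack_depth.mean()) / stack_depth.std()
z_bold = (bold - bold.mean(axis=0)) / bold.std(axis=0)
r = z_bold.T @ z_trace / n_timepoints        # shape: (n_voxels,)

print("best-matching voxel:", r.argmax(), "r =", round(r.max(), 3))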

--

)  Are there large real-world problems that have been solved by these
)  systems?  I would find Capability links very useful if they were added.
) 
) I don't think there is any such solution, though that is not the major
) issue they face as AGI projects. As I analyzed in the paper on AI
) definitions, they are not designed with Capability as the primary
) goal.

Hmm..  It seems that even if Capability-AI isn't the primary goal of the 
theory, it must be *one* of the goals.  A Human-Scale thinking system is 
going to have a lot of small milestones of Capability, and I'm sure many 
of these systems have reached something similar, because they've been 
around for 20-30 years.  I'm no expert on any of these systems, but I'm 
just trying to find how successful each has been in terms of Capability, 
which seems to be at least a distant subgoal of all of them.  Even if 
they are purely theoretical, they must be created with the intention of 
creating other theories that do have Capabilities?!

Bo

) Pei
) 
)  Bo
)  
)  On Fri, 22 Jun 2007, Pei Wang wrote:
)  
)  ) Hi,
)  )
)  ) I put a brief introduction to AGI at
)  ) http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
)  ) Overview followed by Representative AGI Projects.
)  )
)  ) It is basically a bunch of links and quotations organized according to
)  ) my opinion. Hopefully it can help some newcomers to get a big picture
)  ) of the idea and the field.
)  )
)  ) Pei
)  )
)  
) 



Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

On 22/06/07, Pei Wang [EMAIL PROTECTED] wrote:

Hi,

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.

It is basically a bunch of links and quotations organized according to
my opinion. Hopefully it can help some newcomers to get a big picture
of the idea and the field.

Pei



I like the overview, but I don't think it captures every possible type
of AGI design approach, and it may overly constrain people's thoughts as
to the possibilities.

Mine I would describe as foundationalist/integrative. That is, while
we need to integrate our knowledge of sensing, planning, natural
language, and reasoning, this needs to be done in the correct foundation
architecture.

My theory is that the computer architecture has to be more brain-like
than a simple stored-program architecture in order to allow
resource-constrained AI to be implemented efficiently. The approach I am
investigating is an architecture that can direct the changing of its
programs, by allowing the self-directed changes to the stored programs
that are better for following a goal to persist.

Changes can come from any source (proof, random guess, translations of
external suggestions), so speed of change is not an issue.

 Will Pearson



Re: [agi] AGI introduction

2007-06-23 Thread Mike Tintner


- Will Pearson: My theory is that the computer architecture has to be
more brain-like than a simple stored-program architecture in order to
allow resource-constrained AI to be implemented efficiently. The
approach I am investigating is an architecture that can direct the
changing of its programs, by allowing the self-directed changes to the
stored programs that are better for following a goal to persist. Changes
can come from any source (proof, random guess, translations of external
suggestions), so speed of change is not an issue.


What's the difference between a stored program and the brain's programs that 
allows these self-directed changes to come about? (You seem to be trying to 
formulate something v. fundamental). And what kind of human mental activity 
do you see as evidence of the brain's different kind of programs?





Re: [agi] AGI introduction

2007-06-23 Thread Pei Wang

On 6/23/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

I think that the "hybrid" and "integrated" descriptions are useful,
especially when seeing AGI in the broader context of agent systems,
but they need to be further elaborated (I posted about
TouringMachines hoping to bring that up). For me, right now, they seem
almost co-extensive.
As for the meaning, to me, "hybrid" means integrated at the level of
engineering, and "integrative" means integrated (by synthesis rather
than by dominance) at the conceptual level.


I use these two words to distinguish the integration in AGI projects
(e.g., Novamente ...) from the integration in mainstream AI, such as
the work reported in the Integrated Intelligence Special Track of
AAAI, though none of the latter type has reached the level of AGI yet.
Of course, the boundary is not absolute, but the difference is still
quite clear. According to mainstream AI people, all current AI
research may contribute to AGI (since the special-purpose tools can be
integrated), but according to the AGI people, even an integrated
approach should start from the big picture.

Pei



Re: [agi] AGI introduction

2007-06-23 Thread Pei Wang

On 6/23/07, Bo Morgan [EMAIL PROTECTED] wrote:


On Sat, 23 Jun 2007, Pei Wang wrote:

) On 6/23/07, Bo Morgan [EMAIL PROTECTED] wrote:
) 
)  Thanks for putting this together!  If I were to put myself into your
)  theory of AI research, I would probably be roughly included in the
)  Structure-AI and Capability-AI categories (better descriptions of the
)  brain, and computer programs that have more capabilities).
)
) It is a reasonable position, though in the long run you may have to
) choose between the two, since they often conflict.

For example, if one can mentally simulate a computation, it has an analog
in the brain.  I just want to describe the brain in computer language,
which will require much more advanced programming languages to just get
computers to simulate things similar to what people can do mentally.


Sure you can, but this is mostly what I call Structure-AI.
Capability-AI is more about practical problem solving, where whether
the process follows the human way doesn't matter, as in Deep Blue.


Hmm..  It seems that even if Capability-AI isn't the primary goal of the
theory, it must be *one* of the goals.


Of course. Everyone has practical applications in mind; the
difference is how much priority this goal has, compared with the other
goals.

Pei



Re: [agi] AGI introduction

2007-06-23 Thread Pei Wang

On 6/23/07, William Pearson [EMAIL PROTECTED] wrote:


I like the overview, but I don't think it captures every possible type
of AGI design approach, and it may overly constrain people's thoughts as
to the possibilities.


Of course I didn't claim that, and I'm sorry if it came across that way.

What I listed under "Representative AGI Projects" are just
AGI-oriented projects with enough material to be analyzed and
criticized. I surely know that there are many people working on other
ideas, and at the current time it is way too early to say which one
will work.

I just don't think it is possible to list all the possibilities, so
for beginners, the relatively more mature ones are better places to
start. Even if they don't like the ideas (I don't agree with many of
the ideas myself), at least they should know what has been proposed
and explored to a certain depth.

I'll be glad to include more projects in the list in the
future, as long as they satisfy the criteria set out before the list.

Pei



Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

On 23/06/07, Mike Tintner [EMAIL PROTECTED] wrote:


- Will Pearson: My theory is that the computer architecture has to be
more brain-like than a simple stored-program architecture in order to
allow resource-constrained AI to be implemented efficiently. The
approach I am investigating is an architecture that can direct the
changing of its programs, by allowing the self-directed changes to the
stored programs that are better for following a goal to persist.
Changes can come from any source (proof, random guess, translations of
external suggestions), so speed of change is not an issue.

What's the difference between a stored program and the brain's programs that
allows these self-directed changes to come about? (You seem to be trying to
formulate something v. fundamental).


I think the brain's programs have the ability to protect their own
storage from interference from other programs. The architecture will
only allow programs that have proven themselves better* to be able to
override this protection on other programs if they request it.

If you look at the brain, it is fundamentally distributed and messy. To
stop errors propagating as they do in stored-program architectures, you
need something more decentralised than the currently attempted
dictatorial kernel control.

It is instructive to look at how the stored-program architectures have
been struggling to secure against buffer overruns, to protect against
inserted code subverting the rest of the machine.
Measures that have been taken include the no-execute bits on
non-program memory and randomising where programs are stored in
memory so they can't be overwritten. You are even getting to the stage
in trusted computing where you aren't allowed to access certain
portions of memory unless you have the correct cryptographic
credentials. I would rather go another way. If you have some form of
knowledge of what a program is worth embedded in the architecture,
then you should be able to limit these sorts of problems, and allow
more experimentation.

If you try self-modifying and experimental code on a simple
stored-program system, it will generally cause errors and lots of
problems when things go wrong, as there are no safeguards on what the
program can do. You can lock the experimental code in a sandbox, as in
genetic programming, but then it can't replace older code or change
the methods of experimentation. You can also use formal proof, but
that greatly limits what sources of information you can use as
inspiration for the experiment.

My approach allows an experimental bit of code, if it proves itself by
being useful, to take the place of other code, if it happens to be
coded to take over the function as well.


And what kind of human mental activity
do you see as evidence of the brain's different kind of programs?


Addiction. Or the general goal-optimising behaviour of the various
parts of the brain. Or that we notice things more if they are
important to us, which implies that our noticing functionality
improves depending on what our goal is. Also the general
pervasiveness of the dopaminergic neural system, which I think has an
important function in determining which programs or neural areas are
being useful.

* I shall now get back to how code is determined to be useful.
Interestingly, it is somewhat like the credit attribution for how much
work people have done on the AGI projects that some people have been
discussing. My current thinking is something like this. There is a
fixed function that can recognise manifestly good and bad situations,
and it provides a value every so often to all the programs that have
control of an output. If things are going well and some food is found,
the value goes up; if an injury is sustained, the value goes down. It
is the basic reinforcement learning idea.

Within the architecture, the value becomes a fungible, distributable,
but conserved, resource.  It is analogous to money, although when it is
used to overwrite something it is removed in proportion to how useful
the overwritten program was. The outputting programs pass it back to the
programs that have given them the information they needed to output,
whether that information is from long-term memory or processed from
the environment. These second-tier programs pass it further back.
However, the method of determining who gets the credit doesn't have to
always be a simplistic function; each program can have heuristics on how
to distribute the utility based on the information it gets from each of
its partners. As these heuristics are just part of each program, they
can change as well.
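
A toy sketch of that credit flow in Python (every name, number, and the
even-split heuristic here is hypothetical; it is one reading of the
description above, not a worked-out design):

# Conserved credit enters at output-controlling programs and is passed
# back to the programs that supplied the information they needed.
class Program:
    def __init__(self, name, suppliers=(), keep_fraction=0.5):
        self.name = name
        self.suppliers = list(suppliers)    # programs this one drew on
        self.credit = 0.0
        self.keep_fraction = keep_fraction  # this program's split heuristic

    def receive(self, amount):
        kept = amount * self.keep_fraction if self.suppliers else amount
        self.credit += kept
        # Pass the remainder back, evenly here; a real program could use
        # any heuristic, and the heuristic itself can change over time.
        share = (amount - kept) / max(len(self.suppliers), 1)
        for s in self.suppliers:
            s.receive(share)

# A tiny two-tier economy: memory and perception feed an output program.
memory = Program("long-term-memory")
percept = Program("environment-processor")
output = Program("motor-output", suppliers=[memory, percept])

output.receive(10.0)   # the fixed evaluator found things going well
for p in (output, memory, percept):
    print(p.name, p.credit)   # credits sum to 10.0: the resource is conserved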

So in the end you get an economy of programs that aren't forced to do
anything; just those that perform well can overwrite those that don't
do so well. It is a very loose constraint on what the system actually
does. On top of this, in order to get an AGI, you would integrate
everything we know about language, senses, naive physics, mimicry and
other things yet to be discovered. Also adding the new knowledge we 

Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread Bo Morgan

On Sun, 24 Jun 2007, William Pearson wrote:

) I think the brain's programs have the ability to protect their own
) storage from interference from other programs. The architecture will
) only allow programs that have proven themselves better* to be able to
) override this protection on other programs if they request it.
) 
) If you look at the brain, it is fundamentally distributed and messy. To
) stop errors propagating as they do in stored-program architectures, you
) need something more decentralised than the currently attempted
) dictatorial kernel control.

This is only partially true, and mainly only for the neocortex, right?  
For example, removing small parts of the brainstem results in coma.

) Within the architecture, the value becomes a fungible, distributable,
) but conserved, resource.  It is analogous to money, although when it is
) used to overwrite something it is removed in proportion to how useful
) the overwritten program was. The outputting programs pass it back to the
) programs that have given them the information they needed to output,
) whether that information is from long-term memory or processed from
) the environment. These second-tier programs pass it further back.
) However, the method of determining who gets the credit doesn't have to
) always be a simplistic function; each program can have heuristics on how
) to distribute the utility based on the information it gets from each of
) its partners. As these heuristics are just part of each program, they
) can change as well.

Are there elaborations (or a general name that I could look up) on this 
theory? It sounds good.  For example, you're referring to multiple tiers 
of organization, which sound like larger-scale organizations that maybe 
have been further discussed elsewhere?

It sounds like there are intricate dependency networks that must be 
maintained, for starters.  A lot of supervision and support code that 
does this--or is that evolved in the system also?

--
Bo



Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

On 24/06/07, Bo Morgan [EMAIL PROTECTED] wrote:


On Sun, 24 Jun 2007, William Pearson wrote:

) I think the brain's programs have the ability to protect their own
) storage from interference from other programs. The architecture will
) only allow programs that have proven themselves better* to be able to
) override this protection on other programs if they request it.
)
) If you look at the brain, it is fundamentally distributed and messy. To
) stop errors propagating as they do in stored-program architectures, you
) need something more decentralised than the currently attempted
) dictatorial kernel control.

This is only partially true, and mainly only for the neocortex, right?
For example, removing small parts of the brainstem results in coma.


I'm talking about control in memory access, and by memory access I am
referring to synaptic

In a coma, the other bits of the brain may still be doing things. Not
inputting or outputting, but possibly other useful things (equivalents
of defragmentation, who knows). Sleep is important for learning, and a
coma is an equivalent state to deep sleep. Just one that cannot be


) Within the architecture, the value becomes a fungible, distributable,
) but conserved, resource.  It is analogous to money, although when it is
) used to overwrite something it is removed in proportion to how useful
) the overwritten program was. The outputting programs pass it back to the
) programs that have given them the information they needed to output,
) whether that information is from long-term memory or processed from
) the environment. These second-tier programs pass it further back.
) However, the method of determining who gets the credit doesn't have to
) always be a simplistic function; each program can have heuristics on how
) to distribute the utility based on the information it gets from each of
) its partners. As these heuristics are just part of each program, they
) can change as well.

Are there elaborations (or a general name that I could look up) on this
theory? It sounds good.  For example, you're referring to multiple tiers
of organization, which sound like larger-scale organizations that maybe
have been further discussed elsewhere?


Sorry. It is pretty much all just me at the moment, and the higher
tiers of organisation are just fragments that I know will need to be
implemented or planned for, but have no concrete ideas for at the
moment. I haven't written up everything at the low level either,
because I am not working on this full time. I hope to start a PhD on
it soon, although I don't know where. It will mainly be working on
trying to get a theory of how to design the system properly, so that
the system will only reward those programs that do well and won't
encourage defectors to spoil what other programs are doing, based on
game theory and economic theory. That is the level I am mainly
concentrating on right now.


It sounds like there are intricate dependency networks that must be
maintained, for starters.  A lot of supervision and support code that
does this--or is that evolved in the system also?


My rule of thumb is to try to put as much as possible into the
changeable/evolving section, but to code it by hand to start with if it
is needed for the system to start to do some work. The only reason to
keep something on the outside is if the system would be unstable with it
on the inside, e.g. the functions that give out reward.

Will Pearson



Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

Sorry, sent accidentally while half finished.

Bo wrote:

This is only partially true, and mainly only for the neocortex, right?
For example, removing small parts of the brainstem results in coma.


I'm talking about control in memory access, and by memory access I am
referring to synaptic changes in the brain. While the brain stem has
dictatorial control over consciousness and activity, it does not
necessarily control all activity in the brain in terms of memory and
how it changes, which is what I am interested in.

In a coma, the other bits of the brain may still be doing things. Not
inputting or outputting, but possibly other useful things (equivalents
of defragmentation, who knows). Sleep is important for learning, and a
coma is an equivalent brain state to deep sleep. Just one that cannot
be stopped in the usual fashion.

Will Pearson



[agi] AGI introduction

2007-06-22 Thread Pei Wang

Hi,

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.

It is basically a bunch of links and quotations organized according to
my opinion. Hopefully it can help some newcomers to get a big picture
of the idea and the field.

Pei



Re: [agi] AGI introduction

2007-06-22 Thread Lukasz Stafiniak

On 6/22/07, Pei Wang [EMAIL PROTECTED] wrote:

Hi,

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.


Thanks! As a first note, SAIL seems to me a better replacement for
Cog, because SAIL has more generality and some theoretical
accomplishment, where Cog is (AFAIK) hand-crafted engineering.



Re: [agi] AGI introduction

2007-06-22 Thread Mike Tintner


Pei: I put a brief introduction to AGI at

http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.


Very helpful. Thank you.



Re: [agi] AGI introduction

2007-06-22 Thread Pei Wang

On 6/22/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:


As a first note, SAIL seems to me a better replacement for
Cog, because SAIL has more generality and some theoretical
accomplishment, where Cog is (AFAIK) hand-crafted engineering.


In many respects, I agree that SAIL is more interesting than Cog.

I include Cog in the list, because it is explicitly based on a theory
about intelligence as a whole (see
http://groups.csail.mit.edu/lbr/hrg/1998/group-AAAI-98.pdf), while in
SAIL such a theory is not very clear. Of course, this boundary is
fuzzy, so I may include SAIL in a future version of the list,
depending on the development of the project.

Pei
