Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

On 23/06/07, Mike Tintner <[EMAIL PROTECTED]> wrote:


- Will Pearson:
> My theory is that the computer architecture has to be more brain-like
> than a simple stored program architecture in order to allow
> resource-constrained AI to be implemented efficiently. The way that I
> am investigating is an architecture that can direct the changing of
> the programs by allowing self-directed changes to the stored programs
> that are better for following a goal to persist. Changes can come from
> any source (proof, random guess, translations of external
> suggestions), so speed of change is not an issue.

What's the difference between a stored program and the brain's programs that
allows these self-directed changes to come about? (You seem to be trying to
formulate something v. fundamental).


I think the brain's programs have the ability to protect their own
storage from interference from other programs. The architecture will
only allow programs that have proven themselves better* to override
this protection on other programs when they request it.

If you look at the brain, it is fundamentally distributed and messy. To
stop errors propagating as they do in stored-program architectures, you
need something more decentralised than the dictatorial kernel control
currently attempted.

It is instructive to look at how stored-program architectures have been
struggling to secure themselves against buffer overruns, to prevent
inserted code from subverting the rest of the machine. Measures that
have been taken include no-execute (NX) bits on non-programmatic memory
and randomising where programs are stored in memory so that an attacker
cannot predict where to aim an exploit. You are even getting to the
stage, in trusted computing, where you aren't allowed to access certain
portions of memory unless you have the correct cryptographic
credentials. I would rather go another way. If some form of knowledge
of what a program is worth is embedded in the architecture, then you
should be able to limit these sorts of problems and allow more
experimentation.
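
To make that concrete, here is a rough Python sketch of worth-gated
write protection. All the names and the exact costing rule are
illustrative assumptions on my part, not a worked-out design:

class ProtectedStore:
    """Program storage where overwrites are gated on recorded worth."""

    def __init__(self):
        self.programs = {}  # slot id -> program code/data
        self.worth = {}     # slot id -> utility the program has earned

    def install(self, slot, code, worth=0.0):
        self.programs[slot] = code
        self.worth[slot] = worth

    def request_overwrite(self, requester, target, new_code):
        """Allow the overwrite only if the requesting program has earned
        more worth than the program occupying the target slot."""
        if self.worth.get(requester, 0.0) <= self.worth.get(target, 0.0):
            return False  # the protection holds
        # Overwriting costs the requester, depending on how useful the
        # overwritten program was.
        self.worth[requester] -= self.worth[target]
        self.programs[target] = new_code
        self.worth[target] = 0.0  # the new occupant starts unproven
        return True

In this picture an experimental program would be installed in a spare
slot, left to earn worth, and only once it had accumulated more than an
incumbent could it successfully call request_overwrite on that
incumbent's slot.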

If you try self-modifying, experimental code on a simple stored-program
system, it will generally cause errors and lots of problems when things
go wrong, as there are no safeguards on what the program can do. You
can lock the experimental code in a sandbox, as in genetic programming,
but then it can't replace older code or change the methods of
experimentation. You can also use formal proof, but that greatly limits
what sources of information you can use as inspiration for the
experiment.

My approach allows an experimental bit of code that proves itself
useful to take the place of other code, provided it happens to be coded
to take over that code's function as well.


And what kind of human mental activity
do you see as evidence of the brain's different kind of programs?


Addiction. Or the general goal-optimising behaviour of the various
different parts of the brain. We notice things more if they are
important to us, which implies that our noticing functionality improves
depending on what our goal is. There is also the general pervasiveness
of the dopaminergic system, which I think has an important function in
determining which programs or neural areas are being useful.

* I shall now get back to how code is determined to be useful.
Interestingly, it is somewhat like the credit attribution for how much
work people have done on the AGI projects that some people have been
discussing. My current thinking is something like this. There is a
fixed function that can recognise manifestly good and bad situations;
every so often it provides a value to all the programs that have
control of an output. If things are going well, say some food is found,
the value goes up; if an injury is sustained, the value goes down. This
is the basic reinforcement learning idea.
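
As a toy illustration only, with the particular events and numbers made
up, the fixed function and the payout might look something like this in
Python:

def fixed_value(situation):
    # Hand-coded and unchangeable: recognises manifestly good and bad
    # situations and turns them into a single scalar value.
    value = 0.0
    if situation.get("food_found"):
        value += 1.0
    if situation.get("injury"):
        value -= 1.0
    return value

def pay_output_programs(situation, output_programs, worth):
    # Every so often, hand the value to whichever programs currently
    # control an output (split evenly here, the simplest possible rule).
    value = fixed_value(situation)
    share = value / max(len(output_programs), 1)
    for prog in output_programs:
        worth[prog] = worth.get(prog, 0.0) + share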

Within the architecture the value becomes a fungible, distributable,
but conserved resource, analogous to money, although when it is used
to overwrite something it is removed depending on how useful the
overwritten program was. The outputting programs pass it back to the
programs that gave them the information they needed to output, whether
that information came from long-term memory or was processed from the
environment. These second-tier programs pass it further back. However,
the method of determining who gets the credit doesn't always have to
be a simplistic function; each program can have heuristics for
distributing the utility based on the information it gets from each of
its partners. As these heuristics are just part of each program, they
can change as well.
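
Here is a minimal sketch of that pass-back, where every name and the
keep-half/split-evenly default are again just illustrative assumptions;
all the paragraph above requires is that the resource stays conserved
and that each program's pass-back rule is part of the program itself:

class Program:
    def __init__(self, name):
        self.name = name
        self.worth = 0.0
        self.suppliers = []  # programs whose information this one used

    def credit_heuristic(self, amount):
        # Decide how much credit to keep and how to split the rest among
        # suppliers. Default: keep half, split the remainder evenly.
        # Being ordinary program code, this rule can itself be
        # overwritten later.
        if not self.suppliers:
            return amount, {}
        kept = amount * 0.5
        share = (amount - kept) / len(self.suppliers)
        return kept, {s: share for s in self.suppliers}

def propagate_credit(program, amount):
    # Pass credit back tier by tier; the total handed out stays conserved.
    # (Assumes the supplier graph is acyclic in this toy example.)
    kept, payments = program.credit_heuristic(amount)
    program.worth += kept
    for supplier, share in payments.items():
        propagate_credit(supplier, share)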

So in the end you get an economy of programs that aren't forced to do
anything; just those that perform well can overwrite those that don't
do so well. It is a very loose constraint on what the system actually
does. On top of this, in order to get an AGI, you would integrate
everything we know about language, senses, naive physics, mimicry and
other things yet to be discovered. Also adding the new know

Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread Bo Morgan

On Sun, 24 Jun 2007, William Pearson wrote:

) I think the brain's programs have the ability to protect their own
) storage from interference from other programs. The architecture will
) only allow programs that have proven themselves better* to override
) this protection on other programs when they request it.
)
) If you look at the brain, it is fundamentally distributed and messy. To
) stop errors propagating as they do in stored-program architectures, you
) need something more decentralised than the dictatorial kernel control
) currently attempted.

This is only partially true, and mainly only for the neocortex, right?
For example, removing small parts of the brainstem results in a coma.

) Within the architecture the value becomes a fungible, distributable,
) but conserved resource, analogous to money, although when it is used
) to overwrite something it is removed depending on how useful the
) overwritten program was. The outputting programs pass it back to the
) programs that gave them the information they needed to output, whether
) that information came from long-term memory or was processed from the
) environment. These second-tier programs pass it further back. However,
) the method of determining who gets the credit doesn't always have to
) be a simplistic function; each program can have heuristics for
) distributing the utility based on the information it gets from each of
) its partners. As these heuristics are just part of each program, they
) can change as well.

Are there elaborations on this theory (or a general name that I could
look up)? It sounds good. For example, you're referring to multiple
tiers of organization, which sound like larger-scale organizations that
may have been discussed further elsewhere?

It sounds like there are intricate dependency networks that must be
maintained, for starters. Is there a lot of supervision and support
code that does this, or is that evolved in the system also?

--
Bo



Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

On 24/06/07, Bo Morgan <[EMAIL PROTECTED]> wrote:


On Sun, 24 Jun 2007, William Pearson wrote:

) I think the brain's programs have the ability to protect their own
) storage from interference from other programs. The architecture will
) only allow programs that have proven themselves better* to override
) this protection on other programs when they request it.
)
) If you look at the brain, it is fundamentally distributed and messy. To
) stop errors propagating as they do in stored-program architectures, you
) need something more decentralised than the dictatorial kernel control
) currently attempted.

This is only partially true, and mainly only for the neocortex, right?
For example, removing small parts of the brainstem results in a coma.


I'm talking about control in memory access, and by memory access I am
referring to synaptic

In a coma, the other bits of the brain may still be doing things. Not
inputting or outputting, but possibly other useful things (equivalents
of defragmentation, who knows). Sleep is important for learning, and a
coma is an equivalent state to deep sleep. Just one that cannot be


) Within the architecture the value becomes a fungible, distributable,
) but conserved resource, analogous to money, although when it is used
) to overwrite something it is removed depending on how useful the
) overwritten program was. The outputting programs pass it back to the
) programs that gave them the information they needed to output, whether
) that information came from long-term memory or was processed from the
) environment. These second-tier programs pass it further back. However,
) the method of determining who gets the credit doesn't always have to
) be a simplistic function; each program can have heuristics for
) distributing the utility based on the information it gets from each of
) its partners. As these heuristics are just part of each program, they
) can change as well.

Are there elaborations on this theory (or a general name that I could
look up)? It sounds good. For example, you're referring to multiple
tiers of organization, which sound like larger-scale organizations that
may have been discussed further elsewhere?


Sorry. It is pretty much all just me at the moment, and the higher
tiers of organisation are just fragments that I know will need to be
implemented or planned for, but that I have no concrete ideas for at
the moment. I haven't written up everything at the low level either,
because I am not working on this full time. I hope to start a PhD on it
soon, although I don't know where. It will mainly involve trying to get
a theory, based on game theory and economic theory, of how to design
the system properly, so that it will only reward those programs that do
well and won't encourage defectors to spoil what other programs are
doing. That is the level I am mainly concentrating on right now.


It sounds like there are intricate dependency networks that must be
maintained, for starters. Is there a lot of supervision and support
code that does this, or is that evolved in the system also?


My rule of thumb is to try to put as much as possible into the
changeable/evolving section, but to code it by hand to start with if it
is needed for the system to start to do some work. The only reason to
keep something on the outside is if the system would be unstable with
it on the inside, e.g. the functions that give out reward.
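
As a very small illustration of that split (names made up for the
example), the reward functions would sit in a fixed, hand-coded part
that nothing inside the system can overwrite, while everything else
starts hand-coded but lives in the evolvable pool:

# Fixed and outside: letting programs rewrite their own source of
# reward would make the whole economy unstable.
FIXED_OUTSIDE = {
    "reward": lambda situation: situation.get("food_found", 0) - situation.get("injury", 0),
}

# Inside the evolvable pool: hand-coded only so the system can begin
# doing useful work, but any of these can later be overwritten by a
# program that has proven itself more useful.
EVOLVABLE_INSIDE = {
    "perception": lambda inputs: inputs,
    "credit_split": lambda amount, suppliers: amount / max(len(suppliers), 1),
}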

Will Pearson



Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

Sorry, sent accidentally while half finished.

Bo wrote:

This is only partially true, and mainly only for the neocortex, right?
For example, removing small parts of the brainstem results in a coma.


I'm talking about control in memory access, and by memory access I am
referring to synaptic changes in the brain. While the brainstem has
dictatorial control over consciousness and activity, it does not
necessarily control all activity in the brain in terms of memory and
how it changes, which is what I am interested in.

In a coma, the other bits of the brain may still be doing things. Not
inputting or outputting, but possibly doing other useful things
(equivalents of defragmentation, who knows). Sleep is important for
learning, and a coma is a brain state equivalent to deep sleep, just
one that cannot be stopped in the usual fashion.

Will Pearson
