Re: [agi] structure of the mind

2007-03-20 Thread YKY (Yan King Yin)

On 3/20/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:

There is one way you can form a coherent, working system from a
congeries of random agents: put them in a marketplace. This has a
fairly rigorous discipline of its own and most of them will not
survive... and of course the system has to have some way of coming up
with new ones that will.
[...]


This is assuming that you have a *massive* number of agents who participate
in said market.  In reality I don't think there are a massive number of
narrow AI projects wanting to plug into a large AI ecosystem.  There are
many, but not massively many, narrow AI projects out there.

For example, I can believe there are 100s of face-recognition systems
world-wide, but definitely not > 10,000.

Can you clarify: are those agents all engineered by one group of
programmers, or are they recruited externally, e.g. from the internet?

In many ways, my rule-based production system cum truth maintenance
system can be viewed as a marketplace (of production rules or beliefs).
The beliefs in such a system depend on its experience, are
unpredictable, and are therefore emergent.  In this sense, *any* AGI
would display emergent behavior.

It all goes back to my original analysis:  everyone wants to start their own
marketplace and get other people to participate in it.

YKY



Re: [agi] structure of the mind

2007-03-20 Thread Eric Baum

YKY On 3/20/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
 There is one way you can form a coherent, working system from a
 congeries of random agents: put them in a marketplace. This has a
 fairly rigorous discipline of its own and most of them will not
 survive... and of course the system has to have some way of coming
 up with new ones that will.
 [...]

YKY This is assuming that you have a *massive* number of agents who
YKY participate in said market.  In reality I don't think there are a
YKY massive number of narrow AI projects wanting to plug into a large
YKY AI ecosystem.  There are many, but not massively many, narrow
YKY AI projects out there.

YKY For example, I can believe there are 100s of face-recognition
YKY systems world-wide, but definitely not > 10,000.

YKY Can you clarify: are those agents all engineered by one group
YKY of programmers, or are they recruited externally, e.g. from the
YKY internet?

I think what Josh had in mind was a system like my Hayek system,
where the agents were actually evolved. But there is no reason why
engineered agents couldn't participate also.

The point of the market is to provide feedback to the agents.
If the economy is set up right, then agents that earn money are
contributing to the performance of the overall system, and agents
that lose money are harming it. Thus the designer of the agent,
whether it be an evolutionary algorithm or a programmer or a team
of programmers, can pay attention to a local signal: earning money.

YKY In many ways, my rule-based production system cum truth
YKY maintenance system can be viewed as a market place (of production
YKY rules or beliefs).  The beliefs in such a system depend on its
YKY experience, are unpredictable, and are therefore emergent.  In this
YKY sense, *any* AGI would display emergent behavior.

YKY It all goes back to my original analysis: everyone wants to start
YKY their own marketplace and get other people to participate in
YKY it.

The caveat is that this only works if your economy is set up
correctly. It has to obey principles of conservation of money and
property rights. Numerous AI systems that believed they were
invoking economics-like organizations -- such as Eurisko and classifier
systems -- ran into pathologies because they didn't construct the
economy properly. (And related pathologies can be observed in the real
economy and ecosystem, where these principles are violated.)
I recommend Chapter 10 of What is Thought? for a more extended
explanation.
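
To make the two principles concrete, here is a toy sketch in Python --
my illustration for this post, not the actual Hayek implementation.
Agents bid for the right to act; the winning bid is paid into a world
account rather than created or destroyed (conservation of money), new
money enters only when the world pays for actual success, and each
agent's bid drifts toward its realized income. The skill numbers and
bidding rule are made up for the example:

    import random
    random.seed(1)

    class Agent:
        def __init__(self, skill):
            self.skill = skill     # chance its action actually succeeds
            self.wealth = 10.0
            self.bid = 1.0         # running estimate of what acting is worth

    agents = [Agent(s) for s in (0.2, 0.5, 0.9)]
    world_account, REWARD = 0.0, 2.0

    for _ in range(5000):
        live = [a for a in agents if a.wealth > 0] or agents
        # mostly sell the right to act to the highest bidder; explore a little
        winner = (random.choice(live) if random.random() < 0.1
                  else max(live, key=lambda a: a.bid))
        price = min(winner.bid, winner.wealth)
        winner.wealth -= price          # conservation: the bid is transferred,
        world_account += price          # not printed or burned
        income = REWARD if random.random() < winner.skill else 0.0
        winner.wealth += income         # new money only for real success
        world_account -= income
        winner.bid += 0.02 * (income - winner.bid)   # price formation

    for a in agents:
        print(f"skill={a.skill}  wealth={a.wealth:6.1f}  bid={a.bid:.2f}")

Run it and the bids converge toward each agent's expected payoff, the
low-skill agent goes broke, and wealth tracks contribution -- the local
signal a designer, or an evolutionary generator of new agents, can
watch.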

YKY YKY



Re: [agi] structure of the mind

2007-03-20 Thread Eric Baum
This response will cover points raised by several previous posts in
the emergence/agenda/structure of mind threads, by Goertzel, Hall,
Wallace, etc.

What makes an intelligence general, to the extent that is possible,
is that it does the right thing on new tasks or in new situations it
hadn't seen before. That's not going to happen unless the system
is built in a very constrained way to respond to previous situations,
say by being produced by very compact (or constrained) code. If you
just keep adding new modules or features for each new task, you may
solve that task, but you won't get generalization.

This is the problem with Wallace's complaints. You actually want the
machine [to do] something unpredicted, namely the right thing in
unpredicted circumstances. It's true that it's hard and expensive to
engineer/find an underlying compact explanation, but it is precisely
the fact that this very constrained/compact underlying program is
so improbable that makes it work! The arguments for its working
in fact *rest exactly* on the fact that it is so improbable that it
wouldn't exist unless it generalized to new experiences. So while
it's hard to engineer this, which might be called emergence,
you will IMO be forced to if you want to succeed. That is the
reason why AGI is hard.
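
To put a number on the improbability argument (this is just the
standard Occam counting bound, not anything specific to my system): a
fixed program of b bits agrees with N independent random yes/no
training cases by pure luck with probability 2^-N, and there are fewer
than 2^(b+1) programs that short, so the chance that *any* program of
at most b bits fits all N cases by luck is below 2^(b+1-N). Once N is
much larger than b, a compact program that fits the data almost
certainly reflects real structure, and hence generalizes.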

As has been pointed out in this thread (I believe by Goertzel and Hall),
Minsky's approach in Society of Mind et seq. of adding large numbers
of systems raises the question: how will these things ever work
together, and why should the system generalize? I criticized it
from this point of view in What is Thought? One way to try to handle
the organization then is an economic framework.

Hayek doesn't directly scale from a random start to an AGI
architecture, inasmuch as the learning is too slow. But the same is
true of any other means of EC (evolutionary computation) or learning
that doesn't start with some huge head start. It seems entirely
reasonable to merge a Hayek-like architecture with scaffolds and
hand-coded chunks and other stuff (maybe whatever is in Novamente) to
get it a head start. An advantage of having the economic system then
is to impose coherence and constrainedness -- parts that don't in fact
work effectively with others will be seen to be dying, forcing you to
fix the problems. Without the economic discipline, you are likely to
have subsystems (and sub-subsystems) you think are positive but are
failing in some way through interaction effects.

The brain was not developed exactly through a Hayek system, but that
doesn't mean it does not exploit one (for example, mediated by
endorphins or whatever) nor that one might not be very useful to
impose on an AGI.



Re: [agi] structure of the mind

2007-03-20 Thread Ben Goertzel


Eric Baum wrote:

Hayek doesn't directly scale from a random start to an AGI
architecture, inasmuch as the learning is too slow. But the same is
true of any other means of EC (evolutionary computation) or learning
that doesn't start with some huge head start. It seems entirely
reasonable to merge a Hayek-like architecture with scaffolds and
hand-coded chunks and other stuff (maybe whatever is in Novamente) to
get it a head start.

This does seem reasonable in principle, and is something worth exploring.

We use some economic ideas in the Novamente design, but those aspects
of the design have not been implemented yet except in crude prototype
form; and in the current version of the design they are more
simplistic than (and much faster than) the sort of stuff in Hayek...

An advantage of having the economic system then is to impose
coherence and constrainedness -- parts that don't in fact work
effectively with others will be seen to be dying, forcing you to fix
the problems. Without the economic discipline, you are likely to have
subsystems (and sub-subsystems) you think are positive but are failing
in some way through interaction effects.

True.  However, to get the economic system to work effectively enough
to identify problems in a general and accurate way requires significant
computational resources to be devoted to the economics aspect.  So the
system as a whole must make a tradeoff between more accurate economic
regulation and having more processor time for things other than
economic regulation...
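
Just to put rough, made-up numbers on the tradeoff: if auctions,
accounting and bid maintenance were to consume 20% of the system's
cycles, then the market-based allocation of the remaining 80% would
have to be at least 25% more efficient than a cheap heuristic
scheduler using all 100%, merely to break even.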

-- Ben



Re: [agi] structure of the mind

2007-03-20 Thread Russell Wallace

On 3/20/07, Eric Baum [EMAIL PROTECTED] wrote:


This is the problem with Wallace's complaints. You actually want the
machine [to do] something unpredicted, namely the right thing in
unpredicted circumstances. It's true that it's hard and expensive to
engineer/find an underlying compact explanation, but it is precisely
the fact that this very constrained/compact underlying program is
so improbable that makes it work! The arguments for its working
in fact *rest exactly* on the fact that it is so improbable that it
wouldn't exist unless it generalized to new experiences. So while
it's hard to engineer this, which might be called emergence,
you will IMO be forced to if you want to succeed. That is the
reason why AGI is hard.



It's one reason why AGI is hard, and there is truth in what you say.

However, ab initio search for compact explanations is so hard that we
humans mostly don't do it, because we can't. When we do have to bite the
bullet and explicitly attempt it, it often takes entire communities of
geniuses working for decades to produce a result that can be boiled down
to a few lines. Newton, Darwin, Einstein et al were by no means the only
ones working on their various problems. Koza has an example of the
invention of a simple circuit -- I think it was the negative feedback
amplifier or some such. You could draw it on the back of a cigarette
pack, yet it took a very bright engineer months or years of thinking
before he cracked it, and there were lots of others trying at the same
time.

What we mostly do is use existing solutions and blends thereof, that were
developed by our predecessors over millions of lifetimes. Even when I'm
programming, apparently writing new code, I'm really mostly using concepts I
learned from other people, tweaking and blending them to fit the current
context.

And an AGI will have to do the same. Yes, it will have to be able to bite
the bullet and run a full-blown search for a compact solution when
necessary. But that's just plain too hard to be doing all the time, so an
AGI will have to, like humans, mostly rely on existing concepts developed by
other people.



Re: [agi] structure of the mind

2007-03-20 Thread rooftop8000

 
 As has been pointed out in this thread (I believe by Goertzel and Hall),
 Minsky's approach in Society of Mind et seq. of adding large numbers
 of systems raises the question: how will these things ever work
 together, and why should the system generalize?

How does adding auditory modules to our brain generalize anything? How
does adding a new inference algorithm generalize anything? Because
you have extra ways to process information, you can extract new
information and build new modules around it.

I don't see how adding information and code can be a bad thing (if you
have enough CPU power); it will just make it more likely for the right
subset to be part of your system.


I criticized it
 from this point of view in What is Thought? One way to try to handle
 the organization then is an economic framework.
 

I thought the obvious equivalent of {economy and money} is information
spreading. If you are a big player, a lot of other modules will
take your outputs (information) and process them, giving you more
influence overall. Useless information won't be further processed and
will be a dead end in the system.




 




Re: [agi] structure of the mind

2007-03-20 Thread Eugen Leitl
On Tue, Mar 20, 2007 at 06:34:25PM +, Russell Wallace wrote:

> wouldn't exist unless it generalized to new experiences. So while
> it's hard to engineer this, which might be called emergence,

It's not emergence, but rather failing gracefully and doing the
right thing.

> you will IMO be forced to if you want to succeed. That is the
> reason why AGI is hard.

There are many reasons why AGI is hard. This is only one of them.

Folks, please use the right quoting style. Not posting HTML-only
is a good start. Levels of whitespace indentation don't cut
the mustard. You have to use '>'.
 
> It's one reason why AGI is hard, and there is truth in what you say.
> However, ab initio search for compact explanation is so hard that we
> humans mostly don't do it because we can't. When we do have to bite

Exhaustive searches are intractable, but if the fitness space has high
diversity in a small ball at each given point of genotype space, and
a neutral fitness network through which individuals can percolate
without suffering dire consequences, you can reach pretty good
solutions without doing the impossible.

And, of course, systems reshaping their fitness landscape in the above
way is the hardest trick they have to do, because they have to
effectively (statistically) brute-force that initial threshold. It's
pretty easy sailing afterwards.
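
A toy illustration of the percolation point (my own construction, a
trivial royal-road fitness function, nothing more): a strict hill
climber that rejects neutral mutations stalls on the plateau, while
one that accepts them drifts along the neutral network to high fitness.

    import random
    random.seed(0)

    L = 40
    def fitness(g):
        # only complete 4-bit blocks of ones score, so almost every
        # single-bit flip is neutral -- a large neutral network
        return sum(all(g[i:i+4]) for i in range(0, L, 4))

    def climb(accept_neutral, steps=20000):
        g = [0] * L
        for _ in range(steps):
            g2 = g[:]
            g2[random.randrange(L)] ^= 1
            f1, f2 = fitness(g), fitness(g2)
            if f2 > f1 or (accept_neutral and f2 == f1):
                g = g2
        return fitness(g)

    print("strict climber :", climb(False))  # stuck at 0; no single flip helps
    print("neutral drifter:", climb(True))   # percolates to near-maximal fitness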

> the bullet and explicitly attempt it, it often takes entire
> communities of geniuses working for decades to produce a result that
> can be boiled down to a few lines. Newton, Darwin, Einstein et al were
> by no means the only ones working on their various problems. Koza has
> an example of the invention of a simple circuit, I think it was the
> negative feedback amplifier or somesuch, you could draw it on the back
> of a cigarette pack, it took a very bright engineer months or years of
> thinking before he cracked it, and there were lots of others trying at
> the same time.

Evolutionary designs typically produce networks with both positive and
negative feedback loops. Miraculously, these are not only stable, but
rather robust. Notice that a mix of positive and negative feedback
loops is an earmark of nonlinear dynamical systems. That evolutionary
algorithms produce just these is not a coincidence. It indicates
nonlinear systems are damn good solutions.

Notice that human designers routinely miss these, and don't even have
the analytical tools to understand these when plunked down in front of
their very noses. What you described is not an isolated occurrence. It
is a typical case.

> What we mostly do is use existing solutions and blends thereof, that
> were developed by our predecessors over millions of lifetimes. Even
> when I'm programming, apparently writing new code, I'm really mostly
> using concepts I learned from other people, tweaking and blending them
> to fit the current context.

I don't view programming as programming, but as state and state
transformations. Everything else is just semantics and syntactic sugar.
And once you realize that you're dealing with a lot of state, and
quite nonlinear transformations, then immediately the source of the
state (somebody typing it in? I don't think so) and the kind of
transformations (written down explicitly? I don't think so) come into
question.

> And an AGI will have to do the same. Yes, it will have to be able to
> bite the bullet and run a full-blown search for a compact solution

Why bite the bullet? Optimisation is where it's all at.

> when necessary. But that's just plain too hard to be doing all the
> time, so an AGI will have to, like humans, mostly rely on existing
> concepts developed by other people.

People, as in not necessarily bipedal-primate people. And of course
this assumes that everything is zero diversity, so you can just drop in
modules and expect them to make sense.

Just for the record of any future readers: not all of us are quite that
silly.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] structure of the mind

2007-03-20 Thread Eric Baum

Russell On 3/20/07, Eric Baum [EMAIL PROTECTED] wrote:
 This is the problem with Wallace's complaints. You actually want
 the machine [to do] something unpredicted, namely the right thing
 in unpredicted circumstances. It's true that it's hard and expensive
 to engineer/find an underlying compact explanation, but it is
 precisely the fact that this very constrained/compact underlying
 program is so improbable that makes it work! The arguments for its
 working in fact *rest exactly* on the fact that it is so
 improbable that it wouldn't exist unless it generalized to new
 experiences. So while it's hard to engineer this, which might be
 called emergence, you will IMO be forced to if you want to
 succeed. That is the reason why AGI is hard.
 

Russell It's one reason why AGI is hard, and there is truth in what
Russell you say.

Russell However, ab initio search for compact explanation is so hard
Russell that we humans mostly don't do it because we can't. When we
Russell do have to bite the bullet and explicitly attempt it, it
Russell often takes entire communities of geniuses working for
Russell decades to produce a result that can be boiled down to a few
Russell lines.  Newton, Darwin, Einstein et al were by no means the
Russell only ones working on their various problems. Koza has an
Russell example of the invention of a simple circuit, I think it was
Russell the negative feedback amplifier or somesuch, you could draw
Russell it on the back of a cigarette pack, it took a very bright
Russell engineer months or years of thinking before he cracked it,
Russell and there were lots of others trying at the same time.

Russell What we mostly do is use existing solutions and blends
Russell thereof, that were developed by our predecessors over
Russell millions of lifetimes.

Don't forget the investment of effort by evolution, which was far
greater still.

Russell Even when I'm programming, apparently writing new code, I'm
Russell really mostly using concepts I learned from other people,
Russell tweaking and blending them to fit the current context.

Russell And an AGI will have to do the same. Yes, it will have to be
Russell able to bite the bullet and run a full-blown search for a
Russell compact solution when necessary. But that's just plain too
Russell hard to be doing all the time, so an AGI will have to, like
Russell humans, mostly rely on existing concepts developed by other
Russell people.

Oh absolutely. What's hard, and has to be faced, is designing the AGI.



Re: [agi] structure of the mind

2007-03-20 Thread Eric Baum

 As has been pointed out in this thread (I believe by Goertzel and
 Hall), Minsky's approach in Society of Mind et seq. of adding large
 numbers of systems raises the question: how will these things
 ever work together, and why should the system generalize?

rooftop How does adding auditory modules to our brain generalize
rooftop anything? How does adding a new inference algorithm generalize
rooftop anything? Because you have extra ways to process information,
rooftop you can extract new information and build new modules around
rooftop it.

rooftop I don't see how adding information and code can be a bad
rooftop thing (if you have enough CPU power); it will just make it
rooftop more likely for the right subset to be part of your system.

An AGI does something specific. Adding a new inference algorithm to
it, aside from slowing it down, can also make it do the wrong thing.
Especially when it's a program for something as complicated as
cognition.

It's not that I'm against modules.

rooftop I criticized it
 from this point of view in What is Thought? One way to try to
 handle the organization then is an economic framework.
 

rooftop I thought the obvious equivalent of {economy and money} is
rooftop information spreading. If you are a big player, a lot of
rooftop other modules will take your outputs (information) and
rooftop process it, giving you more influence overall. Useless
rooftop information won't be further processed and will be a dead end
rooftop in the system

Well, I'm not sure what particular algorithm you are referring to
here. Something has to decide who is a big player and which
information is useless. Minsky doesn't describe how that's done
anywhere that I know.

The Soviet Union tried to run its economy by central management,
and it didn't work in part because the center doesn't have the
information to decide what needs to be done. That information is
carried in a free market economy by prices, and it's basically
unavailable to the central planners, who therefore can't readily get
the economy right.
In the mind you might imagine a central unit which effectively
understands what the price structure should be because it was created
by evolution, but it's by no means clear how to get it right in an AI
unless you have something like prices.
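(This is of course Friedrich Hayek's old point in "The Use of
Knowledge in Society" -- prices aggregate dispersed local information
that no central planner can collect -- which is where the Hayek system
takes its name.)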



 



Re: [agi] structure of the mind

2007-03-20 Thread Mark Waser

   I think that the concept that many of you are struggling to voice is
   Credit attribution is a really hard problem in AGI.  Market 
economies solve that problem (with various difficulties, but . . . . :-) 





[agi] structure of the mind

2007-03-19 Thread J. Storrs Hall, PhD.
On Monday 19 March 2007 17:30, Ben Goertzel wrote:
...
 My own view these days is that a wild combination of agents is
 probably not the right approach, in terms of building AGI.

 Novamente consists of a set of agents that have been very carefully
 sculpted to work together in such a way as to (when fully implemented
 and tuned) give rise to the right overall emergent structures.

There is one way you can form a coherent, working system from a congeries of 
random agents: put them in a marketplace. This has a fairly rigorous 
discipline of its own and most of them will not survive... and of course the 
system has to have some way of coming up with new ones that will. 

And before any of that happens, the agents have to be able to communicate, if 
not in a unified language, a language with one universal concept: money. (It 
doesn't matter if I don't understand your ad -- I'll just do business with 
whomever I do understand. But the money I pay them has to be good for them to 
pay the next guy, whom I don't understand.)

One metaphor I've found helpful thinking about the mind is to realize that it 
was formed by evolution and thus probably has many of the same higher-level 
architectural properties as the body. The body has a relatively 
straightforward higher-level structure (e.g. bones and muscles in the 
hundreds) that gets very hairy as you get down to the biochemical level. I'm 
sure the brain does too. What we're all hoping here is that we can capture 
the essence of intelligence at the high level, like simulating the skeleton 
and muscles in a walking robot, while being able to substitute the hairy 
substrate of current-day computers for the hairy parts of the neural 
architecture. 

The body has major organs that are very special-purpose -- heart,
lungs, liver -- but also ones that are as general-purpose as anything
nature ever made (hands). A big chunk of the brain clearly goes to
forming and maintaining a world model, Brooks and the
robo-know-nothings to the contrary notwithstanding. Think of that as
like legs -- special-purpose in one sense, but general in a higher one
(they're mainly for locomotion, but they'll take you anywhere).

There's also a hands part of the mind -- that which lets us grasp
ideas. Most of the special-purpose parts can be finessed one way or
another: wheels for legs, wires for veins, what have you. But it needs
hands that are like hands. Show me a robot that can do cat's cradle,
make a whistle of its fists, do shadow pictures on the wall, and make
an omelette from whole eggs, and you've got something that's not just
a toy.

Same with the AGI. You can finesse most of the special-purpose parts -- just 
build something to do the function, which is basically what engineering is 
about. But universal machines are an arcane and little-studied phenomenon -- 
one can't even say field. There's no design methodology for a machine that 
can do anything. That's what hands are, and we've got something like them in 
our minds.

Josh

  Greenspun's Tenth Rule of Programming: any sufficiently complicated C
or Fortran program contains an ad hoc, informally-specified, bug-ridden,
slow implementation of half of Common Lisp.

- Philip Greenspun



Re: [agi] structure of the mind

2007-03-19 Thread Ben Goertzel

J. Storrs Hall, PhD. wrote:

On Monday 19 March 2007 17:30, Ben Goertzel wrote:
...
  

My own view these days is that a wild combination of agents is
probably not the right approach, in terms of building AGI.

Novamente consists of a set of agents that have been very carefully
sculpted to work together in such a way as to (when fully implemented
and tuned) give rise to the right overall emergent structures.



There is one way you can form a coherent, working system from a congeries of 
random agents: put them in a marketplace. This has a fairly rigorous 
discipline of its own and most of them will not survive... and of course the 
system has to have some way of coming up with new ones that will. 


In principle, yeah, this can work.  

But we have to remember that the biggest problem of AGI is dealing
with severe computational resource limitations (and the brain's
resources are also to be considered severely limited, compared to what
naive computational algorithms could easily consume, mathematically
speaking).


The question is whether a virtual marketplace is a viable approach to
AGI, in terms of computational expense...

For instance, Baum's Hayek is an innovative and exciting use of
economics in an AI learning context, yet the approach seems not to be
scalable into anything resembling an AGI architecture.


Novamente uses economic ideas in some aspects, but mainly just for
allocation of attention (system resources) among different internal
processes.
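
As a cartoon of what I mean -- hypothetical, and not the actual
Novamente mechanism: internal processes hold currency, CPU slices are
granted in proportion to bids, and funds are replenished according to
measured usefulness, so attention flows toward processes that keep
paying off. The process names and usefulness numbers are invented for
the example:

    import random
    random.seed(2)

    class Process:
        def __init__(self, name, usefulness):
            self.name = name
            self.usefulness = usefulness  # chance a slice yields value
            self.funds = 10.0
        def bid(self):
            return 0.1 * self.funds       # spend a fixed fraction per round

    procs = [Process("perception", 0.8),
             Process("inference", 0.5),
             Process("daydreaming", 0.1)]

    for _ in range(2000):
        # grant the CPU slice probabilistically, in proportion to bids
        p = random.choices(procs, weights=[q.bid() for q in procs])[0]
        p.funds -= p.bid()
        if random.random() < p.usefulness:
            p.funds += 1.0                # payment for demonstrably useful work

    for p in procs:
        print(f"{p.name:12s} funds={p.funds:6.1f}")

Even in this cartoon the auction bookkeeping itself costs cycles --
the computational-expense worry again.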

My strong intuitive feeling is that using a virtual marketplace to
originate a coherent working system from a congeries of random agents
would not be computationally feasible.  This, to me, falls into the
same general category as "build a primordial soup and let Alife and
then AI evolve from it."  Yes, these things can work given enough
resources.  But the resource requirements are way higher than for more
direct engineering-oriented approaches.

The brain may well involve some economics-ish dynamics.  Energy
minimization and energy conservation certainly share some common
factors with profit maximization and money conservation.  However, I
really doubt the brain relies on emergent market dynamics to enable
interoperation of its various components.  The interoperation of the
components was originated via evolution, and is merely tuned and
slightly adjusted by brain dynamics during the life of the organism
(quasi-economic or otherwise).

-- Ben G
