Re: [agi] A Course on Foundations of Theoretical Psychology...

2007-04-15 Thread Joshua Fox יהושע פוקס

I'll second that.

I'd love to have the many fields necessary for AGI neatly summarized for me
-- or should I say "spoon-fed to me" :-)

This can come in the form of a book, a good website, an online course, or an
"onlined" course with video and lecture notes.

Joshua



2007/4/13, Ryan McCall <[EMAIL PROTECTED]>:


Would it be online? You have my interest...

On 4/13/07, Richard Loosemore < [EMAIL PROTECTED]> wrote:
>
>
>
>
> I wonder...
>
> How many people on this list would actually go to the trouble, if they
> could, of signing up for a truly comprehensive course in the foundations
> of AI/CogPsy/Neuroscience, which would give them a grounding in all of
> these fields and put them in a position where they had a unified picture
> of what kind of skills would be needed to build an AGI?
>
> I am sure the people who already have established careers would not be
> interested, but what of the people who are burning with passion to see
> some real progress happen in AGI, and don't know where to put their
> energies?
>
> What if I organised a summer school to do such a thing?
>
> This is just my spontaneous first thought about the idea.  If there was
> enough initial interest, I would be happy to scope it out more
> thoroughly.
>
>
>
>
>
> Richard Loosemore
>



--
Ryan J. McCall
--




[agi] Low I.Q. AGI

2007-04-15 Thread Eric B. Ramsay
There is an easy assumption among most writers on this board that once the AGI 
exists, its route to becoming a singularity is a sure thing. Why is that? In 
humans there is a wide range of "smartness" in the population. People face 
intellectual thresholds that they cannot cross because they just do not have 
enough of this smartness thing. Although as a physicist I understand General 
Relativity, I really doubt that if it had been left up to me it would ever 
have been discovered - no matter how much time I was given. Do neuroscientists 
know where this talent difference comes from in terms of brain structure? Where 
in the designs for other AGI (Ben's for example) is the smartness of the AGI 
designed in? I can see how an awareness may bubble up from a design but this 
doesn't mean a system smart enough to move itself towards being a singularity. 
Even if you feed the system all the information in the world, it would know a 
lot but not be any smarter or even know how to make itself smarter. How many 
years of training will we give a brand new AGI before we decide it's retarded?


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Pei Wang

On 4/15/07, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:

There is an easy assumption among most writers on this board that once the AGI
exists, its route to becoming a singularity is a sure thing.


I'm not sure whether this assumption is really shared by most of the
people here. At least I don't think AGI leads to singularity, though
my reason is not the same as yours.

Pei


Why is that?
In humans there is a wide range of "smartness" in the population. People
face intellectual thresholds that they cannot cross because they just do not
have enough of this smartness thing. Although as a physicist I understand
General Relativity, I really doubt that if it had been left up to me it
would ever have been discovered - no matter how much time I was given. Do
neuroscientists know where this talent difference comes from in terms of
brain structure? Where in the designs for other AGI (Ben's for example) is
the smartness of the AGI designed in? I can see how an awareness may bubble
up from a design but this doesn't mean a system smart enough to move itself
towards being a singularity. Even if you feed the system all the information
in the world, it would know a lot but not be any smarter or even know how to
make itself smarter. How many years of training will we give a brand new AGI
before we decide it's retarded?




RE: [agi] Low I.Q. AGI

2007-04-15 Thread Peter Voss
Pei, what are yours?

-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED] 
Sent: Sunday, April 15, 2007 8:32 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Low I.Q. AGI

On 4/15/07, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
> There is an easy assumption among most writers on this board that once the
> AGI exists, its route to becoming a singularity is a sure thing.

I'm not sure whether this assumption is really shared by most of the
people here. At least I don't think AGI leads to singularity, though
my reason is not the same as yours.

Pei

> Why is that?
> In humans there is a wide range of "smartness" in the population. People
> face intellectual thresholds that they cannot cross because they just do
> not have enough of this smartness thing. Although as a physicist I understand
> General Relativity, I really doubt that if it had been left up to me it
> would ever have been discovered - no matter how much time I was given. Do
> neuroscientists know where this talent difference comes from in terms of
> brain structure? Where in the designs for other AGI (Ben's for example) is
> the smartness of the AGI designed in? I can see how an awareness may
> bubble up from a design but this doesn't mean a system smart enough to move
> itself towards being a singularity. Even if you feed the system all the
> information in the world, it would know a lot but not be any smarter or even
> know how to make itself smarter. How many years of training will we give a
> brand new AGI before we decide it's retarded?



Re: [agi] Low I.Q. AGI

2007-04-15 Thread Pei Wang

Peter,

Many of the arguments that "AGI will necessarily lead to singularity"
that I've seen are based on either simple extrapolation from history or
a wrong conception of intelligence (i.e., the assumption that an AGI
will necessarily know how to design a better AGI).

Of course I cannot prove that singularity is impossible. However, I
think we have too little evidence to talk about it seriously, and to
bundle the notion of singularity with AGI will not serve us well at
the current time.

Therefore, I'd rather discuss AGI than singularity. ;-)

Pei

On 4/15/07, Peter Voss <[EMAIL PROTECTED]> wrote:

Pei, what are yours?

-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Sunday, April 15, 2007 8:32 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Low I.Q. AGI

On 4/15/07, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
> There is an easy assumption among most writers on this board that once the
> AGI exists, its route to becoming a singularity is a sure thing.

I'm not sure whether this assumption is really shared by most of the
people here. At least I don't think AGI leads to singularity, though
my reason is not the same as yours.

Pei

> Why is that?
> In humans there is a wide range of "smartness" in the population. People
> face intellectual thresholds that they cannot cross because they just do
> not have enough of this smartness thing. Although as a physicist I understand
> General Relativity, I really doubt that if it had been left up to me it
> would ever have been discovered - no matter how much time I was given. Do
> neuroscientists know where this talent difference comes from in terms of
> brain structure? Where in the designs for other AGI (Ben's for example) is
> the smartness of the AGI designed in? I can see how an awareness may
> bubble up from a design but this doesn't mean a system smart enough to move
> itself towards being a singularity. Even if you feed the system all the
> information in the world, it would know a lot but not be any smarter or even
> know how to make itself smarter. How many years of training will we give a
> brand new AGI before we decide it's retarded?






Re: [agi] Low I.Q. AGI

2007-04-15 Thread Richard Loosemore

Eric B. Ramsay wrote:
There is an easy assumption among most writers on this board that once the 
AGI exists, its route to becoming a singularity is a sure thing. Why is 
that? In humans there is a wide range of "smartness" in the population. 
People face intellectual thresholds that they cannot cross because they 
just do not have enough of this smartness thing. Although as a 
physicist I understand General Relativity, I really doubt that if it had 
been left up to me it would ever have been discovered - no matter 
how much time I was given. Do neuroscientists know where this talent 
difference comes from in terms of brain structure? Where in the designs 
for other AGI (Ben's for example) is the smartness of the AGI designed 
in? I can see how an awareness may bubble up from a design but this 
doesn't mean a system smart enough to move itself towards being a 
singularity. Even if you feed the system all the information in the 
world, it would know a lot but not be any smarter or even know how to 
make itself smarter. How many years of training will we give a brand new 
AGI before we decide it's retarded?


Eric,

I am going to address your question, as well as Pei's response that 
there should not really be a direct relationship between AGI and the 
Singularity.


In the course of building an AGI, we (the designers of the AGI) will 
have to understand a great deal about what makes an intelligence tick. 
By the time we get anything working at all, we will know a lot more 
about the workings of intelligence than we do now.


Now, our first attempts to build a full intelligence will very probably 
result in many test systems that have a "low IQ" -- a system that is not 
capable of being as smart as its designers.


If we were standing in front of a human with that kind of low IQ, we 
would face a long, hard job (and in some cases, an impossible job) to 
improve their intelligence.  But that is most emphatically not the case 
with a low-IQ AGI prototype.  At the very least, we would be able to 
inspect the system during actual thinking episodes, in order to get 
clues about what goes right and what goes wrong.


So, combining the knowledge we will have acquired during the design 
phase with the vast amount of performance data available during 
prototype phase, there are ample opportunities for us to improve the 
design.  Specifically, we will try to find out what ingredients are 
needed to make the system extremely creative.  (As well as extremely 
balanced and friendly, of course).


By this means, I believe there would be no substantial obstacles to our 
getting the system up to the average human level of performance.  I 
cannot guarantee this, of course, but there are no in-principle reasons 
why not.  In fact, there are no reasons why we should not be able to get 
it up to a superhuman level of performance just by our own R&D efforts 
(some people seem to think that there is something inherently impossible 
about a human being able to design something smarter than itself, but 
that idea is really just science-fiction hearsay, not grounded in any 
real limitations).


Okay, so if we assume that we can build a roughly-human-level 
intelligence, what next?  The next phase is again very different to the 
case of having a human genius hanging around.  [Aside.  By 'genius' I 
just mean 'very bright compared with average' - I don't mean 'person 
with magically superhuman powers of intelligence and creativity'].  This 
system will be capable of being augmented in a number of ways that are 
simply not possible with humans.  Pure physical technology advances will 
promise the uploading of the original system into faster hardware ... so 
even if we and it NEVER did another stroke of work to improve its 
intelligence, we might find that it would get faster every time an 
electronic hardware upgrade became available.  After a few years, it 
might be able to operate a thousand times faster than humans purely 
because of this factor.


Second factor:  Duplication.  The original AGI (with full adult 
intelligence) could be duplicated in such a way that, for every genius 
machine we produce, we could build a thousand copies and get them 
(persuade them) to work together as a team.  That is significant:  human 
geniuses are rare, so what would happen if we could take an adult 
Einstein and quickly make a thousand similar brains?  Never possible 
with humans:  entirely feasible with a smart AGI.


Third factor:  Communication bandwidth.  This huge team of genius AGIs 
would be able to talk to each other at rates that we can hardly even 
imagine.  Human teams tend to suffer from problems when they become too 
large:  some of those problems could be overcome because the AGI team 
would all (effectively) be in 'telepathic' contact with one another ... 
able to exchange ideas and inspiration without having to go through 
managers and committee meetings.  Result:  the AGI team of a thousand 
geniuses would be able to work at

Re: [agi] Low I.Q. AGI

2007-04-15 Thread Benjamin Goertzel

I find that I agree with nearly all of Loosemore's comments in his reply...

I certainly agree with Pei that, in terms of spreading the AGI meme
among researchers in academia and industry, focusing on the
Singularity aspect is not good marketing.

And, as a matter of pragmatic time-management, I am spending most of
my "AGI R&D time" working on actually getting to the point of
achieving advanced artificial cognition, rather than thinking about
how to make an advanced AGI yet more advanced.  (Though I do agree
with Eliezer Yudkowsky and others that it is important to think about
the ethics of advanced AGI's now, in advance of constructing them; and
that one wants to think very deeply before creating an AGI that has
significant potential to rapidly accelerate its own intelligence
beyond the human level.)

But, all these issues aside, I am close to certain that once we have a
near-human-level AGI, then -- if we choose to effect a transition to
superhuman-level AI -- it won't be a huge step to do so.

And, I am close to certain that once we have a superhuman-level AGI, a
host of other technologies like strong nanotech, genetic engineering,
quantum computing etc. etc. will follow.

Of course, this is all speculation and plenty of unknown things could
go wrong.  But, to me, the logic in favor of the above conclusions
seems pretty solid.

-- Ben G

On 4/15/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Eric B. Ramsay wrote:
> There is an easy assumption among most writers on this board that once the
> AGI exists, its route to becoming a singularity is a sure thing. Why is
> that? In humans there is a wide range of "smartness" in the population.
> People face intellectual thresholds that they cannot cross because they
> just do not have enough of this smartness thing. Although as a
> physicist I understand General Relativity, I really doubt that if it had
> been left up to me it would ever have been discovered - no matter
> how much time I was given. Do neuroscientists know where this talent
> difference comes from in terms of brain structure? Where in the designs
> for other AGI (Ben's for example) is the smartness of the AGI designed
> in? I can see how an awareness may bubble up from a design but this
> doesn't mean a system smart enough to move itself towards being a
> singularity. Even if you feed the system all the information in the
> world, it would know a lot but not be any smarter or even know how to
> make itself smarter. How many years of training will we give a brand new
> AGI before we decide it's retarded?

Eric,

I am going to address your question, as well as Pei's response that
there should not really be a direct relationship between AGI and the
Singularity.

In the course of building an AGI, we (the designers of the AGI) will
have to understand a great deal about what makes an intelligence tick.
By the time we get anything working at all, we will know a lot more
about the workings of intelligence than we do now.

Now, our first attempts to build a full intelligence will very probably
result in many test systems that have a "low IQ" -- a system that is not
capable of being as smart as its designers.

If we were standing in front of a human with that kind of low IQ, we
would face a long, hard job (and in some cases, an impossible job) to
improve their intelligence.  But that is most emphatically not the case
with a low-IQ AGI prototype.  At the very least, we would be able to
inspect the system during actual thinking episodes, in order to get
clues about what goes right and what goes wrong.

So, combining the knowledge we will have acquired during the design
phase with the vast amount of performance data available during
prototype phase, there are ample opportunities for us to improve the
design.  Specifically, we will try to find out what ingredients are
needed to make the system extremely creative.  (As well as extremely
balanced and friendly, of course).

By this means, I believe there would be no substantial obstacles to our
getting the system up to the average human level of performance.  I
cannot guarantee this, of course, but there are no in-principle reasons
why not.  In fact, there are no reasons why we should not be able to get
it up to a superhuman level of performance just by our own R&D efforts
(some people seem to think that there is something inherently impossible
about a human being able to design something smarter than itself, but
that idea is really just science-fiction hearsay, not grounded in any
real limitations).

Okay, so if we assume that we can build a roughly-human-level
intelligence, what next?  The next phase is again very different to the
case of having a human genius hanging around.  [Aside.  By 'genius' I
just mean 'very bright compared with average' - I don't mean 'person
with magically superhuman powers of intelligence and creativity'].  This
system will be capable of being augmented in a number of ways that are
simply not possible with humans.  Pure physical t

Re: [agi] Low I.Q. AGI

2007-04-15 Thread Pei Wang

I actually agree with most of what Richard and Ben said, that is, we
can create AI that is "more intelligent", in some sense, than human
beings --- that is also what I've been working on.

However, to me "Singularity" is a stronger claim than "superhuman
intelligence". It implies that the intelligence of AI will increase
exponentially, to a point that is shorter than what we can perceive or
understand. That is what I'm not convinced.

Pei

On 4/15/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

I find that I agree with nearly all of Loosemore's comments in his reply...

I certainly agree with Pei that, in terms of spreading the AGI meme
among researchers in academia and industry, focusing on the
Singularity aspect is not good marketing.

And, as a matter of pragmatic time-management, I am spending most of
my "AGI R&D time" working on actually getting to the point of
achieving advanced artificial cognition, rather than thinking about
how to make an advanced AGI yet more advanced.  (Though I do agree
with Eliezer Yudkowsky and others that it is important to think about
the ethics of advanced AGI's now, in advance of constructing them; and
that one wants to think very deeply before creating an AGI that has
significant potential to rapidly accelerate its own intelligence
beyond the human level.)

But, all these issues aside, I am close to certain that once we have a
near-human-level AGI, then -- if we choose to effect a transition to
superhuman-level AI -- it won't be a huge step to do so.

And, I am close to certain that once we have a superhuman-level AGI, a
host of other technologies like strong nanotech, genetic engineering,
quantum computing etc. etc. will follow.

Of course, this is all speculation and plenty of unknown things could
go wrong.  But, to me, the logic in favor of the above conclusions
seems pretty solid.

-- Ben G

On 4/15/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Eric B. Ramsay wrote:
> > There is an easy assumption among most writers on this board that once the
> > AGI exists, its route to becoming a singularity is a sure thing. Why is
> > that? In humans there is a wide range of "smartness" in the population.
> > People face intellectual thresholds that they cannot cross because they
> > just do not have enough of this smartness thing. Although as a
> > physicist I understand General Relativity, I really doubt that if it had
> > been left up to me it would ever have been discovered - no matter
> > how much time I was given. Do neuroscientists know where this talent
> > difference comes from in terms of brain structure? Where in the designs
> > for other AGI (Ben's for example) is the smartness of the AGI designed
> > in? I can see how an awareness may bubble up from a design but this
> > doesn't mean a system smart enough to move itself towards being a
> > singularity. Even if you feed the system all the information in the
> > world, it would know a lot but not be any smarter or even know how to
> > make itself smarter. How many years of training will we give a brand new
> > AGI before we decide it's retarded?
>
> Eric,
>
> I am going to address your question, as well as Pei's response that
> there should not really be a direct relationship between AGI and the
> Singularity.
>
> In the course of building an AGI, we (the designers of the AGI) will
> have to understand a great deal about what makes an intelligence tick.
> By the time we get anything working at all, we will know a lot more
> about the workings of intelligence than we do now.
>
> Now, our first attempts to build a full intelligence will very probably
> result in many test systems that have a "low IQ" -- a system that is not
> capable of being as smart as its designers.
>
> If we were standing in front of a human with that kind of low IQ, we
> would face a long, hard job (and in some cases, an impossible job) to
> improve their intelligence.  But that is most emphatically not the case
> with a low-IQ AGI prototype.  At the very least, we would be able to
> inspect the system during actual thinking episodes, in order to get
> clues about what goes right and what goes wrong.
>
> So, combining the knowledge we will have acquired during the design
> phase with the vast amount of performance data available during
> prototype phase, there are ample opportunities for us to improve the
> design.  Specifically, we will try to find out what ingredients are
> needed to make the system extremely creative.  (As well as extremely
> balanced and friendly, of course).
>
> By this means, I believe there would be no substantial obstacles to our
> getting the system up to the average human level of performance.  I
> cannot guarantee this, of course, but there are no in-principle reasons
> why not.  In fact, there are no reasons why we should not be able to get
> it up to a superhuman level of performance just by our own R&D efforts
> (some people seem to think that there is something inherently impossible
> about a human being able

Re: [agi] Low I.Q. AGI

2007-04-15 Thread Eugen Leitl
On Sun, Apr 15, 2007 at 02:06:52PM -0400, Pei Wang wrote:

> I actually agree with most of what Richard and Ben said, that is, we
> can create AI that is "more intelligent", in some sense, than human
> beings --- that is also what I've been working on.

The ability to instantiate vanilla experts at the drop of a hat (without
having to spend some 30 years and megabucks, with a 98% failure rate)
is a major advantage already. In fact, automation about as smart as
an insect can completely transform transportation, manufacturing, the military,
and a few other disciplines, leaving people to deal with more
worthwhile tasks (specialization is for insects).
 
> However, to me "Singularity" is a stronger claim than "superhuman
> intelligence". It implies that the intelligence of AI will increase
> exponentially, to a point that is shorter than what we can perceive or

A worm can 0wn the entire planetary infrastructure within minutes.
http://www.caida.org/publications/papers/2003/sapphire/sapphire.html

"The Sapphire Worm was the fastest computer worm in history. As it 
began spreading throughout the Internet, it doubled in size every 
8.5 seconds. It infected more than 90 percent of vulnerable hosts 
within 10 minutes."

An initial AI will be maladapted - meaning it will take a lot fewer
resources to run than to bootstrap. Additionally, annexing online
resources by remote exploits of known and unknown vulnerabilities
allows very short doubling times (even considering the agent's size).
There's not a lot of hardware online right now, and the bandwidth/latency
is rather poor, especially at the edges, but in 50, 100 years...
I see no reason why a distant successor of a game console won't have
enough resources for a human equivalent. There will be many billions
of such nodes on the network, maybe trillions.

It's quite easy to see that dedicated hardware could do in ~ns
what biology does in ~ms, which is six orders of magnitude apart.
A million days is about three kiloyears. Even if one is not
religious about linear semi-log plots, at the current rate of progress
acceleration, a few subjective megayears times a few billion human
equivalents is nothing to sneeze at.
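
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch
in Python (the 8.5-second doubling time is the CAIDA figure quoted above; the
~10^6 ns-vs-ms speedup is the assumption from the previous paragraph, not a
measurement):

    # Back-of-the-envelope arithmetic for the two claims above (assumptions, not measurements).
    doubling_time_s = 8.5                      # Sapphire/Slammer doubling time (CAIDA report)
    doublings = 10 * 60 / doubling_time_s      # ~70 doublings in the first 10 minutes
    print(f"growth factor after 10 min: {2 ** doublings:.1e}")             # ~1e21

    speedup = 1e-3 / 1e-9                      # ~ms biology vs ~ns hardware = 1e6
    print(f"1 real day at {speedup:.0e}x ~= {speedup / 365.25:.0f} subjective years")  # ~2.7 kiloyears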

Things are slower on the physical layer, but not that much slower.
I could see hardware doubling times in hours to days - as in built
from scratch, not acquired by remote annexation. That might feel slow
to a system for which the speed of light is subjectively about the speed
of sound, but for us normal folks it could well be a) terrifying, b) terminal.

> understand. That is what I'm not convinced of.

Do you think a dog has a good understanding of your daily activities?
How about a field mouse? A cyanobacterium?

Why should the current status quo be the crown of evolutionary
infoprocessing achievement? 
 
> Pei
> 
-- 
Eugen* Leitl  http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Low I.Q. AGI

2007-04-15 Thread Eugen Leitl
On Sun, Apr 15, 2007 at 07:40:03AM -0700, Eric B. Ramsay wrote:

>There is an easy assumption among most writers on this board that once
>the AGI exists, its route to becoming a singularity is a sure thing.

The singularity is just a rather arbitrary cutoff on the advancing
horizon of predictability. We're soaking in a process with multiple
positive feedback loops right now. You'll never notice you've
passed the Schwarzschild radius when falling into Sagittarius A* either.

>Why is that? In humans there is a wide range of "smartness" in
>the population. People face intellectual thresholds that they cannot

But you can't pick out the smart ones and make a few million copies
of them when you have a nice personal project.

>cross because they just do not have enough of this smartness thing.
>Although as a physicist I understand General Relativity, I really
>doubt that if it had been left up to me it would ever have been
>discovered - no matter how much time I was given. Do neuroscientists

A dog running for a million years and never discovering GR is the more
canonical example. But it takes a minimal level of intelligence to start
manipulating intelligence.

>know where this talent difference comes from in terms of brain

You could scan the vitrified brain of a freshly dead, cryonically
suspended expert at arbitrary resolution. The information is certainly in
there.

>structure? Where in the designs for other AGI (Ben's for example) is
>the smartness of the AGI designed in? I can see how an awareness may

Let's say I give you a knob which would slowly mushroom your neocortex,
just inserting new neurons between the existing ones. Do you think you would
notice something after a few years?

>bubble up from a design but this doesn't mean a system smart enough to
>move itself towards being a singularity. Even if you feed the system

Evolution is dumb as a rock, yet it produced you, who are capable of producing
that string of symbols, distributed across a planet and understood by similarly
constructed systems. We can certainly do what evolution did, and maybe a bit
more.

>all the information in the world, it would know a lot but not be any
>smarter or even know how to make itself smarter. How many years of
>training will we give a brand new AGI before we decide it's retarded?

How about a self-selecting population of a few trillion? The cybervillage
idiots will never even be a blip on the screen.

-- 
Eugen* Leitl  http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Low I.Q. AGI

2007-04-15 Thread Pei Wang

On 4/15/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:

Do you think a dog has a good understanding of your daily activities?
How about a field mouse? A cyanobacterium?


I don't think so. However, when we understand "intelligence" well
enough to build an AGI, we will be able to understand in principle how
a superhuman intelligence works, though we cannot predict or explain
its individual actions.


Why should the current status quo be the crown of evolutionary
infoprocessing achievement?


Did I suggest that it should be the case? I thought I said the
opposite in my message. However, to say "intelligence will continue to
evolve" and "there will be a moment after which things will completely
go beyond our understanding" are not the same.

Pei



Re: [agi] Low I.Q. AGI

2007-04-15 Thread Jef Allbright

On 4/15/07, Pei Wang <[EMAIL PROTECTED]> wrote:

I actually agree with most of what Richard and Ben said, that is, we
can create AI that is "more intelligent", in some sense, than human
beings --- that is also what I've been working on.

However, to me "Singularity" is a stronger claim than "superhuman
intelligence". It implies that the intelligence of AI will increase
exponentially, to a point that is shorter than what we can perceive or
understand. That is what I'm not convinced.


I too generally agree with the improving intelligence scenario Richard
described, but would like to point out a rarely appreciated aspect:

While such a machine intelligence will quickly far exceed human
capabilities, from its own perspective it will rapidly hit a wall due
to having exhausted all opportunities for effective interaction with
its environment.  It could then explore an open-ended possibility
space à la Schmidhuber, but such exploration will become increasingly
detached from "intelligence" in any effective sense.

- Jef


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Benjamin Goertzel

Pei,

A key point is that, unlike a human, a well-architected AGI should be able
to easily increase its intelligence via adding memory, adding faster
processors, adding more processors, and so forth.  As well as by analyzing
its own processes and their flaws with far more accuracy than any near-term
brain scan...

However, to say "intelligence will continue to


evolve" and "there will be a moment after which things will completely
go beyond our understanding" are not the same.




True, they're not the same

It is a reasonable hypothesis that AGIs created by humans will find
themselves unable -- even after a lot of self-study and a lot of hardware
augmentation -- to dramatically transcend the human level of
intelligence. I.e., the idea of human-created algorithms bootstrapping
beyond the human level could be infeasible.  This seems highly unlikely to
me, but I can't say it's an idiotic hypothesis.

Is the above the hypothesis you're making?

Or are you doubting that a massively superhuman intelligence would be beyond
the scope of understanding of ordinary, unaugmented humans?

Ben


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Pei Wang

On 4/15/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:


Pei,

A key point is that, unlike a human, a well-architected AGI should be able
to easily increase its intelligence via adding memory, adding faster
processors, adding more processors, and so forth.  As well as by analyzing
its own processes and their flaws with far more accuracy than any near-term
brain scan...


Sure, these factors will increase the system's capability, though not
change its working principle.


 However, to say "intelligence will continue to
> evolve" and "there will be a moment after which things will completely
> go beyond our understanding" are not the same.


True, they're not the same

It is a reasonable hypothesis that AGIs created by humans will find
themselves unable -- even after a lot of self-study and a lot of hardware
augmentation -- to dramatically transcend the human level of
intelligence. I.e., the idea of human-created algorithms bootstrapping
beyond the human level could be infeasible.  This seems highly unlikely to
me, but I can't say it's an idiotic hypothesis.

Is the above the hypothesis you're making?


Not exactly.

My points are:

(1) AGI can be more intelligent than humans in a certain sense, but it
should still be understandable in principle.

(2) Intelligence in AGI will continue to improve, driven both by humans and
by AGI, but it will still take time. There is no reason to believe that
the time will be infinitely short.


Or are you doubting that a massively superhuman intelligence would be beyond
the scope of understanding of ordinary, unaugmented humans?


It depends on what you mean by "understanding" --- the general
principle or concrete behaviors.

Pei


Ben





 



Re: [agi] Low I.Q. AGI

2007-04-15 Thread Benjamin Goertzel



My points are:

(1) AGI can be more intelligent than humans in a certain sense, but it
should still be understandable in principle.




The AGI systems humans create will be understandable by humans in principle.

Agreed.

But let's call these AGI0

Then, AGI0 will create AGI1, which will be understandable by AGI0 in
principle...

And, AGI1 will create AGI2, which will be understandable by AGI1 in
principle...

etc.

At what point will AGI_n be no longer understandable by humans in
principle?? -- where by "understand in principle", I mean "understand in
principle, given realistic bounds on the time and memory resources used to
carry out this understanding"


(2) Intelligence in AGI will continue to improve, driven both by humans and
by AGI, but it will still take time. There is no reason to believe that
the time will be infinitely short.




Not infinitely short, unless current physics is badly wrong in certain
relevant respects.

But if AGI1 can think 1,000 times faster than a human,
maybe AGI2 will be able to think 10,000 times as fast, etc.

An infinite rate is not necessary for the result to be incomprehensibly
rapid as compared to the human brain.
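
To put rough numbers on this (a minimal sketch in Python; the 1,000x starting
point is the hypothetical figure above, and the 10x gain per generation is an
equally hypothetical assumption, not a prediction):

    # Hypothetical compounding of per-generation speedups (illustrative numbers only).
    speedup = 1_000.0                     # assumed AGI1 thinking speed vs. a human
    for n in range(1, 6):
        print(f"AGI{n}: {speedup:>12,.0f}x human speed "
              f"= {speedup:,.0f} subjective years of thought per calendar year")
        speedup *= 10                     # assumed gain per AGI generation
    # The rate never becomes infinite, yet within a few generations each calendar
    # year holds more subjective research time than all of recorded human history.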



> Or are you doubting that a massively superhuman intelligence would be
> beyond the scope of understanding of ordinary, unaugmented humans?

It depends on what you mean by "understanding" --- the general
principle or concrete behaviors.



My hypothesis is that for large n, AGI_n as defined above will likely obey
general
principles that humans are not able to understand assuming reasonable time
and memory constraints on their understanding process.

-- Ben G


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Pei Wang

On 4/15/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:


>
> My points are:
>
> (1) AGI can be more intelligent than humans in a certain sense, but it
> should still be understandable in principle.


The AGI systems humans create will be understandable by humans in principle.
Agreed.

But let's call these AGI0

Then, AGI0 will create AGI1, which will be understandable by AGI0 in
principle...

And, AGI1 will create AGI2, which will be understandable by AGI1 in
principle...

etc.

At what point will AGI_n be no longer understandable by humans in
principle?? --
where by "understand in principle", I mean "understand in principle, given
realistic
bounds on the time and memory resources used to
carry out this understanding"


According to my belief, the way to create AGI is to have a general
theory of intelligence, which should cover the common principles underlying
all kinds of intelligent systems, including human intelligence,
computer intelligence, etc., even alien intelligence and superhuman
AGI. Therefore, this theory should also cover your AGI0 to AGI_n.

Pei


> (2) Intelligence in AGI will continue to improve, driven both by humans and
> by AGI, but it will still take time. There is no reason to believe that
> the time will be infinitely short.


Not infinitely short, unless current physics is badly wrong in certain
relevant respects.

But if AGI1 can think 1,000 times faster than a human,
maybe AGI2 will be able to think 10,000 times as fast, etc.

An infinite rate is not necessary for the result to be incomprehensibly
rapid as compared to the human brain.

> > Or are you doubting that a massively superhuman intelligence would be
> > beyond the scope of understanding of ordinary, unaugmented humans?
>
> It depends on what you mean by "understanding" --- the general
> principle or concrete behaviors.
>

My hypothesis is that for large n, AGI_n as defined above will likely obey
general
principles that humans are not able to understand assuming reasonable time
and memory constraints on their understanding process.

-- Ben G

 


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Benjamin Goertzel



According to my belief, the way to create AGI is to have a general
theory of intelligence, which should cover the common principles underlying
all kinds of intelligent systems, including human intelligence,
computer intelligence, etc., even alien intelligence and superhuman
AGI. Therefore, this theory should also cover your AGI0 to AGI_n.



Ahhh

Well, this gets at the crux of our disagreement.

I have my doubts that such a theory is possible.  I think it may be
possible to create a general theory of "roughly human level intelligence"
...
just as Hutter and colleagues seem to be hot on the trail of a general
theory of "near infinite computational power intelligence".

But I suspect that computer systems with processing power and memory
vastly greater than humans' but vastly less than is needed for algorithms
like Hutter's AIXItl to be possible will display forms of intelligence that
aren't covered by either the Hutter-type theories or the theories covering
roughly-human-level intelligence...

True, there may be commonalities between sub-AIXItl-level superhuman, human
level, and AIXItl level intelligences.  But, I suspect the differences will
be
at least as dramatic...

As examples, I think that the sorts of self, awareness and "perceived free
will" that characterize human mind may not apply to all superhuman
intelligences.

I can certainly imagine a superhuman AGI whose cognition is governed by
complex, emergent patterns that are beyond human comprehension.  I might
be able to understand in some general sense that its behaviors are guided
by emergent patterns, that it seems to be engaged in calculating
probabilities,
etc. -- but the main structures and dynamics guiding its cognition might be
new principles, which apply only to minds vastly smarter than humans and
can't be grokked by mere human brains...

-- Ben


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Pei Wang

Well, I surely don't mean an AIXI-type theory.

I believe that all kinds of intelligence can be explained as the
capability of adaptation with insufficient knowledge and resources. I
understand that you don't share this understanding of intelligence.

Pei

On 4/15/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:



>
>
> According to my belief, the way to create AGI is to have a general
> theory of intelligence, which should cover the common principles underlying
> all kinds of intelligent systems, including human intelligence,
> computer intelligence, etc., even alien intelligence and superhuman
> AGI. Therefore, this theory should also cover your AGI0 to AGI_n.
>

Ahhh

Well, this gets at the crux of our disagreement.

I have my doubts that such a theory is possible.  I think it may be
possible to create a general theory of "roughly human level intelligence"
...
just as Hutter and colleagues seem to be hot on the trail of a general
theory of "near infinite computational power intelligence".

But I suspect that computer systems with processing power and memory
vastly greater than humans' but vastly less than is needed for algorithms
like Hutter's AIXItl to be possible will display forms of intelligence that
aren't covered by either the Hutter-type theories or the theories covering
roughly-human-level intelligence...

True, there may be commonalities between sub-AIXItl-level superhuman, human
level, and AIXItl level intelligences.  But, I suspect the differences will
be
at least as dramatic...

As examples, I think that the sorts of self, awareness and "perceived free
will" that characterize human mind may not apply to all superhuman
intelligences.

I can certainly imagine a superhuman AGI whose cognition is governed by
complex, emergent patterns that are beyond human comprehension.  I might
be able to understand in some general sense that its behaviors are guided
by emergent patterns, that it seems to be engaged in calculating
probabilities,
etc. -- but the main structures and dynamics guiding its cognition might be
new principles, which apply only to minds vastly smarter than humans and
can't be grokked by mere human brains...

-- Ben



 


[agi] AGI != Singularity

2007-04-15 Thread Russell Wallace

On 4/16/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

The AGI systems humans create will be understandable by humans in principle.
Agreed.

But let's call these AGI0

Then, AGI0 will create AGI1, which will be understandable by AGI0 in
principle...

And, AGI1 will create AGI2, which will be understandable by AGI1 in
principle...

I'll toss in my two cents worth as to why I believe in AGI, but not in
AGI-driven Singularity.

My first reason is that we will never have human-level AGI.

Okay, having thrown down the flame-bait, I'll explain :) I think we will
never have human-level AGI in the same way that we will never have
bird-level flight - because there is no scalar "level".

Does an F-22 have bird-level flight? In one way the answer is yes and much
more - it flies far faster than any bird.

But flight in the real world includes refueling, maintenance and
manufacturing. And an F-22's performance in these areas is infinitely
inferior to that of a bird; it is entirely dependent on humans to
manufacture, refuel and maintain it.

At this point some readers will be thinking that these gaps might someday be
filled in given sufficiently advanced nanotechnology. And indeed there is no
known law of physics that forbids this.

But when you're working with machine phase rather than living cells, even if
you _can_ burden a combat aircraft with the cost and overhead of these
capabilities there is no practical reason to do so. The F-22 was built by
professional engineers for a practical purpose. If bird-level flight is ever
created in centuries to come, it will be done by hobbyists for the coolness
factor, and only long after it is of no practical relevance. That is what I
mean when I say we will never have bird-level flight in the practical sense:
it will never be done by anyone working in their capacity as professional
engineers, because the _shape_ of capabilities implied by machine phase is
so different from that implied by biology.

The same applies to intelligence. It is not a scalar quantity, but possesses
a complex shape. We already have computers that outperform humans in
arithmetic by a factor of a quadrillion, yet underperform in almost all
other tasks by a factor of infinity. That's a difference in shape of
capabilities that implies a completely different path. It will be no more
feasible or necessary for AGI to duplicate all the abilities of a human than
it is feasible or necessary for an F-22 to duplicate all the abilities of a
bird. (Again, I'm not saying an AGI with the shape of a human mind can't
ever be created, in a thousand or a million years or whatever from now - but
if so, it will be done for the coolness factor, not by professional
engineers who want it to solve a practical problem. It will never be cutting
edge.)

Furthermore, even if you postulate AGI0 that could create AGI1 unaided in a
vacuum, there remains the fact that AGI0 won't be in a vacuum, nor if it
were would it have any motive for creating AGI1, nor any reason to prefer
one bit stream rather than another as a design for AGI1. There is after all
no such function as:

float intelligence(program p)

There is, however, a family of functions (albeit incomputable in the general
case):

float intelligence(program p, job j)

In other words, intelligence is useful - and can be said to even exist -
only in the context of the jobs the putatively intelligent agent is doing.
And jobs are supplied by the real world - which is run by humans. Even in
the absence of technical issues about the shape of capabilities, this alone
would suffice to require humans to stay in the loop.
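
To make the distinction concrete, here is a minimal Python sketch of the
job-relative notion (the jobs and scoring below are purely hypothetical
illustrations, not anything proposed above):

    # Job-relative "intelligence": a program is only as intelligent as its score on a given job.
    # All names and jobs below are illustrative assumptions.
    def intelligence(program, job):
        return job(program)               # no job-independent intelligence(program) exists

    def arithmetic_job(program):          # toy benchmark: integer addition
        cases = [((2, 3), 5), ((10, 7), 17), ((123, 456), 579)]
        return sum(program(a, b) == out for (a, b), out in cases) / len(cases)

    def navigation_job(program):          # a job with a completely different capability shape
        return 0.0                        # an adder scores zero here

    adder = lambda a, b: a + b
    print(intelligence(adder, arithmetic_job))   # 1.0 -- superhuman at this job
    print(intelligence(adder, navigation_job))   # 0.0 -- nonexistent at this one

The two scores do not average into a meaningful scalar; that is the "shape of
capabilities" point made earlier.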

The point of all this isn't to pour cold water on people's ideas, it's to
point out that we will make more progress if we stop thinking of AGI as a
human child. It's a completely different kind of thing, and more akin to
existing software in that it must function as an extension of, rather than
replacement for, the human mind. That means we have to understand it in
order to continue improving it - black box methods have to be confined to
isolated modules. It means user interface will continue to be of central
importance, just as it is today. It means the Lamarckian evolutionary path
of AGI will have to be based, just as current software is, on increased
usefulness to humans at each step.


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Russell Wallace

On 4/16/07, Pei Wang <[EMAIL PROTECTED]> wrote:


According to my belief, the way to create AGI is to have a general
theory of intelligence, which should cover the common principles underlying
all kinds of intelligent systems, including human intelligence,
computer intelligence, etc., even alien intelligence and superhuman
AGI. Therefore, this theory should also cover your AGI0 to AGI_n.



Indeed we do have some such theories already.

Thing is, any theory which covers such different things must necessarily say
very little about any of them in particular.

To again make use of the flight analogy, is there a theory that covers both
a bird and an F-22? Well yes, aerodynamics.

However, if you look at what you actually need to know to design an F-22,
aerodynamics is only the tiniest fraction of it. You need to know (or the
team collectively needs to know - it's too much for any one person) a vast
amount about engines and fuels, metallurgy, electronics, manufacturability,
operational procedures and a hundred other things I don't even know enough
to list - all of which are peculiar to man-made aircraft and do not apply to
birds.

Nor is this state of affairs peculiar to flight - it applies to every
complex artifact. It undoubtedly also applies to AGI.


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Pei Wang

A general theory of intelligence will not give us a detailed AGI
design, but it will provide the assumptions and restrictions that such
a design should follow, no matter how the implementation details are
determined. Also, it will tell us why the traditional AI approaches
failed. For these reasons, it is not trivial or vacuous.

Pei

On 4/15/07, Russell Wallace <[EMAIL PROTECTED]> wrote:

On 4/16/07, Pei Wang <[EMAIL PROTECTED]> wrote:
> According to my belief, the way to create AGI is to have a general
> theory of intelligence, which should cover the common principles underlying
> all kinds of intelligent systems, including human intelligence,
> computer intelligence, etc., even alien intelligence and superhuman
> AGI. Therefore, this theory should also cover your AGI0 to AGI_n.

Indeed we do have some such theories already.

Thing is, any theory which covers such different things must necessarily say
very little about any of them in particular.

To again make use of the flight analogy, is there a theory that covers both
a bird and an F-22? Well yes, aerodynamics.

However, if you look at what you actually need to know to design an F-22,
aerodynamics is only the tiniest fraction of it. You need to know (or the
team collectively needs to know - it's too much for any one person) a vast
amount about engines and fuels, metallurgy, electronics, manufacturability,
operational procedures and a hundred other things I don't even know enough
to list - all of which are peculiar to man-made aircraft and do not apply to
birds.

Nor is this state of affairs peculiar to flight - it applies to every
complex artifact. It undoubtedly also applies to AGI.
 


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Russell Wallace

On 4/16/07, Pei Wang <[EMAIL PROTECTED]> wrote:


A general theory of intelligence will not give us a detailed AGI
design, but it will provide the assumptions and restrictions that such
a design should follow, no matter how the implementation details are
determined. Also, it will tell us why the traditional AI approaches
failed. For these reasons, it is not trivial or vacuous.



Absolutely - it's necessary for us to know e.g. cognitive science for the
reasons you point out. I merely observe that while necessary, it is not
close to being sufficient.


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Benjamin Goertzel

On 4/15/07, Pei Wang <[EMAIL PROTECTED]> wrote:


Well, I surely don't mean an AIXI-type theory.

I believe that all kinds of intelligence can be explained as the
capability of adaptation with insufficient knowledge and resources. I
understand that you don't share this understanding of intelligence.



I don't disagree with this statement, but I suspect that different
QUANTITATIVE levels of insufficiency may lead to QUALITATIVELY
different principles of intelligence...

For the moment, however, it will be sufficient for us humans to understand
the qualitative principles leading to human-level AGI.   This is what I have
tried to do in my own theoretical work.  And, I think your work
has been very valuable in advancing our understanding of these principles...

-- Ben G


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Eugen Leitl
On Sun, Apr 15, 2007 at 06:41:39PM -0400, Benjamin Goertzel wrote:

>A key point is that, unlike a human, a well-architected AGI should be
>able to easily increase its intelligence via adding memory, adding
>faster processors, adding more processors, and so forth.  As well as

I don't see why a human wouldn't profit from enhancements; it's just that most
of them would require germline manipulation, or technology quite
beyond what is available today.

>by analyzing its own processes and their flaws with far more accuracy
>than any near-term brain scan...

Of course a point could be made that reconstructing function from
structure (which in principle can be obtained from vitrified
brain sections at arbitrary resolution) is less far off than AI bootstrap.
 
-- 
Eugen* Leitl  http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
