Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Nikolay Ognyanov

IMHO:

The stated expected benefit of AGI development is overly ambitious on the
science & technology side and not ambitious enough on the social & economic
side. For AGI to become the Next Big Thing it does not really have to come
up with the best medical researcher. Nor would a great medical researcher
have as much impact on the way current civilization works as the
replacement of human workers in the services sector. The impact of previous
technology revolutions can be described in a very fundamental way as
freeing (liberating? discharging?) people from engagement in hunting and
the like, then agriculture and the like, then industry and the like. Well,
industry is still a work in progress, and AGI could help there too, but the
direction is clear. Services are the next area of human social and economic
activity to benefit, and to suffer, from technology at the same scale as
the others did earlier. This is the most obvious general social role and
"selling point" of AGI, at least until/unless it becomes a true deus ex
machina ;). To liberate (but also: discharge, which is going to be a huge
adoption/penetration problem) humans from engagement in providing
economically significant services to other humans. Which such roles AGI
addresses, and how well it fulfills them, should be the key metric if it is
to be "sold" outside a community which is motivated by the intellectual
challenge alone.

So IMHO if you want to sell AGI to investors you had better start with
replacing travel agents, brokers, receptionists, personal assistants, etc.,
rather than researchers.

Regards
Nikolay

Richard Loosemore wrote:


I have stuck my neck out and written an Open Letter to AGI (Artificial 
General Intelligence) Investors on my website at http://susaro.com.


All part of a campaign to get this field jumpstarted.

Next week I am going to put up a road map for my own development project.




Richard Loosemore







--

*Nikolay Ognyanov, PhD*
Chief Technology Officer
*TravelStoreMaker.com Inc.* 
Phone: +359 2 933 3832
Fax: +359 2 983 6475



Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Richard Loosemore

Nikolay Ognyanov wrote:

The stated expected benefit of AGI development is overly ambitious on the
science & technology side and not ambitious enough on the social & economic
side. [...]

So IMHO if you want to sell AGI to investors you had better start with
replacing travel agents, brokers, receptionists, personal assistants, etc.,
rather than researchers.


I'm sorry, but this makes no sense at all:  this is a complete negation 
of what "AGI" means.


If you could build a (completely safe, I am assuming) system that could 
think in *every* way as powerfully as a human being, what would you 
teach it to become:


1) A travel agent.

2) A medical researcher who could learn to be the world's leading 
specialist in a particular field, and then be duplicated so that you 
instantly had 1,000 world-class specialists in that field.


3) An expert in AGI system design, who could then design a faster 
generation of AGI systems, so that these second-generation systems, 
working as researchers in any scientific field, could generate new 
knowledge faster than all the human scientists and engineers on the planet.


?

To say to an investor that AGI would be useful because we could use them 
to build travel agents and receptionists is to utter something 
completely incoherent.


This is the "Everything Just The Same, But With Robots" fallacy.



Richard Loosemore




Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Bob Mottram
On 17/04/2008, Nikolay Ognyanov <[EMAIL PROTECTED]>
wrote:
>
>  The impact of previous technology revolutions can be described in a very
> fundamental way as freeing (liberating? discharging?) people from
> engagement in hunting and the like, then agriculture and the like, then
> industry and the like. Well, industry is still a work in progress, and
> AGI could help there too, but the direction is clear. Services are the
> next area of human social and economic activity to benefit, and to
> suffer, from technology at the same scale as the others did earlier.
>
>


Yes, this is exactly right.  In fact, digital computers were originally
invented precisely to automate "white collar" office work.  At present,
although some aspects of industry are highly automated, we're still in the
process of moving towards full industrial automation.  The next frontier
after that, as you say, is automating office and management work, which is
most typically found in the service sectors of the economy.  To some small
degree this "white collar" work is already feeling the effects of
automation, via desktop computers, various productivity tools and of course
internet search engines, but there's much more automation to come in this
area.



Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread J Storrs Hall, PhD
On Thursday 17 April 2008 04:47:41 am, Richard Loosemore wrote:
> If you could build a (completely safe, I am assuming) system that could 
> think in *every* way as powerfully as a human being, what would you 
> teach it to become:
> 
> 1) A travel Agent.
> 
> 2) A medical researcher who could learn to be the world's leading 
> specialist in a particular field,...

Travel agent. Better yet, housemaid. I can teach it to become these things 
because I know how to do them. Early AGIs will be more likely to be 
successful at these things because they're easier to learn. 

This is sort of like Orville Wright asking, "If I build a flying machine, 
what's the first use I'll put it to: 
1) Carrying mail.
2) A manned moon landing."



Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Nikolay Ognyanov
Well, first of all: which human being exactly? Einstein or my poor self?
And then, why human at all? Why not a deus ex machina (super-human
intelligence) from day 1?

IMHO because AGI, like any other technology, will not happen in one big
singular blast.

In the specific context of investor awareness and attitude, it may be a
good idea to earn some credit by replacing the travel agent before you ask
them to credit you with replacing Einstein. Starting off with Einstein may
IMHO damage AGI in the same way that similar claims damaged AI. Of course
it is your right to believe that replacing Einstein will require the same
level of effort (and investment) as replacing the travel agent. I wish
(even though I do not believe) that you are right. It is just that what
experience I have with investors makes me doubt they will buy into the
idea of funding Einstein before you have proven you can deal with the
travel agent.

Regards
Nikolay

Richard Loosemore wrote:

Nikolay Ognyanov wrote:

IMHO: [...] if you want to sell AGI to investors you had better start with
replacing travel agents, brokers, receptionists, personal assistants,
etc., rather than researchers.

I'm sorry, but this makes no sense at all: this is a complete negation of
what "AGI" means. [...]

To say to an investor that AGI would be useful because we could use them
to build travel agents and receptionists is to utter something completely
incoherent.

This is the "Everything Just The Same, But With Robots" fallacy.

Richard Loosemore





--

*Nikolay Ognyanov, PhD*
Chief Technology Officer
*TravelStoreMaker.com Inc.* 
Phone: +359 2 933 3832
Fax: +359 2 983 6475



Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Richard Loosemore

J Storrs Hall, PhD wrote:

On Thursday 17 April 2008 04:47:41 am, Richard Loosemore wrote:
If you could build a (completely safe, I am assuming) system that could 
think in *every* way as powerfully as a human being, what would you 
teach it to become:


1) A travel Agent.

2) A medical researcher who could learn to be the world's leading 
specialist in a particular field,...


Travel agent. Better yet, housemaid. I can teach it to become these things 
because I know how to do them. Early AGIs will be more likely to be 
successful at these things because they're easier to learn. 


Yes, that shows deep analysis and insight into the problem.

I can just see the first AGI corporation now, having spent a hundred 
million dollars in development money, deciding to make a profit by 
selling a housemaid robot that will replace the cheap, almost-slave 
labor coming across the border from Mexico.


Of course, it would not occur to that company to develop their systems 
just a little more and get the AGI to do high-value intellectual work.





Richard Loosemore



Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Mark Waser
So IMHO if you want to sell AGI to investors you better start with 
replacing
travel agents, brokers, receptionists, personal assistants etc. etc. 
rather than

researchers.


I'm sorry, but this makes no sense at all:  this is a complete negation of 
what "AGI" means.


Actually . . . . sorry, Richard . . . . but why does it matter what AGI 
means?  You are trying to sell a product for money.  Why do you insist on 
attempting to sell someone something that they don't want just because *you* 
believe that it's better than what they do want?  Why not just sell them 
what they want (since they get it for free with what you want) and be happy 
that they're willing to fund you?


If you could build a (completely safe, I am assuming) system that could 
think in *every* way as powerfully as a human being, what would you teach 
it to become:

1) A travel agent.
2) A medical researcher.
3) An expert in AGI system design.


4) All of the above.  But I'd just market it as a travel agent to the
people who want a travel agent, and as a medical researcher to the drug
companies (the AGI expert would have it figured out but would have no
spare cash :-).


To say to an investor that AGI would be useful because we could use them 
to build travel agents and receptionists is to utter something completely 
incoherent.


Not at all.  It is catering to their desires and refraining from forcibly 
educating them.  Where is the harm?  It's certainly better than getting the 
door slammed in your face.



This is the "Everything Just The Same, But With Robots" fallacy.


No, it's not, because you're not saying that everything is going to be the
same.  All you're saying is that travel agents *can* be replaced, without
insisting on pointing out that *EVERYTHING* is likely to be replaced.






Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread J Storrs Hall, PhD
Well, I haven't seen any intelligent responses to this so I'll answer it 
myself:

On Thursday 17 April 2008 06:29:20 am, J Storrs Hall, PhD wrote:
> On Thursday 17 April 2008 04:47:41 am, Richard Loosemore wrote:
> > If you could build a (completely safe, I am assuming) system that could 
> > think in *every* way as powerfully as a human being, what would you 
> > teach it to become:
> > 
> > 1) A travel Agent.
> > 
> > 2) A medical researcher who could learn to be the world's leading 
> > specialist in a particular field,...
> 
> Travel agent. Better yet, housemaid. I can teach it to become these things 
> because I know how to do them. Early AGIs will be more likely to be 
> successful at these things because they're easier to learn. 
> 
> This is sort of like Orville Wright asking, "If I build a flying machine, 
> what's the first use I'll put it to: 
> 1) Carrying mail.
> 2) A manned moon landing."

Q: You've got to be kidding. There's a huge difference between a mail-carrying 
fabric-covered open-cockpit biplane and the Apollo spacecraft. It's not 
comparable at all.

A: It's only about 50 years' development. More time elapsed between railroads 
and biplanes. 

Q: Do you think it'll take 50 years to get from travel agents to medical 
researchers?

A: No, the pace of development has sped up, and will speed up even more
with AGI. But as in the mail/moon example, the big jump will be getting off
the ground in the first place.

Q: So why not just go for the researcher? 

A: Same reason Orville didn't go for the moon rocket. We build Rosie the 
maidbot first because:
1) we know very well what it's actually supposed to do, so we know if it's 
learning it right
2) we even know a bit about how its internal processing -- vision, motion 
control, recognition, navigation, etc -- works or could work, so we'll have 
some chance of writing programs that can learn that kind of thing.
3) It's easier to learn to be a housemaid. There are lots of good examples. 
The essential elements of the task are observable or low-level abstractions. 
While the robot is learning to wash windows, we the AGI researchers are going 
to learn how to write better learning algorithms by watching how it learns.
4) When, not if, it screws up, a natural part of the learning process, 
there'll be broken dishes and not a thalidomide disaster.

The other issue is that the hard part of this is the learning. Say it takes
a teraop to run a maidbot well, but a petaop to learn to be a maidbot. We
run the learning on our one big machine and sell the maidbots cheap with
0.1% of the CPU. But being a researcher is all learning -- so each
researcher would need the whole shebang for each copy. That's a decade of
Moore's Law ... and at least that much AGI research.
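
(A rough sketch of that last bit of arithmetic, in Python. The
teraop/petaop figures are just the illustrative numbers above, and the
doubling periods are assumptions of mine, since quoted rates for Moore's
Law vary:)

import math

run_ops = 1e12    # assumed: ~1 teraop to run a trained maidbot
learn_ops = 1e15  # assumed: ~1 petaop to learn the skill in the first place

doublings = math.log2(learn_ops / run_ops)  # ~10 doublings for a 1000x gap
for period in (1.0, 1.5, 2.0):              # assumed doubling periods, years
    print("doubling every %.1f yr -> %.0f years to close the gap"
          % (period, doublings * period))

With an 18-24 month doubling period the 1000x gap takes 15-20 years, so "a
decade of Moore's Law" is the optimistic end of the range.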

Josh



Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Ben Goertzel
>  We may well see a variety of proto-AGI applications in different
>  domains, sorta midway between narrow-AI and human-level AGI, including
>  stuff like
>
>  -- maidbots
>
>  -- AI financial traders that don't just execute machine learning
>  algorithms, but grok context, adapt to regime changes, etc.
>
>  -- NL question answering systems that grok context and piece together
>  info from different sources
>
>  -- artificial scientists capable of formulating nonobvious hypotheses
>  and validating them via data analysis, including doing automated data
>  preprocessing, etc.

And not to forget, of course, smart virtual pets and avatars in games
and virtual worlds ;-))



Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Ben Goertzel
Hmmm...

It's pretty hard to project the timing of different early-stage AGI
applications, as this depends on the particular route taken to AGI,
and there are many possible routes...

We may well see a variety of proto-AGI applications in different
domains, sorta midway between narrow-AI and human-level AGI, including
stuff like

-- maidbots

-- AI financial traders that don't just execute machine learning
algorithms, but grok context, adapt to regime changes, etc.

-- NL question answering systems that grok context and piece together
info from different sources

-- artificial scientists capable of formulating nonobvious hypotheses
and validating them via data analysis, including doing automated data
preprocessing, etc.

Then, after this phase, we may finally see the emergence of unified
AGI systems with true human-level AGI.

**Or**, it could happen that one of the above apps (or something not
on my list) advances way faster than the others, for fundamental AI
reasons or simply for practical economic reasons ... or due to luck...

**Or**, it could well happen that someone gets all the way to
human-level AGI before any of the above proto-AGI applications really
becomes feasible and economically viable.  In that case the answer
will indeed be: Duh, the AGI can do anything...

Which of these alternatives will happen is not obvious to me.  It's
not even obvious to me under the hypothetical assumption that the
Novamente/OpenCog approach is gonna be the one that gets us to
human-level AGI ... let alone if I drop that assumption and think
about the problem from the perspective of the broad scope of possible
AGI architectures.

So I am a bit perplexed that some folks on this list are so surpassingly
**confident** as to which route is going to unfold...  I don't want to get
all Eliezer on you, but really, some reflection on the human brain's
tendency toward overconfidence might be in order ;-O

-- Ben G



On Thu, Apr 17, 2008 at 10:30 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> Well, I haven't seen any intelligent responses to this so I'll answer it
> myself: [...]

Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Benjamin Johnston


I have stuck my neck out and written an Open Letter to AGI (Artificial 
General Intelligence) Investors on my website at http://susaro.com.


All part of a campaign to get this field jumpstarted.

Next week I am going to put up a road map for my own development project.



Hi Richard,

If I were a potential investor, I don't think I'd find your letter 
convincing.


The term "AI" was coined some 50 years ago: before I was born, and
therefore long before I entered the field. Naturally, I can't speak from
personal experience on the matter, but when I read the early literature
on AI, or read the field's pioneers reminiscing on the early days, I get
the distinct impression that this was an incredibly passionate and
excited group. I would feel comfortable calling them a "gang of
hot-headed revolutionaries" - even today, 50 years after inventing the
term "AI" and at the age of 80, McCarthy writes about AI and the
possibility of strong AI with passion and excitement. Yet, in spite of all
the hype, excitement and investment that was apparently around during that
time (or, more likely, as a result of the hype and excitement), the field
crashed in the "AI winter" of the 80s without finding that "dramatic
breakthrough".


There's the Japanese Fifth Generation Computer Systems project, which I
understand to have been a massive billion-dollar investment during the 80s
in parallel machines and artificial intelligence; an "investment" that is
today largely considered a huge failure.


And of course there's Cyc, formed with the inspiring aim of capturing all
commonsense knowledge, but still in development some 20 years later.


And in addition to these, there are the many, many early research papers
on AI problem-solving systems that showed early promise and led their
authors to make wild predictions and claims in their "Future Work"
sections... predictions that time has reliably proven false.


So, why would I want to invest now? When I track down the biographies of
several of the regulars on this list, I find that they entered the field
during or after the AI Winter and never experienced the early optimism as
"insiders". How can you convince an investor that the passion today isn't
just the unfounded optimism of researchers who don't remember the past?
How can you convince an investor that AGI isn't also going to devolve
again into an emphasis on publications rather than quality (as you claim
AI has devolved), or into a new kind of "weak AGI" with no "dramatic
breakthrough"?


I think a better argument would be to point to a fundamental
technological or methodological change that makes AGI finally credible.
I'm not convinced that being "lean, mean, hungry and hellbent on getting
results" is enough. If I believe in AGI, maybe my best bet is to invest
my money elsewhere and wait until the fundamental attitudes have changed,
so that each dollar will have a bigger impact rather than being squandered
on a bad dead-end idea. Alternately, my best bet may be to invest in weak
AI, because it will give me a short-term profit (that can be reinvested)
AND has a plausible case for eventually developing into strong AI. If you
can offer no good reason to invest in AGI today (given all its past
failures), aside from the renewed passion of its researchers, then a sane
reader would have to conclude that AGI is probably a bad investment.



Personally, I'm not sure what I feel about AGI (though I wouldn't be here
if I didn't think it was valuable and promising). However, in this email
I'm trying to play devil's advocate in response to your open letter to
investors.


-Ben



Re: [agi] An Open Letter to AGI Investors

2008-04-18 Thread Bob Mottram
If you're a potential AGI investor you'll probably want to hold out
until some kind of "proof of concept" system is produced.  That is,
some system which shows interesting behavior above and beyond previous
narrow AI systems, and which looks like it will scale (the scaling
issue is perhaps the most important aspect).  In my opinion such a
system does not yet exist, but it could appear at any time - maybe as
a Second Life virtual agent, or some other manifestation.



On 18/04/2008, Benjamin Johnston <[EMAIL PROTECTED]> wrote:
>  If I were a potential investor, I don't think I'd find your letter
> convincing. [...]



Re: [agi] An Open Letter to AGI Investors

2008-04-18 Thread Richard Loosemore

Benjamin Johnston wrote:

If I were a potential investor, I don't think I'd find your letter
convincing. [...] I think a better argument would be to point to a
fundamental technological or methodological change that makes AGI finally
credible. [...] However, in this email I'm trying to play devil's advocate
in response to your open letter to investors.

Ben,

Thanks for the thoughtful comments.

I have three responses.

First, I think there is a world of difference between passionate 
researchers at the beginning of the field, in 1956, and passionate 
researchers in 2008 who have a half-century of other people's mistakes 
to learn from.  The secret of success is to try and fail, then to try 
again with a fresh outlook.  That exactly fits the new AGI crowd.


Second, when you say that "a better argument would be to point to a 
fundamental technological or methodological change that makes AGI 
finally credible" I must say that I could not agree more.  That is 
*exactly* what I have tried to do in my project, because I have pointed 
out a problem with the way that old-style AI has been carried out, and 
that problem is capable of neatly explaining why the early optimism 
produced nothing.  I have also suggested a solution to that problem, 
pointed out that the solution has never been tried before (so it has the 
virtue of not being a failure yet!), and also pointed out that the 
proposed solution resembles some previous approaches that did have 
sporadic, spectacular success (the early work in connectionism).


However, in my Open Letter post, I did not want to emphasize my own work 
(I will do that elsewhere on the website), but instead point out some 
general facts about all AGI projects.  Perhaps I should have also said 
"And I have an approach that is dramatically new", but I felt that that 
would have weakened the points that I was trying to make.

Re: [agi] An Open Letter to AGI Investors

2008-04-18 Thread Richard Loosemore

Mark Waser wrote:

Richard Loosemore wrote:
To say to an investor that AGI would be useful because we could use 
them to build travel agents and receptionists is to utter something 
completely incoherent.


Not at all.  It is catering to their desires and refraining from 
forcibly educating them.  Where is the harm?  It's certainly better than 
getting the door slammed in your face.


I think this is a mistake.  Selling investors the idea of replacement
travel agents and housemaids is something that they know, in their gut,
is a stupid idea IN THIS CONTEXT.  The context is that you are saying
that you will build something with the completely general powers of
thought that a person has.  If you can build such a thing, then claiming
that it will be used for a trivial task after (e.g.) $100 million of
development money would make no business sense whatsoever.

A big part of being coherent in front of an investor is being able
to think your idea through to its logical conclusion.  Trying to
soft-pedal the idea and pretend that it will be less useful than it
really is, is considered just as bad as overselling it - this is
"thinking too small".  Either way, you look as if you haven't really
thought it through.

Here is what I would call thinking it through.

The definition of AGI is that it has all the powers of thought that we
have, rather than being able to answer questions about a blocks world
perfectly while being completely incapable of talking about the weather.
We all agree on this, no?

With that understood, there are some obvious consequences to building an
AGI.  One is that we will be able to duplicate a machine that has
acquired expert-level knowledge in its field.  This is a stupendous
advance on the situation today, obviously, because it means that if an
AGI can be taught to reach expert level in some field, it can be
duplicated manyfold and suddenly we have a vast army of people pushing
back the frontiers together.

Now the question is whether it really will be so much harder to produce a
medical expert than a housemaid.  It is not at all obvious that the
housemaid or travel agent will be a step on the road.  If we can
understand how to make something think, why would our efforts happen to
land on the intelligence-point that equates to a travel agent?  Just
because this is the kind of work that a human is forced to do when they
cannot get anything better does not mean that it is a natural level
of intellectual capacity.  The first AGI could just as easily be a
blithering idiot, an idiot-savant, a rocket scientist or an 
unsurpassable genius.  To ask it to be a travel agent is to assume that 
what you build will have a very particular level of intelligence and be 
incapable of improvement, which raises the question "Why would it 
only reach that level?".


I think, in truth, that this talk of using the first AGIs as travel 
agents and housemaids is based on a weak analysis of what it would mean 
to produce an early prototype or a step-on-the-road to full AGI. 
Because we have in our minds this picture of human beings and the way 
they develop, some people are automatically assuming that an early AGI 
would be equivalent to a housemaid.  What I am saying here is that this 
is by no means obvious, at the very least.


I think that if we can build such thinking machines, we would
surely by that stage have come to understand the dynamics of
intellectual development in ways that we have no hope of doing today:
we will be able to look inside the developing mind and see what factors
cause some thinkers to have trouble getting their thoughts together
while others zoom on to great heights of achievement.  Given that we
will be able to do that, we will have a much greater chance of being able
to produce something that can continue to develop without hitting a
roadblock of some kind.  In my opinion, what makes a travel agent a
travel agent is not a lack of horsepower, but a complicated interplay
of drives and social interactions (as well as some contribution from
lack of horsepower).  A travel agent, in other words, is more like a
genius who got stopped along the way than a person whose brain simply
did not have the right design.



Richard Loosemore




Re: [agi] An Open Letter to AGI Investors

2008-04-20 Thread Benjamin Johnston


First, I think there is a world of difference between passionate 
researchers at the beginning of the field, in 1956, and passionate 
researchers in 2008 who have a half-century of other people's mistakes 
to learn from.  The secret of success is to try and fail, then to try 
again with a fresh outlook.  That exactly fits the new AGI crowd.



Are you suggesting that the early researchers (not just in the 50s, but
also the 60s, 70s and 80s) weren't learning from each other's mistakes?
If so, I think you need to address why this generation is able to learn
from past mistakes when prior generations couldn't learn from each other.


Second, when you say that "a better argument would be to point to a
fundamental technological or methodological change that makes AGI finally
credible" I must say that I could not agree more. [...] Perhaps I should
have also said "And I have an approach that is dramatically new", but I
felt that that would have weakened the points that I was trying to make.



I don't think you need to say that.

Hypothetically, if you were to believe that your own project is the only 
one with a chance of success, then encouraging investment in AGI would, 
given such beliefs, surely only discredit the field when money is later 
found to have been wasted with no results.



In contrast, I think that you believe there are many people with good 
ideas and that there is "change in the air": researchers really are 
starting to tackle AGI in interesting and promising new ways; and that 
you are just one of many groups with fresh and plausible ideas for 
building an AGI.


So, why not try to pinpoint and express such changes (and their cause) 
in your letter?


Finally, I do have to point out that you made no comment on the second 
part of the post, where I explained that *any* investor who put money 
into any project in the AGI arena would be injecting a shot of 
adrenalin into the whole field, causing it to attract attention and so 
stimulate further investment.  That is a very important point:  in any 
other field of investment the last thing you want to do is to provoke 
other investors into funding rivals to your own investment, but in AGI 
nothing could be better.  If that shot of adrenalin into the AGI field 
caused one of the projects to succeed, the result would be a massive 
technology surge that would benefit everyone, not just the individual 
investor. There is no other investment opportunity where so much 
"trickle-down" could be generated by the success of one company.


That argument lessens the risk of the investor's money being wasted.



I didn't comment on the second part of the email because I don't want to 
get involved in arguments about singularity.


Since you bring up the second part, I will add some comments about your 
claims that an investor wouldn't/shouldn't care if they back a "bunch of 
idiots... [that] burn all the cash and produce nothing", because it will 
create buzz and lead to many projects. You say that if "just one of 
them" succeeds then all AGI investors will personally benefit (even if 
they weren't investing in the particular company that succeeds).


Your assumption here is that the chance of "just one of them" succeeding 
is quite good. This brings us right back to my comments on the first 
half of your letter: you haven't really offered an investor a convincing 
argument for why the chances are good that *somebody* will succeed this 
time, when talented, passionate people have been trying for years and 
failing.


You're also bringing up themes related to the singularity, and I don't
think this is necessary. I think (and many others here have expressed a
similar opinion) that AGI can have many benefits and applications even if
it takes a very long time to get to super-intelligence. If you can
justify investment in AGI even with a pessimistic outlook (but the
possibility of radical change), then you will have a much better case.


And finally, I really don't think it is good advice to say "oh, just 
throw money about, and create buzz, without a care for who receives it". 
If AGI researchers were to receive huge amounts of money (and buzz) but 
end up failing again, the reputation of the field and the chan

Re: [agi] An Open Letter to AGI Investors

2008-04-21 Thread Richard Loosemore

Benjamin Johnston wrote:


First, I think there is a world of difference between passionate 
researchers at the beginning of the field, in 1956, and passionate 
researchers in 2008 who have a half-century of other people's mistakes 
to learn from.  The secret of success is to try and fail, then to try 
again with a fresh outlook.  That exactly fits the new AGI crowd.



Are you suggesting that the early researchers (not just the 50s, but 
also the 60s, 70s and 80s) weren't learning from each other's mistakes? 
If so, I think you need to address why this generation is able to learn 
from past mistakes, when prior generations couldn't learn from each other?


When something is wrong in a research field, and the thing that is wrong 
is fairly small, all you need to do is propose a fix and people will 
realize the value of the fix and adopt it.


But when something deeper is wrong, a person who spots the issue cannot 
simply propose a fix, because the fix will require the established 
people to downgrade their expertise and go back to school, to some extent.


Now, in the last five decades I think that people have looked for small 
fixes (being reluctant to think that there could be anything drastically 
wrong with what they were doing), and as a result there has been a 
tendency *not* to learn from mistakes.  To use an extreme metaphor, they 
have chosen to see the tilt of the upper deck of the Titanic as a 
reason to rearrange the deck chairs.


I believe that the AGI community has become impatient with the deckchair 
reconfiguration and is looking for more substantial things to fix.  That 
is what I am trying to convey when I say that they will be able to learn 
more from past mistakes.



Second, when you say that "a better argument would be to point to a
fundamental technological or methodological change that makes AGI finally
credible" I must say that I could not agree more. [...] Perhaps I should
have also said "And I have an approach that is dramatically new", but I
felt that that would have weakened the points that I was trying to make.



I don't think you need to say that. [...]

In contrast, I think that you believe there are many people with good
ideas and that there is "change in the air": researchers really are
starting to tackle AGI in interesting and promising new ways. [...]

So, why not try to pinpoint and express such changes (and their cause) in
your letter?


Oh, I don't believe everyone is on the right track, not by any means: I 
really do believe that the problem I have described is real, and that 
everyone needs to take it seriously.


The reason I would encourage investors to embrace all the different 
projects is that I will eventually get through to other people and make 
them understand the situation, and I think that at that stage they will 
all adopt the methodology I have proposed and start making much better 
progress on their own projects.


Having said that, I will be taking steps to explain why an investor 
should back my project in particular. It is just that that letter was 
aiming at a more general point, which is the inability of investors to 
tell which approach is better, and whether any of the approaches is 
better than what has come before.




Finally, I do have to point out that you made no comment on the second
part of the post, where I explained that *any* investor who put money into
any project in the AGI arena would be injecting a shot of adrenalin into
the whole field [...]

Re: [agi] An Open Letter to AGI Investors

2008-04-21 Thread Matt Mahoney
The value of AGI is the human labor it replaces, which is worth US $2 to $5
quadrillion over the next 30 years.  A major conceptual difficulty seems to
be the belief that we can solve the problem for just $1 million, or $1
billion, or $1 trillion.  Sorry, no.  That doesn't mean it won't be solved.
It just means you aren't going to be the one to solve it.
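
(A back-of-envelope check on that figure, in Python. All of the inputs
below are assumptions of mine, not Matt's: roughly $60 trillion of world
GDP in 2008, roughly 60% of it paid to human labor, compounded over 30
years:)

gdp = 60e12         # assumed 2008 world GDP, USD/year
labor_share = 0.60  # assumed fraction of GDP paid as labor compensation

for growth in (0.03, 0.05, 0.07):  # assumed annual growth rates
    total = sum(gdp * labor_share * (1 + growth) ** t for t in range(30))
    print("growth %.0f%%: ~$%.1f quadrillion of labor"
          % (growth * 100, total / 1e15))

At 3% growth this gives roughly $1.7 quadrillion; at 5-7% it gives roughly
$2.4-3.4 quadrillion, so the quoted range implies fairly strong growth in
the value of the labor being replaced.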

A common argument is that if we can build one human brain and educate it, then
we can make billions of copies very cheaply.  Sorry, it doesn't work that way.
 People form organizations to solve problems that they cannot solve
individually.  Within an organization, people have specialized tasks because
it is more efficient than if everyone had the same knowledge.  This requires
training and on-the-job learning customized to each member.  General
intelligences are going to have to compete with organizations of specialized
systems, each of which is optimized for a narrow task.  Moore's Law doesn't
apply to software and training.

The good news is that AGI will arrive even if you don't do anything.  The
value of AGI is just too high to stop it.  The internet is getting smarter
under your nose, whether you notice it or not.  People today just expect
Google to answer natural language questions, or to recognize an address and
show you a satellite map.  People forget that 15 years ago they had to use
Archie, and the only way to find something was to know the name of the file.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] An Open Letter to AGI Investors

2008-04-21 Thread Stephen Reed
Matt said:

General intelligences are going to have to compete with organizations of 
specialized systems, each of which is optimized for a narrow task. 


Interesting observation.  I envision Texai as a multitude of specialized
agents arranged in a hierarchical control system and acting in concert.
One good way to provide mentors for such agents is to apply Texai to human
organizations, such that each human member has one or more Texai agents as
proxies for the various roles the human fills in the organization.
Therefore an AGI built according to my design will not compete with
organizations of specialized systems; because it will be such an
(artificial) organization itself, and presumably a beneficial one, it
follows that it will be embraced by, and will extend, human organizations.
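
(A minimal sketch, in Python, of the shape of that architecture as I read
it. The class, role and mentor names are my own illustration, not actual
Texai code:)

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Agent:
    role: str                     # narrow organizational role this agent fills
    mentor: Optional[str] = None  # human member mentoring / proxied by the agent
    subordinates: List["Agent"] = field(default_factory=list)

    def delegate(self, task: str) -> str:
        # Route a task down the hierarchy to the most specialized agent.
        for sub in self.subordinates:
            if sub.role in task:
                return sub.delegate(task)
        return "%s (mentor: %s) handles: %s" % (self.role, self.mentor, task)

# A two-level hierarchy mirroring a small human organization:
org = Agent("director", mentor="Alice", subordinates=[
    Agent("scheduling", mentor="Bob"),
    Agent("bookkeeping", mentor="Carol"),
])
print(org.delegate("scheduling next week's meetings"))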

 Moore's Law doesn't apply to software and training.


This is an observation bias, because we have not yet had the opportunity
to witness an AGI compose software, or witness an AGI being taught.  I
plan for Texai to compose software once it is taught that skill.
Generally, I think one aspect of intelligence is the ability to comprehend
and to employ new knowledge, i.e. smarter agents are easier to train.  An
evolving AGI may be exponentially easier to train (I hope).
 
-Steve


Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860







  

