Re: [agi] Funding AGI research

2007-11-24 Thread Linas Vepstas
On 18/11/2007, Mike Tintner <[EMAIL PROTECTED]> wrote:

>
> I might be getting confused -  or rather, I am quite consciously bearing
> that in mind. Let me just say then: I have not heard a *creative* new idea
> here that directly addresses and shows the power to solve even in part the
> problem of creating general intelligence.


I do not get the impression that most promising ideas have even been
explored, much less worked over to the point of abandonment. I don't see
people saying "oh yeah, well we already tried that, and it didn't work".
Instead, I see people saying "gee, I have a good idea, I wish I could
explore it."

So, you can say "I don't believe it", but you can't say "it won't work"
until multiple research groups have actually tried it and shown that it's
a dead end.

--linas


phasis
> on the "creative" part. It's not enough to be new and different, or to
> have an incredibly detailed plan; you have to have ideas that directly
> address and start to solve the problem, and are radical. I have heard a
> great deal, though, from various sources about how it's *not* necessary
> to be that creative or revolutionary - about how just adapting existing
> techniques will lead to the promised land - which, frankly, is a joke.
>
> The only discussion here that I can remember even starting to suggest a
> creative idea directly addressing the problem was with Ben - he claims
> that his pet is capable of general analogy - certainly one basis, if not
> the basis, of general intelligence - that, having learned to fetch a
> ball, his pet spontaneously learned to play hide-and-seek. Great, I said,
> if you can demonstrate that, you've got a major creative breakthrough -
> you can and should go public right now. You can bet Hawkins would. No
> reply. No exposition of his idea for producing such analogies. No
> comments or interest from anyone else.
>
> There are a lot of discussions here about *tangential* matters - but
> when it comes to the central problem(s) - the hard, creative problem -
> how does your agent move into *new* domains? - discussion evaporates.
>
> And I was glad to see Bob expressing something I have often thought -
> how often people in this field *gesture* at ideas which are too awesome
> to be declared publicly. Now that might be partly justified in other
> creative fields. In many fields of invention, a creative idea about,
> say, using some new material or preparing it in a new way might, if
> expressed, be immediately stolen. But not here. Here any creative idea
> will be totally dependent on a massive amount of implementation. Hawkins
> had a fairly big creative idea with his HTM - even if it's a flawed
> idea. But no one can walk away and immediately implement such an idea.
>
> So actually, in this field, it's in your and everyone's interest to
> declare your main ideas publicly and get as much feedback as possible -
> and incentive and opportunity to refine those ideas. (By all means, BTW,
> point to a creative idea of yours that directly addresses the problem of
> creating general intelligence - or to anyone else's.)
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=68365365-babd74

Re: [agi] Funding AGI research

2007-11-24 Thread Mike Tintner
Linas,

I'm not quite sure what you mean. I'm not asking for much more than brief
exposition of ideas in this forum that just begin to show some promise. I'm
not demanding or expecting something fully worked through. The fact remains
that I don't think I've heard any, in any form, that fit the criteria below
- and you don't actually mention any.

Linas:
MT:I might be getting confused -  or rather, I am quite consciously bearing
that in mind. Let me just say then: I have not heard a *creative* new idea
here that directly addresses and shows the power to solve even in part the 
problem of creating general intelligence. 
  Linas: I do not get the impression that most promising ideas have even
  been explored, much less worked over to the point of abandonment. I don't
  see people saying "oh yeah, well we already tried that, and it didn't
  work". Instead, I see people saying "gee, I have a good idea, I wish I
  could explore it."

  So, you can say "I don't believe it", but you can't say "it won't work"
  until multiple research groups have actually tried it and shown that it's
  a dead end.

  --linas





Re: Re[6]: [agi] Funding AGI research

2007-11-24 Thread Linas Vepstas
On 20/11/2007, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
>
>
> How much funding is "massive" varies from domain to domain.  E.g. it's
> hard to
> do anything in nanotech without really expensive machinery.  For AGI, $10M
> is a lot of money, because the main cost is staff salaries, plus commodity
>
> hardware.


Clearly, you aren't thinking of buying any supercomputers :-) OK, I'm
joking. The problem with AGI algorithms today is that they haven't even
gotten to the point of feeling a need for supercomputers. I find this odd,
because various narrow-AI problems, such as today's data mining and
yesterday's chess playing, clearly demonstrate a need for, and consume,
large supercomputer resources. I find it telling that no one is saying
"I've got the code, I just need to scale it up 1000-fold to make it
impressive ..."

> For nanotech, $10M isn't all that much, since specialized hardware is
> needed to do many kinds of serious work.


There's also a psychological component. There's something deeply impressive
about a lab with lots of wires and whirring devices ... that gut-feel "wow"
factor just sort of evaporates when you walk by theoreticians' offices,
even those of Nobel prize winners. There is a definite, subliminal
"wow factor" bias in science funding that favors expensive whizzy machines
over theory.

> And, what counts as a prototype often depends on one's theoretical
> framework. Do you consider there to have been a prototype for the first
> atom bomb? I don't think there was, but there were preliminary
> experiments that, given the context of the framework of theoretical
> physics, made the workability of the atom bomb seem plausible.


In a certain sense, there were many prototypes: arguably hundreds, as they
knew they had to get the core compressed with a certain shaped charge, and
had a lot of trouble getting it right. In the RaLa tests, they blew up
radioactive lanthanum so that they could x-ray the insides of an exploding
bomb, to make sure it was sufficiently compressed and symmetrical to work.

But, in a certain sense, these prototypes are no different than, say,
running a parser against Wikipedia: you know it's one of the steps that
needs to be taken on the road to AGI; taking these steps doesn't "prove"
that the final atom bomb would actually work.

--linas


Re: Re[8]: [agi] Funding AGI research

2007-11-24 Thread Linas Vepstas
Hi,

On 20/11/2007, Dennis Gorelik <[EMAIL PROTECTED]> wrote:
>
> Benjamin,
>
> > That's a massive amount of work, but most AGI research and development
> > can be shared with narrow AI research and development.
>
> > There is plenty of overlap between AGI and narrow AI, but not as much
> > as you suggest...
>
> That's only because some narrow AI products are not there yet.
>
> Could you describe a piece of technology that simultaneously:
> - Is required for AGI.
> - Cannot be a required part of any useful narrow AI.


Oh please. That's like saying that the theory of aerodynamics is the same
for fast cars and for airplanes (it is), so let's build a fast car, and
we'll probably have an airplane come out of it as a side effect.

If you really want to build something, focus on building that thing. As
long as you focus on something else, you will fail to take the steps
needed to get to the objective you want.

To be more direct: common examples of "narrow AI" are cruise missiles, or
the DARPA challenge. We've put tens of millions into the DARPA challenge
(which I applaud), but the result is maybe an inch down the road to AGI.
Another narrow-AI example is data mining, and by now many of the Fortune
500 have invested at least tens, if not hundreds, of millions of dollars
into that .. yet we are hardly closer to AGI as a result (although this
business does bring in billions for high-end expensive computers from Sun,
HP and IBM, and so does encourage one component needed for AGI). But think
about it ... billions are being spent on narrow AI today, and how did that
help AGI, exactly?

--linas


Re: [agi] Funding AGI research

2007-11-24 Thread Linas Vepstas
On 24/11/2007, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>  Linas,
>
>  I'm not asking for much more than brief exposition of ideas in this
> forum that just begin to show some promise. I'm not demanding or
> expecting something fully worked through. The fact remains that I don't
> think I've heard any, in any form, that fit the criteria below - and you
> don't actually mention any.
>
>

What, a brief exposition of workable ideas that haven't yet been explored?
I dunno, I've read of various on the net, and I've discussed some on this
mailing list.

Let's not rehash old conversations. Clearly some people here like to shoot
down ideas. That doesn't mean that they're bad ideas. I've received emails
in private, from folks held up as "leading lights in AGI", who said things
like "you're on the right track" (and no, that wasn't Ben). So I think the
academic side of AGI has some mutual respect for one another, and they
think they have promising ideas (even if you don't), and they're not done
with their researches, and they haven't given up.

--linas
