Re: [agi] Finding analogies

2007-11-21 Thread Mike Tintner
Dennis: I just want to note that Google does exactly that: it finds analogies to your search queries in any context.

Er, really? Some examples?

If you ask Google to find *existing* analogies, yes. If you ask it for "cool as a ---", it may come up with various known analogies - cucumber, ice, etc. But the point of AGI and human intelligence is that they should/can seek and find *new* analogies. Will Google be able to come up with "as cool as cold steel on your penis"? I think not.


P.S. Almost totally O/T - why are there no women on this forum or, it would 
seem, in AGI? 





Re: Re[8]: [agi] Funding AGI research

2007-11-21 Thread William Pearson
On 21/11/2007, Dennis Gorelik [EMAIL PROTECTED] wrote:
 Benjamin,

  That's a massive amount of work, but most AGI research and development
  can be shared with narrow AI research and development.

  There is plenty of overlap between AGI and narrow AI, but not as much as
  you suggest...

 That's only because some narrow AI products are not there yet.

 Could you describe a piece of technology that simultaneously:
 - Is required for AGI.
 - Cannot be a required part of any useful narrow AI.

My theory of intelligence is something like this. Intelligence
requires the changing of programmatic structures in an arbitrary
fashion, so that we can learn, and learn how to learn. This is because
I see intelligence as the means to solve the problem-solving problem:
it does not solve one problem, but changes and reconfigures itself to
solve whatever problems it faces, within its limited hardware/software
and energy constraints.

This arbitrary change can result in the equivalent of bugs and
viruses, so there need to be ways for these to be removed and
prevented from spreading. That requires a way to distinguish
good programs from bad, so that the good programs are allowed to
remove bugs from others, and the bad programs are prevented from
altering other programs. Solving this problem is non-trivial and
requires thinking about computer systems in a different way from other
weak AI problems.
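
A minimal sketch of this scheme, in Python (the step set, scoring rule,
and good/bad split are invented for illustration, not a committed
design): programs are plain data, a score stands in for "good", and only
well-scoring programs may modify others.

import random

# Each "program" is a list of (op, operand) steps over an accumulator.
OPS = {
    "add": lambda acc, x: acc + x,
    "mul": lambda acc, x: acc * x,
}

def run(program, start=1):
    acc = start
    for op, x in program:
        acc = OPS[op](acc, x)
    return acc

def score(program, target=42):
    # Closeness to a fixed target stands in for "solving the problem
    # it currently faces"; higher is better.
    return -abs(run(program) - target)

def random_step():
    return (random.choice(list(OPS)), random.randint(1, 5))

population = [[random_step() for _ in range(4)] for _ in range(20)]

for generation in range(200):
    ranked = sorted(population, key=score, reverse=True)
    good, bad = ranked[:10], ranked[10:]
    # The safeguard: only well-scoring programs may alter others, a
    # crude stand-in for "good programs remove bugs from others, bad
    # programs are prevented from spreading changes".
    for victim in bad:
        editor = random.choice(good)
        victim[random.randrange(len(victim))] = random.choice(editor)

print("best score:", score(max(population, key=score)))

The point of the toy is only the gating step: arbitrary change, plus a
mechanism for deciding who may change whom.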

Narrow AI generally solves a single problem, so it does not need to
change so drastically, and so does not need the safeguards. It can
just concentrate on solving its problem.

  Will Pearson



Re: Re[8]: [agi] Funding AGI research

2007-11-21 Thread Mike Tintner


William P: My theory of intelligence is something like this. Intelligence requires the changing of programmatic structures in an arbitrary fashion, so that we can learn, and learn how to learn.


Well, you're getting v. close. But be careful, because you'll upset Ben and 
Pei, not to mention cog sci.


The moment you make a mechanical mind arbitrary to any extent, it ceases 
to be deterministic. Tch tch. And the moment you make the application of 
programs arbitrary, well, they cease to be programs in any true sense. 
Shock, horror.


Perhaps the only way such a mind could function is if it had only a rough 
idea, rather than a precise set of programmed instructions, for how to get 
from A to Z and conduct any activity - a précis rather than a program of 
what to do - and would have to freely/arbitrarily combine steps and 
sub-routes to see/learn what worked and reached the goal. As scientists do. 
And technologists do. And computer programmers, in writing their programs, do. 
And human beings do, period. Yes, that would require intelligence in the 
full sense.


P.S. And, as you indicate, such a machine would only have a rough idea of 
how to *learn* as well as directly conduct an activity - it wouldn't have 
any preprogrammed set of instructions for learning and correcting mistakes, 
either.
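
A minimal sketch of this précis idea, in Python (the step map, goal, and
trial count are invented for illustration): the machine holds only a
rough map of which steps can follow which, arbitrarily chains them, and
keeps the routes that actually reach the goal.

import random

GOAL = "Z"
# A rough "map" of which steps can follow which: a precis, not a route.
STEPS = {
    "A": ["B", "C"],
    "B": ["D", "Z"],
    "C": ["D"],
    "D": ["Z"],
}

def try_route(start="A", max_len=6):
    # Arbitrarily chain steps from the precis, hoping to reach the goal.
    route = [start]
    while route[-1] != GOAL and len(route) < max_len:
        options = STEPS.get(route[-1], [])
        if not options:
            break
        route.append(random.choice(options))
    return route

# Learn by trial: remember only the routes that actually reached Z.
successes = [r for r in (try_route() for _ in range(100)) if r[-1] == GOAL]
print("working routes found:", {tuple(r) for r in successes})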


'What then, I thought myself, if I [Robot Daneel Olivaw] were utterly 
without laws as humans are? What if I could make no clear decision as to 
what response to make to some given set of conditions? It would be 
unbearable, and I do not willingly think of it.'

Isaac Asimov, Robots and Empire




Re: [agi] Funding AGI research

2007-11-21 Thread Richard Loosemore

Dennis Gorelik wrote:

Richard,


specific technical analysis of the AGI problem that I have made
indicates that nothing like a 'prototype' is even possible until
after a massive amount of up-front effort. 


I probably misunderstood you the first time.
I thought you meant that this massive amount of up-front effort must
be made in a single project.
But you probably don't mean that, right?

I agree that there is a massive amount of up-front effort required for
delivering AGI.
But this amount can be split into separate pieces.
All these pieces can be done in separate projects (weak AI projects).
Every such project can have its own business sense and would be able
to pay for itself. A good example of such a weak AI project would be
Google.

That's why I claim that huge up-front investment can be avoided, even
though there is a massive amount of up-front effort.

Do you agree?


Not quite.  I had something very specific in mind when I said that: 
in a complex systems AGI project, there is a need to do a massive, 
parallel search of a space of algorithms.  This is what you might call 
a data collection phase, and that data collection has to happen 
*before* a prototype can be built.


It could be done by many groups working in parallel, but that would 
still have to be coordinated (not separate companies all trying to 
develop separate projects).


So, alas, it really would need massive effort in one place.
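
A minimal sketch of such a coordinated data-collection phase (the
parameter space and quality measure below are placeholder assumptions,
not details from the actual plan): one coordinator owns the space of
candidate algorithm configurations, workers evaluate them in parallel,
and every result is gathered in one place.

from itertools import product
from multiprocessing import Pool

def evaluate(params):
    # Stand-in for running one candidate algorithm configuration and
    # measuring its behaviour; this "quality landscape" is made up.
    learning_rate, depth = params
    quality = -(learning_rate - 0.3) ** 2 - 0.01 * depth
    return params, quality

if __name__ == "__main__":
    # One coordinator owns the search space and the collected data --
    # "massive effort in one place" -- while workers explore in parallel.
    space = list(product([0.1, 0.2, 0.3, 0.4], range(1, 6)))
    with Pool() as pool:
        results = dict(pool.map(evaluate, space))
    best = max(results, key=results.get)
    print("best configuration found so far:", best)

The toy just finds the best configuration in a small swept space; the
real phase would sweep a vastly larger space, but the coordination
shape is the same.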




Billions of dollars would be exactly what I need:  a large bank of
parallelized exploration machines, and large numbers of research
assistants to undertake specific tasks.


That's what you need, but would that guarantee AGI delivery?


Nobody can ever guarantee such a thing.  But on the other hand, I see in 
my plan a more systematic, structured and predictable route to AGI 
than any other approach that I am aware of.  I think it is as near to 
guaranteed as it is possible to get, because it reduces the unknowns to 
a set of structured attacks.




Richard Loosemore





Re: Re[8]: [agi] Funding AGI research

2007-11-21 Thread Matt Mahoney
--- Dennis Gorelik [EMAIL PROTECTED] wrote:
 Could you describe a piece of technology that simultaneously:
 - Is required for AGI.
 - Cannot be a required part of any useful narrow AI.

A one million CPU cluster.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Re[8]: [agi] Funding AGI research

2007-11-21 Thread Russell Wallace
On Nov 22, 2007 12:59 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Dennis Gorelik [EMAIL PROTECTED] wrote:
  Could you describe a piece of technology that simultaneously:
  - Is required for AGI.
  - Cannot be a required part of any useful narrow AI.

 A one million CPU cluster.

Is a required part of Google, which is a very useful narrow AI.

The main piece of technology I reckon is required to make more general
progress is a software framework, which would be useful for narrow AI
but is only essential if you want to go beyond that.
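
A minimal sketch of the kind of framework that might be meant (the
dict-in/dict-out contract and all names here are assumptions, not a
specified design): narrow-AI modules register under one common
interface, and a more general system could recombine them at run time.

from typing import Callable, Dict

REGISTRY: Dict[str, Callable[[dict], dict]] = {}

def module(name: str):
    # Register a processing module under a common dict-in/dict-out contract.
    def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        REGISTRY[name] = fn
        return fn
    return wrap

@module("tokenize")
def tokenize(msg: dict) -> dict:
    return {**msg, "tokens": msg["text"].split()}

@module("count")
def count(msg: dict) -> dict:
    return {**msg, "n_tokens": len(msg["tokens"])}

def pipeline(names, msg):
    # A narrow AI ships one fixed pipeline; a more general system could
    # recombine the registered modules at run time.
    for name in names:
        msg = REGISTRY[name](msg)
    return msg

print(pipeline(["tokenize", "count"], {"text": "a software framework"}))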



Re: Re[4]: [agi] Danger of getting what we want [from AGI]

2007-11-21 Thread Matt Mahoney

--- Dennis Gorelik [EMAIL PROTECTED] wrote:
 As for the analogies, my point is that AGI will quickly evolve to
 invisibility from a human-level intelligence.
 
 I think you underestimate how quickly performance deteriorates with the
 growth of complexity.
 AGI systems would have lots of performance problems in spite of fast
 hardware.

No, I was not aware of that.  What is the relationship?

 Unmodified humans, on the other hand, would be considerably more
 advanced than now, just because all AGI-civilization technologies will
 be available for humans as well.
 So the gap won't really be that big.
 
 To visualize the potential differences, try comparing the income of
 humans with IQ 100 and humans with IQ 150.
 The difference is not really that big.

Try to visualize an Earth turned into computronium with an IQ of 10^38.  The
problem is that we can't.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Re[4]: [agi] Funding AGI research

2007-11-21 Thread Mike Dougherty
On Nov 20, 2007 8:27 PM, Dennis Gorelik [EMAIL PROTECTED] wrote:
 Start with weak AI programs. That would push the technology envelope
 further and further, and in the end AGI will be possible.

Yeah - because weak AI is so simple.  Why not just make some
run-of-the-mill narrow AI with a single goal of "Build AGI"?  You can
just relax while it does all the work.

It's turtles all the way down
