RE: Re[8]: [agi] Funding AGI research

2007-11-29 Thread John G. Rose
> From: Dennis Gorelik [mailto:[EMAIL PROTECTED]]
>
> John,
>
>> Is building the compiler more complex than building
>> any application it can build?
>
> Note that the compiler doesn't build the application.
> The programmer does (using the compiler as a tool).


Very true. So then, is the programmer + compiler more complex than the AGI
ever will be? Or at some point does the AGI build and improve itself?

John

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=70653218-951955


Re: Re[8]: [agi] Funding AGI research

2007-11-28 Thread Bob Mottram
I don't think we yet know enough about how DNA works to be able to
call it a conglomerated mess, but you're probably right that the same
principle applies to any information system adapting over time.

Similarly the thinking of teenagers or young adults is sometimes quite
clear (almost cartoon-like) but as they get older all sorts of
exceptions and contradictions creep into the thought process.


On 28/11/2007, John G. Rose [EMAIL PROTECTED] wrote:
> A stretch of an analogy is human DNA. Most of it is a conglomerated mess
> (90%+?) but somehow the grand human design result comes out of it.



RE: Re[8]: [agi] Funding AGI research

2007-11-28 Thread John G. Rose
> From: Bob Mottram [mailto:[EMAIL PROTECTED]]
> I don't think we yet know enough about how DNA works to be able to
> call it a conglomerated mess, but you're probably right that the same
> principle applies to any information system adapting over time.
>
> Similarly the thinking of teenagers or young adults is sometimes quite
> clear (almost cartoon-like) but as they get older all sorts of
> exceptions and contradictions creep into the thought process.
 

The same thing happens in academia: there is a nice picture of how things
should work, but then reality is different. Software is just weird and has
unpredictable qualities unlike other forms of engineering. There are
situations where money is lavishly thrown at software over and over,
defying any sort of reasonableness. For example, the VC's friend's son has
this great idea, they call the software GaGa (making it sound like Google
on purpose; this happens all the time), they throw money at it and sell
the company, and the software winds up doing something totally different
from what was originally planned, or sometimes it just becomes vaporware.
Since much software is in many ways non-material and mutable, it is
treated accordingly.

Internally used and developed software within companies (many times the
very software that runs those companies) can take extremely bizarre twists
of fate...

You can say AGI software is special, and it is. If its purpose and goals
can be maintained well enough, as in specialized software such as weather
modeling software, it can stay on course. Yet AGI is closely associated
with narrow AI, so the likelihood of business needs interrupting is high.
Also, humans are building it, and we have special needs that ofttimes take
priority.

John



Re: Re[8]: [agi] Funding AGI research

2007-11-27 Thread Benjamin Goertzel
> My claim is that it's possible [and necessary] to split the massive amount
> of work that has to be done for AGI into smaller narrow-AI chunks in
> such a way that every narrow-AI chunk has its own business meaning
> and can pay for itself.

You have not addressed my claim, which has massive evidence in the
history of AI research to date, that narrow AI chunks with AGI compatibility
are generally much harder to build than narrow AI chunks intended purely for
standalone performance, and hence will very rarely be the best economic
choice if one's goal is to make a narrow-AI chunk serving some practical
application within (the usual) tight time and cost constraints.

-- Ben G



RE: Re[8]: [agi] Funding AGI research

2007-11-27 Thread John G. Rose
 
>> My claim is that it's possible [and necessary] to split the massive amount
>> of work that has to be done for AGI into smaller narrow-AI chunks in
>> such a way that every narrow-AI chunk has its own business meaning
>> and can pay for itself.
>
> You have not addressed my claim, which has massive evidence in the
> history of AI research to date, that narrow AI chunks with AGI
> compatibility are generally much harder to build than narrow AI chunks
> intended purely for standalone performance, and hence will very rarely be
> the best economic choice if one's goal is to make a narrow-AI chunk
> serving some practical application within (the usual) tight time and cost
> constraints.
 

I'd like to comment here. This all makes sense, BUT: software (for many
reasons, and after myself watching and working with software of many types
for many years, I'm still trying to figure it out) defies logic.
Especially internally developed applications within companies: those
applications that survive over the years don't follow the behaviors you'd
think they should. Many are conglomerated messes. The crystal-clear
software dies often. Why? Because it has to adapt to extreme conditions;
basically, business demands and interjections of business needs overpower
the forces of perfect engineering. And there are other forces: human,
material, etc. That's one of the reasons why I would say that somebody who
creates an AGI design and gets x million dollars of startup capital will
have a higher chance of failing than software that has to slug it out to
survive by becoming a conglomerated narrow-AI/AGI hybrid mess. It's just
weird, and this happens often, but not all the time of course.

A stretch of an analogy is human DNA. Most of it is a conglomerated mess
(90%+?) but somehow the grand human design result comes out of it.

John




Re: Re[8]: [agi] Funding AGI research

2007-11-24 Thread Linas Vepstas
Hi,

On 20/11/2007, Dennis Gorelik [EMAIL PROTECTED] wrote:

> Benjamin,
>
>>> That's a massive amount of work, but most AGI research and development
>>> can be shared with narrow AI research and development.
>
>> There is plenty of overlap btw AGI and narrow AI but not as much as you
>> suggest...
>
> That's only because some narrow AI products are not there yet.
>
> Could you describe a piece of technology that simultaneously:
> - Is required for AGI.
> - Cannot be a required part of any useful narrow AI.


Oh please. That's like saying that the theory of aerodynamics is the same
for fast cars and for airplanes (it is), so let's build a fast car, and
we'll probably have an airplane come out of it as a side-effect.

If you really want to build something, focus on building that thing. As
long as you focus on something else, you will fail to take the steps
needed to get to the objective you want.

To be more direct: a common example of narrow AI is cruise missiles, or
the DARPA challenge. We've put tens of millions into the DARPA challenge
(which I applaud), but the result is maybe an inch down the road to AGI.
Another narrow AI example is data mining; by now, many of the Fortune 500
have invested at least tens, if not hundreds, of millions of dollars into
that, yet we are hardly closer to AGI as a result (although this business
does bring in billions for high-end expensive computers from Sun, HP and
IBM, and so does encourage one component needed for AGI). But think about
it: billions are being spent on narrow AI today, and how did that help
AGI, exactly?

--linas


Re: Re[8]: [agi] Funding AGI research

2007-11-21 Thread William Pearson
On 21/11/2007, Dennis Gorelik [EMAIL PROTECTED] wrote:
> Benjamin,
>
>>> That's a massive amount of work, but most AGI research and development
>>> can be shared with narrow AI research and development.
>
>> There is plenty of overlap btw AGI and narrow AI but not as much as you
>> suggest...
>
> That's only because some narrow AI products are not there yet.
>
> Could you describe a piece of technology that simultaneously:
> - Is required for AGI.
> - Cannot be a required part of any useful narrow AI.

My theory of intelligence is something like this: intelligence requires
the changing of programmatic structures in an arbitrary fashion, so that
we can learn, and learn how to learn. This is because I see intelligence
as the means to solve the problem-solving problem: it does not solve one
problem, but changes and reconfigures itself to solve whatever problems it
faces, within its limited hardware/software and energy constraints.

This arbitrary change can result in the equivalent of bugs and viruses,
so there need to be ways for these to be removed and prevented from
spreading. That requires a way to distinguish good programs from bad, so
that the good programs are allowed to remove bugs from others, and the
bad programs are prevented from altering other programs. Solving this
problem is non-trivial and requires thinking about computer systems in a
different way than other weak-AI problems.

Narrow AI generally solves a single problem, so it does not need to
change so drastically, and so does not need these safeguards. It can just
concentrate on solving its problem.
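[Editor's note: Will's safeguard (only programs that have proven themselves
good may alter other programs) could be sketched, very loosely, as follows.
The `Program`/`Sandbox` names and the trust flag are hypothetical
illustrations, not anything proposed in the thread.]

```python
class Program:
    """A unit of self-modifiable behavior. `code` is just a string here,
    standing in for a real program body."""
    def __init__(self, name, code, trusted=False):
        self.name = name
        self.code = code
        self.trusted = trusted  # has this program proven itself "good"?


class Sandbox:
    """Holds a population of programs and enforces the safeguard:
    untrusted programs may not alter other programs."""
    def __init__(self):
        self.programs = {}

    def register(self, prog):
        self.programs[prog.name] = prog

    def promote(self, name, passed_tests):
        # A program earns (or loses) trust based on its evaluation.
        self.programs[name].trusted = passed_tests

    def modify(self, editor_name, target_name, new_code):
        # The guard condition: refuse modification by untrusted editors.
        editor = self.programs[editor_name]
        if not editor.trusted:
            return False
        self.programs[target_name].code = new_code
        return True
```

For example, an untrusted "fixer" program is refused until it is promoted,
after which its edits go through; a narrow-AI system that never rewrites its
own programs would not need this machinery at all.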

  Will Pearson



Re: Re[8]: [agi] Funding AGI research

2007-11-21 Thread Mike Tintner


William P: My theory of intelligence is something like this. Intelligence
requires the changing of programmatic-structures in an arbitrary
fashion, so that we can learn, and learn how to learn.


Well, you're getting v. close. But be careful, because you'll upset Ben and
Pei, not to mention cog sci.


The moment you make a mechanical mind arbitrary to any extent, it ceases 
to be deterministic. Tch tch. And the moment you make the application of 
programs arbitrary, well, they cease to be programs in any true sense. 
Shock, horror.


Perhaps the only way such a mind could function is if it had only a rough
idea rather than a precise set of programmed instructions for how to get
from A to Z and conduct any activity - a precis rather than a program of
what to do - and it would have to freely/arbitrarily combine steps and
sub-routes to see/learn what worked and reached the goal. As scientists do.
And technologists do. And computer programmers do in writing their
programs. And human beings do, period. Yes, that would require intelligence
in the full sense.


P.S. And, as you indicate, such a machine would only have a rough idea of 
how to *learn* as well as directly conduct an activity - it wouldn't have 
any preprogrammed set of instructions for learning and correcting mistakes, 
either.


'What then, I thought myself, if I [Robot Daneel Olivaw] were utterly
without laws as humans are? What if I could make no clear decision as to
what response to make to some given set of conditions? It would be
unbearable, and I do not willingly think of it.'

-- Isaac Asimov, Robots and Empire




Re: Re[8]: [agi] Funding AGI research

2007-11-21 Thread Matt Mahoney
--- Dennis Gorelik [EMAIL PROTECTED] wrote:
> Could you describe a piece of technology that simultaneously:
> - Is required for AGI.
> - Cannot be a required part of any useful narrow AI.

A one million CPU cluster.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Re[8]: [agi] Funding AGI research

2007-11-21 Thread Russell Wallace
On Nov 22, 2007 12:59 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
> --- Dennis Gorelik [EMAIL PROTECTED] wrote:
>> Could you describe a piece of technology that simultaneously:
>> - Is required for AGI.
>> - Cannot be a required part of any useful narrow AI.
>
> A one million CPU cluster.

Is a required part of Google, which is very useful narrow AI.

The main piece of technology I reckon is required to make more general
progress is a software framework, which would be useful for narrow AI
but is only essential if you want to go beyond that.



Re: Re[8]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel
> Could you describe a piece of technology that simultaneously:
> - Is required for AGI.
> - Cannot be a required part of any useful narrow AI.


The key to your statement is the word "required".

Nearly any AGI component can be used within a narrow AI, but the problem
is, it's usually a bunch easier to make narrow AIs using components that
don't have any AGI value...

> Another way to go -- use existing narrow AIs as prototypes when
> building AGI.


I don't really accept any narrow AI as a prototype for an AGI.

This is an example of what I meant when I said that what counts as a
prototype is theory-dependent, I suppose...

I think there is loads of evidence that narrow-AI prowess does not imply
AGI prowess, so a narrow AI can't be considered a prototype for an AGI...

-- Ben G
