Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 5:26 PM, Ben Goertzel wrote:
The problem is that investors are generally pretty unwilling to eat   
perceived
technology risk.  Exceptions arise all the time, and AGI has not yet  
been one.



There have been exceptions, just ill-advised ones.  :-)

But yes, most investors are actually looking for a Killer Demo(tm) or  
unimpeachable credibility, the latter not to be construed as referring  
to anyone with an academic AI background in this particular case.



Absent a Killer Demo, my observation is that the set of people with  
"unimpeachable credibility" in this case and the set with the genuine  
technical ability to plausibly produce results very rarely intersect  
for these purposes.  No one on the investment side is  
really looking for an AI academic of any type per se when they  
consider investing in these kinds of things, but there are few others  
in the field (discounting cranks).  For better or worse, you need to  
be a J. Hawkins or similar.  Such is the world we live in.


Cheers,

J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 6:55 PM, Ben Goertzel wrote:

I wonder why some people think there is "one true path" to AGI ... I
strongly suspect there are many...



Like I stated at the beginning, *most* models are at least  
theoretically valid.  Of course, tractable engineering of said models  
is another issue. :-)  Engineering tractability in the context of  
computer science and software engineering is almost purely an applied  
mathematics effort to the extent there is any "theory" to it, and  
science has a very limited capacity to inform it.


If someone could describe, specifically, how science is going to  
inform this process given the existing body of theoretical work, I  
would have no problem with the notion.  My objections were pragmatic.


Cheers,

J. Andrew Rogers





Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Apr 6, 2008, at 4:46 PM, Richard Loosemore wrote:

J. Andrew Rogers wrote:
The fact that the vast majority of AGI theory is pulled out of 
/dev/ass notwithstanding, your above characterization would appear to 
reflect your limitations which you have chosen to project onto the 
broader field of AGI research.  Just because most AI researchers are 
misguided fools and you do not fully understand all the relevant 
theory does not imply that this is a universal (even if it were).


Ad hominem.  Shameful.



Ad hominem?  Well, of sorts I suppose, but in this case it is the 
substance of the argument so it is a reasonable device.  I think I have 
met more AI cranks with hare-brained pet obsessions with respect to the 
topic or academics that are beating a horse that died thirty years ago 
than AI researchers that are actually keeping current with the subject 
matter.  Pointing out the embarrassing foolishness of the vast number of 
those that claim to be "AI researchers" and how it colors the 
credibility of the entire field is germane to the discussion.


As for you specifically, assertions like "Artificial Intelligence 
research does not have a credible science behind it" in the absence of 
substantive support (now or in the past) can only lead me to believe 
that you either are ignorant of relevant literature (possible) or you do 
not understand all the relevant literature and simply assume it is not 
important.   As far as I have ever been able to tell, theoretical 
psychology re-heats a very old idea while essentially ignoring or 
dismissing out of hand more recent literature that could provide 
considerable context when (re-)evaluating the notion.  This is a fine 
example of part of the problem we are talking about.




AGI *is* mathematics?



Yes, applied mathematics.  Is there some other kind of non-computational 
AI?  The mathematical nature of the problem does not disappear when you 
wrap it in fuzzy abstractions; it just gets, well, fuzzy.  At best the 
science can inform your mathematical model, but in this case the 
relevant mathematics is ahead of the science for most purposes and the 
relevant science is largely working out the specific badly implemented 
wetware mapping to said mathematics.



I'm sorry, but if you can make a statement such as this, and if you 
are already starting to reply to points of debate by resorting to ad 
hominems, then it would be a waste of my time to engage.



Probably a waste of my time as well if you think this is primarily a 
science problem in the absence of a discernible reason to characterize 
it as such.



I will just note that if this point of view is at all widespread - if 
there really are large numbers of people who agree that "AGI is 
mathematics, not science"  -  then this is a perfect illustration of 
just why no progress is being made in the field.



Assertions do not manufacture fact.

J. Andrew Rogers


Let's come to the point then.

You have taken the view that when I make a statement like "Artificial 
Intelligence research does not have a credible science behind it" I am 
doing so because I am purely ignorant of the science that is actually 
obvious to anyone who has been keeping up with the field.


Putting aside the fact that this is (as I said) quite insulting at a 
personal level, what exactly is the "science" behind artificial 
intelligence research?


Science is the study of something.  It involves building theoretical 
models of the phenomena under study, then comparing the predictions 
from those models with the phenomena.  It should also make 
new, non-obvious predictions that can be confirmed, to demonstrate the 
effectiveness of the theories.


What, in this case, was studied?  What theories?  What confirmations? 
And then, in what ways was this science applied to the engineering 
endeavor that is called Artificial Intelligence?


You seem to find the existence of this science so obvious that the very 
obviousness justifies you in calling someone ignorant for questioning 
it:  please outline this obvious thing.





Richard Loosemore



Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

Ben Goertzel wrote:

Funny dispute ... "is AGI about mathematics or science"

I would guess there are some approaches to AGI that are only minimally
mathematical in their design concepts (though of course math could be
used to explain their behavior)

Then there are some approaches, like Novamente, that mix mathematics
with less rigorous ideas in an integrative design...

And then there are more purely mathematical approaches -- I haven't
seen any that are well enough fleshed out to constitute pragmatic AGI
designs... but I can't deny the possibility

I wonder why some people think there is "one true path" to AGI ... I
strongly suspect there are many...


Actually, the discussion had nothing to do with the rather bizarre 
interpretation you put on it above.




Richard Loosemore






Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
Funny dispute ... "is AGI about mathematics or science"

I would guess there are some approaches to AGI that are only minimally
mathematical in their design concepts (though of course math could be
used to explain their behavior)

Then there are some approaches, like Novamente, that mix mathematics
with less rigorous ideas in an integrative design...

And then there are more purely mathematical approaches -- I haven't
seen any that are well enough fleshed out to constitute pragmatic AGI
designs... but I can't deny the possibility

I wonder why some people think there is "one true path" to AGI ... I
strongly suspect there are many...

-- Ben


On Sun, Apr 6, 2008 at 9:16 PM, J. Andrew Rogers
<[EMAIL PROTECTED]> wrote:
>
>  On Apr 6, 2008, at 4:46 PM, Richard Loosemore wrote:
>
>
> > J. Andrew Rogers wrote:
> >
> > > The fact that the vast majority of AGI theory is pulled out of /dev/ass
> notwithstanding, your above characterization would appear to reflect your
> limitations which you have chosen to project onto the broader field of AGI
> research.  Just because most AI researchers are misguided fools and you do
> not fully understand all the relevant theory does not imply that this is a
> universal (even if it were).
> > >
> >
> > Ad hominem.  Shameful.
> >
>
>
>  Ad hominem?  Well, of sorts I suppose, but in this case it is the substance
> of the argument so it is a reasonable device.  I think I have met more AI
> cranks with hare-brained pet obsessions with respect to the topic or
> academics that are beating a horse that died thirty years ago than AI
> researchers that are actually keeping current with the subject matter.
> Pointing out the embarrassing foolishness of the vast number of those that
> claim to be "AI researchers" and how it colors the credibility of the entire
> field is germane to the discussion.
>
>  As for you specifically, assertions like "Artificial Intelligence research
> does not have a credible science behind it" in the absence of substantive
> support (now or in the past) can only lead me to believe that you either are
> ignorant of relevant literature (possible) or you do not understand all the
> relevant literature and simply assume it is not important.   As far as I
> have ever been able to tell, theoretical psychology re-heats a very old idea
> while essentially ignoring or dismissing out of hand more recent literature
> that could provide considerable context when (re-)evaluating the notion.
> This is a fine example of part of the problem we are talking about.
>
>
>
> > AGI *is* mathematics?
> >
>
>
>  Yes, applied mathematics.  Is there some other kind of non-computational
> AI?  The mathematical nature of the problem does not disappear when you wrap
> it in fuzzy abstractions; it just gets, well, fuzzy.  At best the science can
> inform your mathematical model, but in this case the relevant mathematics is
> ahead of the science for most purposes and the relevant science is largely
> working out the specific badly implemented wetware mapping to said
> mathematics.
>
>
>
>
> > I'm sorry, but if you can make a statement such as this, and if you are
> already starting to reply to points of debate by resorting to ad hominems,
> then it would be a waste of my time to engage.
> >
>
>
>  Probably a waste of my time as well if you think this is primarily a
> science problem in the absence of a discernible reason to characterize it as
> such.
>
>
>
>
> > I will just note that if this point of view is at all widespread - if
> there really are large numbers of people who agree that "AGI is mathematics,
> not science"  -  then this is a perfect illustration of just why no progress
> is being made in the field.
> >
>
>
>  Assertions do not manufacture fact.
>
>
>  J. Andrew Rogers
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 4:46 PM, Richard Loosemore wrote:

J. Andrew Rogers wrote:
The fact that the vast majority of AGI theory is pulled out of /dev/ 
ass notwithstanding, your above characterization would appear to  
reflect your limitations which you have chosen to project onto the  
broader field of AGI research.  Just because most AI researchers  
are misguided fools and you do not fully understand all the  
relevant theory does not imply that this is a universal (even if it  
were).


Ad hominem.  Shameful.



Ad hominem?  Well, of sorts I suppose, but in this case it is the  
substance of the argument so it is a reasonable device.  I think I  
have met more AI cranks with hare-brained pet obsessions with respect  
to the topic or academics that are beating a horse that died thirty  
years ago than AI researchers that are actually keeping current with  
the subject matter.  Pointing out the embarrassing foolishness of the  
vast number of those that claim to be "AI researchers" and how it  
colors the credibility of the entire field is germane to the discussion.


As for you specifically, assertions like "Artificial Intelligence  
research does not have a credible science behind it" in the absence of  
substantive support (now or in the past) can only lead me to believe  
that you either are ignorant of relevant literature (possible) or you  
do not understand all the relevant literature and simply assume it is  
not important.   As far as I have ever been able to tell, theoretical  
psychology re-heats a very old idea while essentially ignoring or  
dismissing out of hand more recent literature that could provide  
considerable context when (re-)evaluating the notion.  This is a fine  
example of part of the problem we are talking about.




AGI *is* mathematics?



Yes, applied mathematics.  Is there some other kind of non- 
computational AI?  The mathematical nature of the problem does not  
disappear when you wrap it in fuzzy abstractions; it just gets, well,  
fuzzy.  At best the science can inform your mathematical model, but in  
this case the relevant mathematics is ahead of the science for most  
purposes and the relevant science is largely working out the specific  
badly implemented wetware mapping to said mathematics.



I'm sorry, but if you can make a statement such as this, and if you  
are already starting to reply to points of debate by resorting to ad  
hominems, then it would be a waste of my time to engage.



Probably a waste of my time as well if you think this is primarily a  
science problem in the absence of a discernible reason to characterize  
it as such.



I will just note that if this point of view is at all widespread -  
if there really are large numbers of people who agree that "AGI is  
mathematics, not science"  -  then this is a perfect illustration of  
just why no progress is being made in the field.



Assertions do not manufacture fact.

J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
On Sun, Apr 6, 2008 at 4:42 PM, Derek Zahn <[EMAIL PROTECTED]> wrote:
>
>
>  I would think an investor would want a believable specific answer to the
> following question:
>
>  "When and how will I get my money back?"
>
>  It can be uncertain (risk is part of the game), but you can't just wave
> your hands around on that point.

This is not the problem ... regarding Novamente, we have an extremely
specific business plan and details regarding how we would provide return
on investment.

The problem is that investors are generally pretty unwilling to eat  perceived
technology risk.  Exceptions arise all the time, and AGI has not yet been one.

It is an illusion that VC or angel investors are fond of risk ...
actually they are
quite risk-averse in nearly all cases...

-- Ben G



Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Apr 6, 2008, at 11:58 AM, Richard Loosemore wrote:
Artificial Intelligence research does not have a credible science 
behind it.  There is no clear definition of what intelligence is, 
there is only the living example of the human mind that tells us that 
some things are "intelligent".



The fact that the vast majority of AGI theory is pulled out of /dev/ass 
notwithstanding, your above characterization would appear to reflect 
your limitations which you have chosen to project onto the broader field 
of AGI research.  Just because most AI researchers are misguided fools 
and you do not fully understand all the relevant theory does not imply 
that this is a universal (even if it were).


Ad hominem.  Shameful.


This is not about mathematical proof, it is about having a credible, 
accepted framework that allows us to say that we have already come to 
an agreement that intelligence is X, and so, starting from that 
position we are able to do some engineering to build a system that 
satisfies the criteria inherent in X, so we can build an intelligence.



I do not need anyone's "agreement" to prove that system Y will have 
property X, nor do I have to accommodate pet theories to do so.  AGI is 
mathematics, not science.


AGI *is* mathematics?

Oh dear.

I'm sorry, but if you can make a statement such as this, and if you are 
already starting to reply to points of debate by resorting to ad 
hominems, then it would be a waste of my time to engage.


I will just note that if this point of view is at all widespread - if 
there really are large numbers of people who agree that "AGI is 
mathematics, not science"  -  then this is a perfect illustration of 
just why no progress is being made in the field.



Richard Loosemore


Plenty of people can agree on what X is and 
are satisfied with the rigor of whatever derivations were required.  
There are even multiple X out there depending on the criteria you are 
looking to satisfy -- the label of "AI" is immaterial.


What seems to have escaped you is that there is nothing about an 
agreement on X that prescribes a real-world engineering design.  We have 
many examples of tightly defined Xs in theory that took many decades of 
R&D to reduce to practice or which in some cases have never been reduced 
to real-world practice even though we can very strictly characterize 
them in the mathematical abstract.  There are many AI researchers who 
could be accurately described as having no rigorous framework or 
foundation for their implementation work, but conflating this group with 
those stuck solving the implementation theory problems of a 
well-specified X is a category error.


There are two unrelated difficult problems in AGI: choosing a rigorous X 
with satisfactory theoretical properties and designing a real-world 
system implementation that expresses X with satisfactory properties.  
There was a time when most credible AGI research was stuck working on 
the former, but today an argument could be made that most credible AGI 
research is stuck working on the latter.  I would question the 
credibility of opinions offered by people who cannot discern the 
difference.



And in case you are tempted to do what (e.g.) Russell and Norvig do in 
their textbook...



I'm not interested in lame classical AI, so this is essentially a 
strawman.  To the extent I am personally in a "theory camp", I have been 
in the broader algorithmic information theory camp since before it was 
on anyone's radar.



It is not that these investors understand the abstract ideas I just 
described, it is that they have a gut feel for the rate of progress 
and the signs of progress and the type of talk that they should be 
encountering if AGI had mature science behind it.  Instead, what they 
get is a feeling from AGI researchers that each one is doing the 
following:


1)  Resorting to a bottom line that amounts to "I have a really good 
personal feeling that my project really will get there", and


2)  Examples of progress that look like an attempt to dress a doughnut 
up as a wedding cake.



Sure, but what does this have to do with the topic at hand?  The problem 
is that investors lack any ability to discern a doughnut from a wedding 
cake.


J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread Rolf Nelson
On Sun, Apr 6, 2008 at 4:42 PM, Derek Zahn <[EMAIL PROTECTED]> wrote:
> As to why sympathetic rich people are
> apparently not willing to toss this consideration aside, it doesn't make
> much sense to me unless they simply don't think specific approaches are
> feasible -- although there's also a disconnect between sympathies and
> checkbooks, which is why we have cliche phrases like "put your money where
> your mouth is" and "talk is cheap".

Sympathetic rich people often want to keep their money for the same
reasons that sympathetic poor people want to keep their money, and
sympathetic G7 middle-class people (who are rich compared with the
median person in the world, and are filthy rich compared with the
average person who's lived throughout history) want to keep their
money. There's almost always someone richer and more successful than
you who you can use as an excuse to shirk, if you're the shirking
type.

As to why many people prefer saving whales to fighting malaria, and
fighting malaria to building an FAI, well, that's more complicated,
and any answer I give would be long and would almost certainly be
wrong in some minor detail.

-Rolf



Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 11:58 AM, Richard Loosemore wrote:
Artificial Intelligence research does not have a credible science  
behind it.  There is no clear definition of what intelligence is,  
there is only the living example of the human mind that tells us  
that some things are "intelligent".



The fact that the vast majority of AGI theory is pulled out of /dev/ 
ass notwithstanding, your above characterization would appear to  
reflect your limitations which you have chosen to project onto the  
broader field of AGI research.  Just because most AI researchers are  
misguided fools and you do not fully understand all the relevant  
theory does not imply that this is a universal (even if it were).



This is not about mathematical proof, it is about having a credible,  
accepted framework that allows us to say that we have already come  
to an agreement that intelligence is X, and so, starting from that  
position we are able to do some engineering to build a system that  
satisfies the criteria inherent in X, so we can build an intelligence.



I do not need anyone's "agreement" to prove that system Y will have  
property X, nor do I have to accommodate pet theories to do so.  AGI  
is mathematics, not science.  Plenty of people can agree on what X is  
and are satisfied with the rigor of whatever derivations were  
required.  There are even multiple X out there depending on the  
criteria you are looking to satisfy -- the label of "AI" is immaterial.


What seems to have escaped you is that there is nothing about an  
agreement on X that prescribes a real-world engineering design.  We  
have many examples of tightly defined Xs in theory that took many  
decades of R&D to reduce to practice or which in some cases have never  
been reduced to real-world practice even though we can very strictly  
characterize them in the mathematical abstract.  There are many AI  
researchers who could be accurately described as having no rigorous  
framework or foundation for their implementation work, but conflating  
this group with those stuck solving the implementation theory problems  
of a well-specified X is a category error.


There are two unrelated difficult problems in AGI: choosing a rigorous  
X with satisfactory theoretical properties and designing a real-world  
system implementation that expresses X with satisfactory properties.   
There was a time when most credible AGI research was stuck working on  
the former, but today an argument could be made that most credible AGI  
research is stuck working on the latter.  I would question the  
credibility of opinions offered by people who cannot discern the  
difference.



And in case you are tempted to do what (e.g.) Russell and Norvig do  
in their textbook...



I'm not interested in lame classical AI, so this is essentially a  
strawman.  To the extent I am personally in a "theory camp", I have  
been in the broader algorithmic information theory camp since before  
it was on anyone's radar.



It is not that these investors understand the abstract ideas I just  
described, it is that they have a gut feel for the rate of progress  
and the signs of progress and the type of talk that they should be  
encountering if AGI had mature science behind it.  Instead, what  
they get is a feeling from AGI researchers that each one is doing  
the following:


1)  Resorting to a bottom line that amounts to "I have a really good  
personal feeling that my project really will get there", and


2)  Examples of progress that look like an attempt to dress a  
doughnut up as a wedding cake.



Sure, but what does this have to do with the topic at hand?  The  
problem is that investors lack any ability to discern a doughnut from  
a wedding cake.


J. Andrew Rogers



RE: [singularity] Vista/AGI

2008-04-06 Thread Derek Zahn
 
I would think an investor would want a believable specific answer to the 
following question:
 
"When and how will I get my money back?"
 
It can be uncertain (risk is part of the game), but you can't just wave your 
hands around on that point.  As to why sympathetic rich people are apparently 
not willing to toss this consideration aside, it doesn't make much sense to me 
unless they simply don't think specific approaches are feasible -- although 
there's also a disconnect between sympathies and checkbooks, which is why we 
have cliche phrases like "put your money where your mouth is" and "talk is 
cheap".
 



Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Apr 6, 2008, at 8:55 AM, Richard Loosemore wrote:
What could be "compelling" about a project? (Novamente or any other). 
Artificial Intelligence is not a field that rests on a firm 
theoretical basis, because there is no science that says "this design 
should produce an intelligent machine because intelligence is KNOWN to 
be x and y and z, and this design unambiguously will produce something 
that satisfies x and y and z".


Every single AGI design in existence is a Suck It And See design.  We 
will know if the design is correct if it is built and it works.  
Before that, the best that any outside investor can do is use their 
gut instinct to decide whether they think that it will work.



Even if every single AGI design in existence is fundamentally broken 
(and I would argue that a fair amount of AGI design is theoretically 
correct and merely unavoidably intractable), this is a false 
characterization.  And at a minimum, it should be "no mathematics" 
rather than "no science".


Mathematical proof of validity of a new technology is largely 
superfluous with respect to whether or not a venture gets funded.  
Investors are not mathematicians, at least not in the sense that 
mathematical certainty of the correctness of the model would be 
compelling.  If they trust the person enough to invest in them, they 
will generally trust that the esoteric mathematics behind the venture 
are correct as well.  No one tries to actually understand the 
mathematics even though they will give them a cursory glance -- that 
is your job.



Having had to sell breakthroughs in theoretical computer science before 
(unrelated to AGI), I would make the observation that investors in 
speculative technology do not really put much weight on what you "know" 
about the technology.  After all, who are they going to ask if you are 
the presumptive leading authority in that field? They will verify that 
the current limitations you claim to be addressing exist and will want 
concise qualitative answers as to how these are being addressed that 
comport with their model of reality, but no one is going to dig through 
the mathematics and derive the result for themselves.  Or at least, I am 
not familiar with cases that worked differently than this.  The real 
problem is that most AGI designers cannot answer these basic questions 
in a satisfactory manner, which may or may not reflect what they "know".


You are addressing (interesting and valid) issues that lie well above 
the level at which I was making my argument, so unfortunately they miss 
the point.


I was arguing that whenever a project claims to be doing "engineering" 
there is always a background reference that is some kind of science or 
mathematics or prescription that justifies what the project is trying to 
achieve:


1)  Want to build a system to manage the baggage handling in a large 
airport?  Background prescription = a set of requirements that the flow 
of baggage should satisfy.


2)  Want to build an aircraft wing? Background science =  the physics of 
air flow first, along with specific criteria that must be satisfied.


3)  Want to send people on an optimal trip around a set of cities? 
Background mathematics = a precise statement of the travelling salesman 
problem.
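
For concreteness, the background mathematics in item 3 admits a compact 
formal statement (a minimal sketch, assuming symmetric pairwise 
distances): given cities $\{1, \dots, n\}$ and distances $d_{ij}$, find 
a permutation $\pi$ of the cities minimizing

    $\sum_{k=1}^{n-1} d_{\pi(k)\,\pi(k+1)} + d_{\pi(n)\,\pi(1)}$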


No matter how many other cases you care to list, there is always some 
credible science or mathematics or common sense prescription lying at 
the back of the engineering project.


Here, for contrast, is an example of an engineering project behind which 
there was NO credible science or mathematics or prescription:


4*)  Find an alchemical process that will lead to the philosophers' stone.

Alchemists knew what they wanted - kind of - but there was no credible 
science behind what they did.  They were just hacking.


Artificial Intelligence research does not have a credible science behind 
it.  There is no clear definition of what intelligence is, there is only 
the living example of the human mind that tells us that some things are 
"intelligent".


This is not about mathematical proof, it is about having a credible, 
accepted framework that allows us to say that we have already come to an 
agreement that intelligence is X, and so, starting from that position we 
are able to do some engineering to build a system that satisfies the 
criteria inherent in X, so we can build an intelligence.


Instead what we have are AI researchers who have gut instincts about 
what intelligence is, and from that gut instinct they proceed to hack.


They are, in short, alchemists.

And in case you are tempted to do what (e.g.) Russell and Norvig do in 
their textbook, and claim that the Rational Agents framework plus 
logical reasoning is the scientific framework on which an idealized 
intelligent system can be designed, I should point out that this concept 
is completely rejected by most cognitive psychologists:  they point out 
that the "intelligence" to be found in the only example of 

Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 8:55 AM, Richard Loosemore wrote:
What could be "compelling" about a project? (Novamente or any  
other). Artificial Intelligence is not a field that rests on a firm  
theoretical basis, because there is no science that says "this  
design should produce an intelligent machine because intelligence is  
KNOWN to be x and y and z, and this design unambiguously will  
produce something that satisfies x and y and z".


Every single AGI design in existence is a Suck It And See design.   
We will know if the design is correct if it is built and it works.   
Before that, the best that any outside investor can do is use their  
gut instinct to decide whether they think that it will work.



Even if every single AGI design in existence is fundamentally broken  
(and I would argue that a fair amount of AGI design is theoretically  
correct and merely unavoidably intractable), this is a false  
characterization.  And at a minimum, it should be "no mathematics"  
rather than "no science".


Mathematical proof of validity of a new technology is largely  
superfluous with respect to whether or not a venture gets funded.   
Investors are not mathematicians, at least not in the sense that  
mathematical certainty of the correctness of the model would be  
compelling.  If they trust the person enough to invest in them, they  
will generally trust that the esoteric mathematics behind the venture  
are correct as well.  No one tries to actually understand the  
mathematics even though they will give them a cursory glance --  
that is your job.



Having had to sell breakthroughs in theoretical computer science  
before (unrelated to AGI), I would make the observation that investors  
in speculative technology do not really put much weight on what you  
"know" about the technology.  After all, who are they going to ask if  
you are the presumptive leading authority in that field? They will  
verify that the current limitations you claim to be addressing exist  
and will want concise qualitative answers as to how these are being  
addressed that comport with their model of reality, but no one is  
going to dig through the mathematics and derive the result for  
themselves.  Or at least, I am not familiar with cases that worked  
differently than this.  The real problem is that most AGI designers  
cannot answer these basic questions in a satisfactory manner, which  
may or may not reflect what they "know".



J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 9:38 AM, Ben Goertzel wrote:
That's surely part of it ... but investors have put big $$ into much  
LESS

mature projects in areas such as nanotech and quantum computing.



This is because nanotech and quantum computing can be readily and  
easily packaged as straightforward physical machinery technology,  
which a lot of people can readily conceptualize even if they do not  
actually understand it.  AGI is not a physical touchable technology in  
the same sense (or even software sense), which is further aggravated  
by the many irrational memes of woo-ness that surround the idea of  
consciousness, intelligence, spirituality that the vast majority of  
investors uncritically subscribe to.  Indeed, many view the poor track  
record of AI as validation of their nutty beliefs. There have been  
some technically ridiculous AI projects that got substantial funding  
because they appealed to the biases of the investors.


If AGI was merely a function of hardware design, I suspect it would be  
much easier to sell because many investors would much more easily  
delude themselves into thinking they understand it, or at least  
conceptualize it in a way that comports with reality.  Over the years  
I have slowly come to believe that the long track record of failure in  
AI is a minor contributor to the relative dearth of funding for bold  
AI ventures -- the problem has never been a lack of people willing to  
take a risk per se.


J. Andrew Rogers






Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 8:46 AM, Ben Goertzel wrote:

Part of the issue is that the concepts underlying NM are both
complex and subtle, not lending themselves all that well to
"elevator pitch" treatment ... or even "PPT summary" treatment
(though there are summaries in both PPT and conference-paper
form).

If you think that's a mark against NM, consider this: What's your
elevator-pitch description of how the human brain works?  How
about the human body?  Businesspeople favor the simplistic, yet
the engineering of complex cognitive systems doesn't match well
with this bias



Yes, and this happens far more often than just with AGI.  Many venture  
concepts, particularly speculative technology ventures, are extremely  
difficult to package into an elevator pitch because the minimum amount  
of material required for even the above average investor exceeds the  
bandwidth of an elevator pitch or slide deck.


In my experience, this is best framed as a problem of education.  The  
more education an investor requires before the pitch, the sharper the  
exponential drop-off in the probability of being funded.  One of the  
reasons this is true is that not only does the person you are dealing  
with need to be educated, they have to be able to successfully educate  
*their* associates before investment is an option as a practical  
matter.  If the education required is complex and nuanced, this second  
stage will almost certainly be a failure.


Ben already knows this, but I will elaborate for the peanut gallery  
unfamiliar with venture finance.  The trick to dealing with this  
problem is to repackage the venture concept solely for the purpose of  
minimizing the amount of education required to raise money, which in  
the case of AGI means that you are selling a graspable product far  
removed from AGI per se.  The danger of this is that you end up going  
down a road where there is no AGI left in the venture.  Investors need  
to be able to wrap their heads around the venture (any venture), which  
given their limited resources means that the person with the idea  
needs to frame the desired result in terms that require the very  
minimum of education on the part of the investor to be compelling.   
People invest in products, not ideas, and the products must be  
concrete and obvious.  For something like AGI, packaging the  
technology into a fundable venture is an extraordinarily difficult task.



I would go as far as to say that funding speculative technology  
ventures is largely a problem of minimizing the apparent education  
required, so that the venture no longer appears particularly  
speculative but instead "obvious" even when no concrete example exists.   
Successfully doing this is far, far more difficult than I suspect most  
people who have not tried believe.


J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
>  I know personally (and have met with) a number of folks who
>
>  -- could invest a couple million $$ in NM without it impacting their
>  lives at all
>
>  -- are deeply into the Singularity and AGI and related concepts
>
>  -- appear to personally like and respect me and others in the NM team
>
>  But, after spending about 1.5 years courting these sorts of folks,
>  Bruce and I largely
>  gave up and decided to focus on other avenues.

Just to be clear: these individuals have not funded any other AI projects
either ... so, it's not a matter of them disliking some particulars of the NM
project or the team as compared to others...

-- Ben G



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
On Sun, Apr 6, 2008 at 12:21 PM, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
> Ben:
> I may be mistaken, but it seems to me that AGI today in 2008 is "in the air"
> again after 50 years.

Yes

> > You are not trying to present a completely novel and
> unheard-of idea, and with today's crowd of sophisticated angel investors I am
> surprised that no one bites given the modest sums involved. BTW I was not
> trying to give needless advice, just finishing my thoughts. I already took
> it as a given that you look for funding. I am trying to understand why no
> one bites. It's not as if there are a hundred different AGI efforts out
> there to choose from.

I don't fully understand it myself, but it's a fact.

To be clear: I understand why VC's and big companies don't want to fund
NM.

VC's are in a different sort of business ...

and big companies are either focused
on the short term, or else have their own
research groups who don't want a bunch of upstart outsiders to get
their research
$$ ...

But what vexes me a bit is that none of the many wealthy futurists out
there have been
interested in funding NM extensively, either on an angel investment
basis, or on a
pure nonprofit donation basis (and we have considered doing NM as a nonprofit
before, though right now that's not our focus as the virtual-pets biz
opp seems so
grand...)

I know personally (and have met with) a number of folks who

-- could invest a couple million $$ in NM without it impacting their
lives at all

-- are deeply into the Singularity and AGI and related concepts

-- appear to personally like and respect me and others in the NM team

But, after spending about 1.5 years courting these sorts of folks,
Bruce and I largely
gave up and decided to focus on other avenues.

I have some psychocultural theories as to why things are this way, but
nothing too
solid...

>I am surprised that the reason may only be that the
> project isn't far enough along (too immature) given the historical
> precedents of what investors have ponied up money for before.

That's surely part of it ... but investors have put big $$ into much LESS
mature projects in areas such as nanotech and quantum computing.

AGI arouses an irrational amount of skepticism, compared to these other
futurist technologies, it seems to me.  I suppose this is partly
because there have
been more "false starts" toward AI in the past.

-- Ben



Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 8:38 AM, Eric B. Ramsay wrote:
If the Novamente design is able to produce an AGI with only 10-20  
programmers in 3 to 10 years at a cost of under $10 million, then  
this represents such a paltry expense to some companies (Google for  
example) that it would seem to me that the thing to do is share the  
design with them and go for it (Google could R&D this with no impact  
to their shareholders even if it fails). The potential of an AGI is  
so enormous that the cost (risk)/benefit ratio swamps anything  
Google (or others) could possibly be working on.



You just used the Pascal's Wager fallacy in the context of AGI,  
congratulations.  The cost of investing in AGI is well above "zero",  
investment resources are most assuredly finite, and the risk of  
investing in a failure is extremely high -- and many billions of  
dollars have already been invested despite this.
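
One rough way to formalize the fallacy (a sketch, with $p$ = success  
probability, $B$ = payoff, $C$ = cost): the implicit pitch is

    $p \cdot B - C > 0$  for any  $p > 0$,

which only follows if $B$ is treated as effectively unbounded.  With  
finite capital, a small $p$, and a substantial $C$, the relevant test  
is whether the expected return beats that of competing uses of the  
same resources.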


Or to look at it another way, you are also using a variant of the  
infamous (and also fallacious) "5% market share" argument.



If the concept behind Novamente is truly compelling enough, it  
should be no problem to make a successful pitch.



The above statement leads me to believe you have little experience  
with funding speculative technology ventures of the scale being  
discussed here.  The dynamic is considerably, and rightly, more  
complicated than this.  A truly compelling concept and a dollar will  
buy you a cup of coffee.


J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread Eric B. Ramsay
Ben:
I may be mistaken, but it seems to me that AGI today in 2008 is "in the air" 
again after 50 years. You are not trying to present a completely novel and 
unheard-of idea, and with today's crowd of sophisticated angel investors I am 
surprised that no one bites given the modest sums involved. BTW I was not 
trying to give needless advice, just finishing my thoughts. I already took it 
as a given that you look for funding. I am trying to understand why no one 
bites. It's not as if there are a hundred different AGI efforts out there to 
choose from. I am surprised that the reason may only be that the project isn't 
far enough along (too immature) given the historical precedents of what 
investors have ponied up money for before. There is nothing in the world 
comparable to the impact that an AGI would make (and I know you know this). 
Shouldn't this fire the imagination of someone for whom 10 mil. is a charity 
donation?

Eric B. Ramsay


Ben Goertzel <[EMAIL PROTECTED]> wrote:
> If the concept behind Novamente is truly compelling enough, it
> should be no problem to make a successful pitch.
>
> Eric B. Ramsay

Gee ... you mean, I could pitch the idea of funding Novamente to
people with money??  I never thought of that!!  Thanks for the
advice ;-pp

Evidently, the concept behind Novamente is not "truly compelling
enough" to the casual observer,
as we have failed to attract big-bucks backers so far...

Many folks we've talked to are interested in what we're doing but
it seems we'll have to get further toward the end goal in order to
overcome their AGI skepticism...

Part of the issue is that the concepts underlying NM are both
complex and subtle, not lending themselves all that well to
"elevator pitch" treatment ... or even "PPT summary" treatment
(though there are summaries in both PPT and conference-paper
form).

If you think that's a mark against NM, consider this: What's your
elevator-pitch description of how the human brain works?  How
about the human body?  Businesspeople favor the simplistic, yet
the engineering of complex cognitive systems doesn't match well
with this bias

Please note that many successful inventors in history have had
huge trouble getting financial backing, although in hindsight
we find their ideas "truly compelling."  (And, many failed inventors
with terrible ideas have also had huge trouble getting financial
backing...)

-- Ben G



Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

Eric B. Ramsay wrote:
If the Novamente design is able to produce an AGI with only 10-20 
programmers in 3 to 10 years at a cost of under $10 million, then this 
represents such a paltry expense to some companies (Google for example) 
that it would seem to me that the thing to do is share the design with 
them and go for it (Google could R&D this with no impact to their 
shareholders even if it fails). The potential of an AGI is so enormous 
that the cost (risk)/benefit ratio swamps anything Google (or others) 
could possibly be working on. If the concept behind Novamente is truly 
compelling enough it should be no problem to make a successful pitch.


Eric B. Ramsay


[WARNING!  Controversial comments.]


When you say "If the concept behind Novamente is truly compelling 
enough", this is the point at which your suggestion hits a brick wall.


What could be "compelling" about a project? (Novamente or any other). 
Artificial Intelligence is not a field that rests on a firm theoretical 
basis, because there is no science that says "this design should produce 
an intelligent machine because intelligence is KNOWN to be x and y and 
z, and this design unambiguously will produce something that satisfies x 
and y and z".


Every single AGI design in existence is a Suck It And See design.  We 
will know if the design is correct if it is built and it works.  Before 
that, the best that any outside investor can do is use their gut 
instinct to decide whether they think that it will work.


Now, my own argument to investors is that the only situation in which we 
can do better than say "My gut instinct says that my design will work" 
is when we do actually base our work on a foundation that gives 
objective reasons for believing in it.  And the only situation that I 
know of that allows that kind of objective measure is by taking the 
design of a known intelligent system (the human cognitive system) and 
staying as close to it as possible.  That is precisely what I am trying 
to do, and I know of no other project that is trying to do that 
(including the neural emulation projects like Blue Brain, which are not 
pitched at the cognitive level and therefore have many handicaps).


I have other, much more compelling reasons for staying close to human 
cognition (namely the complex systems problem and the problem of 
guaranteeing friendliness), but this objective-validation factor is one 
of the most important.


My pleas that more people do what I am doing fall on deaf ears, 
unfortunately, because the AI community is heavily biased against the 
messy empiricism of psychology.  Interesting situation:  the personal 
psychology of AI researchers may be what is keeping the field in Dead 
Stop mode.





Richard Loosemore





Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
> If the concept behind Novamente is truly compelling enough, it
> should be no problem to make a successful pitch.
>
> Eric B. Ramsay

Gee ... you mean, I could pitch the idea of funding Novamente to
people with money??  I never thought of that!!  Thanks for the
advice ;-pp

Evidently, the concept behind Novamente is not "truly compelling
enough" to the casual observer,
as we have failed to attract big-bucks backers so far...

Many folks we've talked to are interested in what we're doing but
it seems we'll have to get further toward the end goal in order to
overcome their AGI skepticism...

Part of the issue is that the concepts underlying NM are both
complex and subtle, not lending themselves all that well to
"elevator pitch" treatment ... or even "PPT summary" treatment
(though there are summaries in both PPT and conference-paper
form).

If you think that's a mark against NM, consider this: What's your
elevator-pitch description of how the human brain works?  How
about the human body?  Businesspeople favor the simplistic, yet
the engineering of complex cognitive systems doesn't match well
with this bias

Please note that many successful inventors in history have had
huge trouble getting financial backing, although in hindsight
we find their ideas "truly compelling."  (And, many failed inventors
with terrible ideas have also had huge trouble getting financial
backing...)

-- Ben G



Re: [singularity] Vista/AGI

2008-04-06 Thread Eric B. Ramsay
If the Novamente design is able to produce an AGI with only 10-20 programmers 
in 3 to 10 years at a cost of under $10 million, then this represents such a 
paltry expense to some companies (Google for example) that it would seem to me 
that the thing to do is share the design with them and go for it (Google could 
R&D this with no impact to their shareholders even if it fails). The potential 
of an AGI is so enormous that the cost (risk)/benefit ratio swamps anything 
Google (or others) could possibly be working on. If the concept behind 
Novamente is truly compelling enough, it should be no problem to make a 
successful pitch.

Eric B. Ramsay

Ben Goertzel <[EMAIL PROTECTED]> wrote:
Much of this discussion is very abstract, which is I guess how you think about
these issues when you don't have a specific AGI design in mind.

My view is a little different.

If the Novamente design is basically correct, there's no way it can possibly
take thousands or hundreds of programmers to implement it.  The most I
can imagine throwing at it would be a couple dozen, and I think 10-20 is
the right number.

So if the Novamente design is basically correct, it would take a team of
10-20 programmers a period of 3-10 years to get to human-level AGI.

Sadly, we do not have 10-20 dedicated programmers working on Novamente
(or associated OpenCog) AGI right now, but rather fractions of various people's
time (as Novamente LLC is working mainly on various commercial projects
that pay our salaries).  So my point is not to make a projection regarding our
progress (that depends too much on funding levels), just to address this issue
of ideal team size that has come up yet again...

Even if my timing estimates are optimistic and it were to take 15 years, even
so, a team of thousands isn't gonna help things any.

If I had a billion dollars and the passion to use it to advance AGI, I would
throw amounts between $1M and $50M at various specific projects, I
wouldn't try to make one monolithic project.

This is based on my bias that AGI is best approached, at the current time,
by focusing on software not specialized hardware.

One of the things I like about AGI is that a single individual or a
small team CAN
"just do it" without need for massive capital investment in physical
infrastructure.

It's tempting to get into specialized hardware for AGI, and we may
want to at some
point, but I think it makes sense to defer that until we have a very
clear idea of
exactly what AGI design needs the hardware and strong prototype results of some
sort indicating why this AGI design will work on this hardware.  My
suspicion is that
we can get to human-level AGI without any special hardware, though
special hardware
will certainly be able to accelerate things after that.

-- Ben G




On Sun, Apr 6, 2008 at 7:22 AM, Samantha Atkins  wrote:
> Arguably many of the problems of Vista including its legendary slippages
> were the direct result of having thousands of merely human programmers
> involved.   That complex monkey interaction is enough to kill almost
> anything interesting. 
>
>  - samantha
>
>  Panu Horsmalahti wrote:
>
> >
> > Just because it takes thousands of programmers to create something as
> complex as Vista, does *not* mean that thousands of programmers are required
> to build an AGI, since one property of AGI is/can be that it will learn most
> of its complexity using algorithms programmed into it.
> > 
> > *singularity* | Archives
> 
>  | Modify
>  Your Subscription   [Powered by
> Listbox] 
> >
> >
>
>
>  ---
>  singularity
>  Archives: http://www.listbox.com/member/archive/11983/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/11983/
>  Modify Your Subscription:
> http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=98631122-712fa4
Powered by Listbox: http://www.listbox.com


Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
Much of this discussion is very abstract, which is I guess how you think about
these issues when you don't have a specific AGI design in mind.

My view is a little different.

If the Novamente design is basically correct, there's no way it can possibly
take thousands or hundreds of programmers to implement it.  The most I
can imagine throwing at it would be a couple dozen, and I think 10-20 is
the right number.

So if the Novamente design is basically correct, it would take a team of
10-20 programmers a period of 3-10 years to get to human-level AGI.

Sadly, we do not have 10-20 dedicated programmers working on Novamente
(or associated OpenCog) AGI right now, but rather fractions of various people's
time (as Novamente LLC is working mainly on various commercial projects
that pay our salaries).  So my point is not to make a projection regarding our
progress (that depends too much on funding levels), just to address this issue
of ideal team size that has come up yet again...

Even if my timing estimates are optimistic and it were to take 15 years,
a team of thousands isn't gonna help things any.
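
One standard way to see why (a rough sketch using the classic n(n-1)/2
pairwise-communication-channel estimate from Brooks, not anything
Novamente-specific):

    # Brooks's-law-style estimate of coordination overhead:
    # pairwise communication channels grow quadratically with team size.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (10, 20, 1000):
        print(n, channels(n))   # 10 -> 45, 20 -> 190, 1000 -> 499500

A 20-person team has 190 potential channels to manage; a 1000-person team
has roughly half a million, which is the "complex monkey interaction"
problem in numeric form.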

If I had a billion dollars and the passion to use it to advance AGI, I would
throw amounts between $1M and $50M at various specific projects; I
wouldn't try to make one monolithic project.

This is based on my bias that AGI is best approached, at the current time,
by focusing on software not specialized hardware.

One of the things I like about AGI is that a single individual or a small
team CAN "just do it" without need for massive capital investment in
physical infrastructure.

It's tempting to get into specialized hardware for AGI, and we may want to
at some point, but I think it makes sense to defer that until we have a
very clear idea of exactly what AGI design needs the hardware, and strong
prototype results of some sort indicating why this AGI design will work on
this hardware.  My suspicion is that we can get to human-level AGI without
any special hardware, though special hardware will certainly be able to
accelerate things after that.

-- Ben G




On Sun, Apr 6, 2008 at 7:22 AM, Samantha Atkins <[EMAIL PROTECTED]> wrote:
> Arguably many of the problems of Vista including its legendary slippages
> were the direct result of having thousands of merely human programmers
> involved.   That complex monkey interaction is enough to kill almost
> anything interesting. 
>
>  - samantha
>
>  Panu Horsmalahti wrote:
>
> >
> > Just because it takes thousands of programmers to create something as
> complex as Vista, does *not* mean that thousands of programmers are required
> to build an AGI, since one property of AGI is/can be that it will learn most
> of its complexity using algorithms programmed into it.



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [singularity] Vista/AGI

2008-04-06 Thread Samantha Atkins
Arguably many of the problems of Vista, including its legendary slippages, 
were the direct result of having thousands of merely human programmers 
involved.  That complex monkey interaction is enough to kill almost 
anything interesting.


- samantha

Panu Horsmalahti wrote:
Just because it takes thousands of programmers to create something as 
complex as Vista, does *not* mean that thousands of programmers are 
required to build an AGI, since one property of AGI is/can be that it 
will learn most of its complexity using algorithms programmed into it.
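
As a toy illustration of that point (a minimal sketch in Python, assuming
nothing about any particular AGI design): a few lines of learning code can
acquire a model far larger than the code itself.

    # Toy bigram learner: the learned table's size depends on the data and
    # quickly dwarfs the handful of lines that build it.
    from collections import defaultdict

    def learn_bigrams(text):
        model = defaultdict(int)
        for a, b in zip(text, text[1:]):
            model[(a, b)] += 1   # the "complexity" accumulates here, not in code
        return model

    text = open("corpus.txt").read()   # assumes some local text corpus exists
    model = learn_bigrams(text)
    print(len(model), "learned parameters from a dozen lines of code")

The code stays tiny while the learned structure scales with the data, which
is the sense in which an AGI's complexity can be acquired rather than
hand-written.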





---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=98631122-712fa4
Powered by Listbox: http://www.listbox.com