Re: [agi] database access fast enough?

2008-04-18 Thread YKY (Yan King Yin)
On 4/18/08, J. Andrew Rogers [EMAIL PROTECTED] wrote:
 On Apr 17, 2008, at 3:32 PM, YKY (Yan King Yin) wrote:
  Disk access rate is ~10 times faster than ethernet access rate.  IMO,
  if RAM is not enough the next thing to turn to should be the harddisk.

 Eh?  Ethernet latency is sub-millisecond, and in a highly tuned system
 approaches the 10 microsecond range for something local.  Much, much faster
 than disk if the remote node has your data in RAM and is relatively local.

 Note that relatively local can mean geographically regional.  The
 round-trip RAM access time from my machine to a machine on the other side of
 town is a fraction of a millisecond over the Internet connection (not
 hypothetical, actually measured at ~400 microseconds).  I wish disk access
 were even remotely that good.  And this was with inexpensive Gigabit
 Ethernet.

LOL... you're right, I forgot to consider latency.  Ethernet is much
faster than the hard disk if we measure access times.  But there is another
factor: the hard disk is owned by the user, while memory over the net is owned
by others, so it must be shared.  It's not easy to arrange a distributed
and cooperative storage scheme.  It's hard enough to solve core AGI
problems; I simply don't have time to deal with that.

Solid-state disks seem to be a promising solution.
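The round-trip figures being compared here are easy to check directly. A minimal sketch (the echo host and port are placeholders for a real remote node running an echo service):

```python
import socket
import time

def tcp_round_trip_us(host, port, n=100):
    """Average TCP round-trip time to an echo service, in microseconds."""
    with socket.create_connection((host, port)) as s:
        s.sendall(b"x")          # warm up the connection
        s.recv(1)
        start = time.perf_counter()
        for _ in range(n):
            s.sendall(b"x")      # one byte out...
            s.recv(1)            # ...one byte echoed back
        return (time.perf_counter() - start) / n * 1e6

# Example (hypothetical LAN neighbor running an echo service on port 7):
# print(tcp_round_trip_us("192.168.1.20", 7))
```

On inexpensive Gigabit Ethernet this kind of measurement lands in the hundreds-of-microseconds range, consistent with the ~400 us figure quoted above.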

YKY

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] database access fast enough?

2008-04-18 Thread Mark Waser

Plus, learning requires that we store a lot of hypotheses.  Let's say
1000-10000 times the real KB.


I reject this hypothesis as ludicrously incorrect.


- Original Message - 
From: YKY (Yan King Yin) [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, April 17, 2008 4:58 PM
Subject: Re: [agi] database access fast enough?



On 4/18/08, Mark Waser [EMAIL PROTECTED] wrote:

  Yes.  RAM is *HUGE*.  Intelligence is *NOT*.

 Really?  I will believe that if I see more evidence... right now I'm
skeptical.

And your *opinion* has what basis?  Are you arguing that RAM isn't huge?
That's easily disprovable.  Or are you arguing that intelligence is huge?
That too is easily disprovable.  Which one do I need to knock down?


The current OpenCyc KB is ~200 MB (correct me if I'm wrong).

The RAM size of current high-end PCs is ~10 GB.

My intuition estimates that the current OpenCyc is only about 10%-40%
of a 5-year-old human's intelligence.

Plus, learning requires that we store a lot of hypotheses.  Let's say
1000-10000 times the real KB.

That comes to 500 GB - 20 TB.

It seems that if we allow several years for RAM size to double a few
times, RAM may have a chance to catch up to the low end.  Obviously
not now.
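For what it's worth, the arithmetic behind that range can be laid out explicitly (reading the hypothesis multiplier as 1000-10000x, which is what the quoted totals imply):

```python
opencyc_mb = 200                      # current OpenCyc KB, ~200 MB

# If OpenCyc covers 10%-40% of a 5-year-old's knowledge:
full_kb_mb_low  = opencyc_mb / 0.40   # 500 MB
full_kb_mb_high = opencyc_mb / 0.10   # 2000 MB

# Hypothesis store at 1000-10000x the real KB:
low_gb  = full_kb_mb_low  * 1000  / 1000   # MB -> GB
high_tb = full_kb_mb_high * 10000 / 1e6    # MB -> TB

print(low_gb, high_tb)   # 500.0 20.0
```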

YKY



Re: [agi] database access fast enough?

2008-04-18 Thread Mark Waser
 I agree with your side of the debate about the whole KB not fitting into RAM.  
 As a solution, I propose to partition the whole KB into the tiniest possible 
 cached chunks, suitable for a single agent running on a host computer with 
 RAM resources of at least one GB.  And I propose that AGI will consist not 
 of one program running on one computer, but a vast multitude of separately 
 hosted agents working in concert.

Um.  Neither side is arguing that the whole KB fits into RAM.  I'm arguing that 
the necessary *core* for intelligence plus enough cached chunks (as you 
phrase it) to support the current thought processes WILL fit into RAM.  It's 
obviously ludicrous that all the world's knowledge is going to fit into RAM at 
one time.
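One concrete reading of "core plus cached chunks" is a bounded in-RAM cache over an on-disk KB. A minimal sketch (the chunk names and the loader are hypothetical, purely for illustration):

```python
from collections import OrderedDict

class ChunkCache:
    """Keep a bounded number of KB chunks in RAM, evicting least-recently-used."""
    def __init__(self, load_chunk, capacity=4):
        self.load_chunk = load_chunk   # callable: chunk name -> chunk data (from disk)
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, name):
        if name in self.cache:
            self.cache.move_to_end(name)          # mark as recently used
        else:
            self.cache[name] = self.load_chunk(name)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict the LRU chunk
        return self.cache[name]
```

The chunks supporting the current train of thought stay resident; everything else waits on disk until demanded.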



  - Original Message - 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Thursday, April 17, 2008 5:20 PM
  Subject: Re: [agi] database access fast enough?


  YKY,

  I agree with your side of the debate about the whole KB not fitting into RAM.  As 
a solution, I propose to partition the whole KB into the tiniest possible 
cached chunks, suitable for a single agent running on a host computer with RAM 
resources of at least one GB.  And I propose that AGI will consist not of one 
program running on one computer, but a vast multitude of separately hosted 
agents working in concert.

  But my opinion of the OpenCyc concept coverage with respect to that of a 
human five-year-old differs greatly from yours.  I concede that 20 OpenCyc 
facts are about the number a child might know, but in order to properly ground 
these concepts, I believe that a much larger number of feature vectors will 
have to be stored or available in abstracted form.  For example, there is the 
concept of the child's mother.  Properly grounding that one concept might 
require abstracting features from thousands of observations:

  a. wet hair mother
  b. far away mother
  c. angry mother
  d. mother hidden from view
  e. mother in a crowd
  f. mother's voice
  g. mother in dim light
  h. mother from below
  i. and so on

  Of course you can ignore fully grounded concepts, as Cycorp currently does for 
its applications, and as I will with Texai until it is past the bootstrap stage.
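The abstraction step described above could be as simple as folding each observation into a running prototype vector. A toy sketch (the feature encoding is invented purely for illustration):

```python
class GroundedConcept:
    """Abstract one concept from many observed feature vectors."""
    def __init__(self, name, dim):
        self.name = name
        self.count = 0
        self.prototype = [0.0] * dim      # running mean of all observations

    def observe(self, features):
        """Fold one observation into the prototype (incremental mean)."""
        self.count += 1
        for i, x in enumerate(features):
            self.prototype[i] += (x - self.prototype[i]) / self.count

mother = GroundedConcept("mother", dim=3)
mother.observe([1.0, 0.0, 2.0])   # e.g. "wet hair mother"
mother.observe([3.0, 0.0, 4.0])   # e.g. "far away mother"
print(mother.prototype)           # [2.0, 0.0, 3.0]
```

Only the abstracted prototype (plus a count) needs to stay in the KB; the thousands of raw observations themselves can be discarded.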

  -Steve


  Stephen L. Reed


  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860



  - Original Message 
  From: YKY (Yan King Yin) [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Thursday, April 17, 2008 3:58:43 PM
  Subject: Re: [agi] database access fast enough?

  On 4/18/08, Mark Waser [EMAIL PROTECTED] wrote:
 Yes.  RAM is *HUGE*.  Intelligence is *NOT*.
   
Really?  I will believe that if I see more evidence... right now I'm
   skeptical.
  
   And your *opinion* has what basis?  Are you arguing that RAM isn't huge?
   That's easily disprovable.  Or are you arguing that intelligence is huge?
   That too is easily disprovable.  Which one do I need to knock down?

  The current OpenCyc KB is ~200 MB (correct me if I'm wrong).

  The RAM size of current high-end PCs is ~10 GB.

  My intuition estimates that the current OpenCyc is only about 10%-40%
  of a 5-year-old human's intelligence.

  Plus, learning requires that we store a lot of hypotheses.  Let's say
  1000-10000 times the real KB.

  That comes to 500 GB - 20 TB.

  It seems that if we allow several years for RAM size to double a few
  times, RAM may have a chance to catch up to the low end.  Obviously
  not now.

  YKY



Re: [agi] database access fast enough?

2008-04-18 Thread Matt Mahoney

--- Mark Waser [EMAIL PROTECTED] wrote:

 Um.  Neither side is arguing that the whole KB fit into RAM.  I'm arguing
 that the necessary *core* for intelligence plus enough cached chunks (as
 you phrase it) to support the current thought processes WILL fit into RAM. 
 It's obviously ludicrous that all the world's knowledge is going to fit into
 RAM at one time.

What is your estimate of the quantity of all the world's knowledge?  (Or the
amount needed to achieve AGI or some specific goal?)

Google probably keeps a copy of the searchable part of the internet in about 1
PB of RAM, but this isn't AGI yet.  I suppose an internet-wide distributed
system could cache about 1 EB (10^18 bytes).


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] database access fast enough?

2008-04-18 Thread YKY (Yan King Yin)
On 4/18/08, Mark Waser [EMAIL PROTECTED] wrote:
 Um.  Neither side is arguing that the whole KB fit into RAM.  I'm arguing
that the necessary *core* for intelligence plus enough cached chunks (as
you phrase it) to support the current thought processes WILL fit into RAM.
It's obviously ludicrous that all the world's knowledge is going to fit into
RAM at one time.

Then we have no disagreement.

Notice that the loading-on-demand chunks require that we *duplicate*
data.  For example, facts about JK Rowling can be in a literature chunk
as well as an entrepreneur chunk.

The question is whether DBMSs support this.  Materialized views may be the
answer (http://en.wikipedia.org/wiki/Materialized_view).
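SQLite (used here only for illustration) has no native materialized views, but the duplicate-into-chunks idea can be sketched by snapshotting a query result into its own table, one table per topic chunk:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE facts (subject TEXT, topic TEXT, fact TEXT)")
db.executemany("INSERT INTO facts VALUES (?, ?, ?)", [
    ("JK Rowling", "literature",   "wrote Harry Potter"),
    ("JK Rowling", "entrepreneur", "founded Pottermore"),
    ("Tolkien",    "literature",   "wrote The Hobbit"),
])

def materialize_chunk(topic):
    """Duplicate every fact on one topic into its own loadable chunk table."""
    # topic is trusted input in this sketch; real code must validate identifiers
    db.execute("CREATE TABLE chunk_%s AS "
               "SELECT * FROM facts WHERE topic = '%s'" % (topic, topic))

materialize_chunk("literature")
rows = db.execute("SELECT subject FROM chunk_literature ORDER BY subject").fetchall()
print(rows)   # [('JK Rowling',), ('Tolkien',)]
```

The JK Rowling row gets copied into both the literature and entrepreneur chunks, which is exactly the duplication being discussed; a real materialized view would also keep the copies in sync with the base table.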

As I said before, minimizing disk access is still an important issue.

And all this is peripheral to AGI.  I wish I could just focus on AGI
algorithms!

YKY



Re: [agi] database access fast enough?

2008-04-18 Thread YKY (Yan King Yin)
On 4/18/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 What is your estimate of the quantity of all the world's knowledge?  (Or the
 amount needed to achieve AGI or some specific goal?)

Matt,

The world's knowledge is irrelevant to the goal of AGI.  What we
need is to build a commonsense AGI and then let it control other
expert systems with specialized knowledge.

So the pertinent question is how large the core commonsense KB is.

I guess anywhere from 1 GB to 100 GB is possible, excluding hypotheses
from learning, and episodic memory.

YKY



Re: [agi] database access fast enough?

2008-04-18 Thread Matt Mahoney

--- Mark Waser [EMAIL PROTECTED] wrote:

  What is your estimate of the quantity of all the world's knowledge?  (Or 
  the amount needed to achieve AGI or some specific goal?)
 
 I have no idea (and the question is further muddled by what knowledge is and
 what formats are included).  The question itself is fundamentally 
 nonsensical in its current form.

I mean either algorithmic complexity, or more practically, how much memory you
need (which depends on the data representation).  But really, it depends on
the goal, which I have been trying unsuccessfully for years to get YKY to pin
down.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] An Open Letter to AGI Investors

2008-04-18 Thread Richard Loosemore

Benjamin Johnston wrote:


I have stuck my neck out and written an Open Letter to AGI (Artificial 
General Intelligence) Investors on my website at http://susaro.com.


All part of a campaign to get this field jumpstarted.

Next week I am going to put up a road map for my own development project.



Hi Richard,

If I were a potential investor, I don't think I'd find your letter 
convincing.


The term AI was coined some 50 years ago: before I was born, and therefore 
long before I entered the field of AI. Naturally, I can't speak with 
personal experience on the matter, but when I read the early literature 
on AI or when I read about the field's pioneers reminiscing on the early 
days, I get the distinct impression that this was an incredibly 
passionate and excited group. I would feel comfortable calling them a 
gang of hot-headed revolutionaries - even today, 50 years after 
inventing the term AI and at the age of 80, McCarthy writes about AI 
and the possibility of strong AI with passion and excitement. Yet, in 
spite of all the hype, excitement and investment that was apparently 
around during that time (or, more likely, as a result of the hype and 
excitement) the field crashed in the AI winter of the 80s without 
finding that dramatic breakthrough.


There's the Japanese Fifth Generation Computer Systems project, which I 
understand to have been a massive billion-dollar investment during the 80s in 
parallel machines and artificial intelligence; an investment that is 
today largely considered to be a huge failure.


And of course, there's Cyc: formed with an inspiring aim to capture all 
commonsense knowledge, yet still in development some 20 years 
later.


And in addition to these, there are the many many early research papers 
on AI problem solving systems that show early promise and cause the 
authors to make wild predictions and claims in their Future Work... 
predictions that time has reliably proven to be false.


So, why would I want to invest now? When I track down the biographies of 
several of the regulars on this list, I find that they entered the field 
during or after the AI Winter and never experienced the early optimism 
as an insider. How can you convince an investor that the passion today 
isn't just the unfounded optimism of researchers who don't remember the 
past? How can you convince an investor that AGI isn't also going to 
devolve again into an emphasis on publications rather than quality (as 
you claim AI has devolved) or into a new kind of weak AGI with no 
dramatic breakthrough?


I think a better argument would be to point to a fundamental 
technological or methodological change that makes AGI finally credible. 
I'm not convinced that being lean, mean, hungry and hellbent on getting 
results is enough. If I believe in AGI, maybe my best bet is to invest 
my money elsewhere and wait until the fundamental attitudes have changed 
so each dollar will have a bigger impact, rather than squandered on a 
bad dead-end idea. Alternately, my best bet may be to invest in weak AI 
because it will give me a short-term profit (that can be reinvested) AND 
has a plausible case for eventually developing into strong AI. If you 
can offer no good reason to invest in AGI today (given all its past 
failures), aside from a renewed passion of its researchers, then a sane 
reader would have to conclude that AGI is probably a bad investment.



Personally, I'm not sure what I feel about AGI (though, I wouldn't be 
here if I didn't think it was valuable and promising). However, in this 
email I'm trying to play the devil's advocate in response to your open 
letter to investors.


Ben,

Thanks for the thoughtful comments.

I have three responses.

First, I think there is a world of difference between passionate 
researchers at the beginning of the field, in 1956, and passionate 
researchers in 2008 who have a half-century of other people's mistakes 
to learn from.  The secret of success is to try and fail, then to try 
again with a fresh outlook.  That exactly fits the new AGI crowd.


Second, when you say that "a better argument would be to point to a 
fundamental technological or methodological change that makes AGI 
finally credible" I must say that I could not agree more.  That is 
*exactly* what I have tried to do in my project, because I have pointed 
out a problem with the way that old-style AI has been carried out, and 
that problem is capable of neatly explaining why the early optimism 
produced nothing.  I have also suggested a solution to that problem, 
pointed out that the solution has never been tried before (so it has the 
virtue of not being a failure yet!), and also pointed out that the 
proposed solution resembles some previous approaches that did have 
sporadic, spectacular success (the early work in connectionism).


However, in my Open Letter post, I did not want to emphasize my own work 
(I will do that elsewhere on the website), but instead point out some 
general facts about all AGI 

Re: [agi] An Open Letter to AGI Investors

2008-04-18 Thread Richard Loosemore

Mark Waser wrote:

Richard Loosemore wrote:
To say to an investor that AGI would be useful because we could use 
them to build travel agents and receptionists is to utter something 
completely incoherent.


Not at all.  It is catering to their desires and refraining from 
forcibly educating them.  Where is the harm?  It's certainly better than 
getting the door slammed in your face.


I think this is a mistake.  Selling investors the idea of replacement
travel agents and housemaids is something that they know, in their gut,
is a stupid idea IN THIS CONTEXT.  The context is that you are saying
that you will build something with the completely general powers of
thought that a person has.  If you can build such a thing, then claiming
that it will be used for a trivial task after (e.g.) $100 million of
development money would make no business sense whatsoever.

A big part of being coherent in front of an investor is being able
to think your idea through to its logical conclusion.  Trying to
soft-pedal the idea and pretend that it will be less useful than it
really is is considered to be just as bad as overselling the idea -
this is thinking too small.  Either way, you look as if you haven't
really thought it through.

Here is what I would call thinking it through.

The definition of AGI is that it has all the powers of thought that we
have, rather than being able to answer questions about a blocks world
perfectly while being completely incapable of talking about the weather.  We
all agree on this, no?

With that understood, there are some obvious consequences to building an
AGI.  One is that we will be able to duplicate a machine that has
acquired expert-level knowledge in its field.  This is a stupendous
advance on the situation today, obviously, because it means that if an
AGI can be taught to reach expert level in some field, it can be
duplicated manyfold and suddenly we have a vast army of people pushing
back the frontiers together.

Now the question is whether it will be so much harder to produce a
housemaid than a medical expert.  It is not at all obvious that the
housemaid or travel agent will be a step on the road.  If we can
understand how to make something think, why would our efforts happen to
land on the intelligence-point that equates to travel agent?  Just
because this is the kind of work that a human is forced to do when they
cannot get anything better, does not mean that this is a natural level
of intellectual capacity.  The first AGI could just as easily be a
blithering idiot, an idiot-savant, a rocket scientist or an 
unsurpassable genius.  To ask it to be a travel agent is to assume that 
what you build will have a very particular level of intelligence, and be 
incapable of improvement, which raises the question: why would it 
only reach that level?


I think, in truth, that this talk of using the first AGIs as travel 
agents and housemaids is based on a weak analysis of what it would mean 
to produce an early prototype or a step-on-the-road to full AGI. 
Because we have in our minds this picture of human beings and the way 
they develop, some people are automatically assuming that an early AGI 
would be equivalent to a housemaid.  What I am saying here is that this 
is by no means obvious, at the very least.


I think that if we can build such thinking machines, we would
surely by that stage have come to understand the dynamics of
intellectual development in ways that we have no hope of doing today:
we will be able to look inside the developing mind and see what factors
cause some thinkers to have trouble getting their thoughts together
while others zoom on to great heights of achievement.  Given that we
will be able to do that, we will have much greater chance of being able
to produce something that can continue to develop without hitting a
roadblock of some kind.  In my opinion, what makes a travel agent a
travel agent is not a lack of horsepower, but a complicated interaction
of drives and social interactions (as well as some contribution from
lack of horsepower).  A travel agent, in other words, is more like a
genius who got stopped along the way, than a person whose brain simply
did not have the right design.



Richard Loosemore




[agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Pei Wang
PREMISES:

(1) AGI is one of the most complicated problems in the history of
science, and therefore requires substantial funding for it to happen.

(2) Since all previous attempts failed, investors and funding agencies
have enough reason to wait until a recognizable breakthrough to put
their money in.

(3) Since the people who have the money are usually not AGI
researchers (so won't read papers and books), a breakthrough becomes
recognizable to them only by impressive demos.

(4) If the system is really general-purpose, then if it can give an
impressive demo on one problem, it should be able to solve all kinds
of problems to roughly the same level.

(5) If a system already can solve all kinds of problems, then the
research has mostly finished, and won't need funding anymore.

CONCLUSION: AGI research will get funding when and only when the
funding is no longer needed.

Q.E.D. :-(
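In the spirit of the "logically proved" framing, the forward direction of the loop can even be machine-checked. A toy propositional sketch (the encoding of each premise is my own reading, and it captures only the "funding implies the work is done" direction):

```python
from itertools import product

# One boolean per proposition in the argument.
atoms = ["breakthrough", "demo", "solved", "funded"]

def satisfies_premises(w):
    return all([
        (not w["funded"]) or w["demo"],          # (2)+(3): money arrives only on an impressive demo
        (not w["demo"]) or w["solved"],          # (4): a truly general system that demos well
                                                 #      has solved the problem broadly
        (not w["solved"]) or w["breakthrough"],  # (5): solved => research mostly finished
    ])

# In every world satisfying the premises, funding implies the research is done.
worlds = (dict(zip(atoms, vals)) for vals in product([False, True], repeat=len(atoms)))
conclusion_holds = all((not w["funded"]) or w["solved"]
                       for w in worlds if satisfies_premises(w))
print(conclusion_holds)   # True
```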

Pei



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
On Fri, Apr 18, 2008 at 1:01 PM, Pei Wang [EMAIL PROTECTED] wrote:
 PREMISES:

  (1) AGI is one of the most complicated problem in the history of
  science, and therefore requires substantial funding for it to happen.


Potentially, though, massively distributed, collaborative open-source
software development could render your first premise false ...


  (2) Since all previous attempts failed, investors and funding agencies
  have enough reason to wait until a recognizable breakthrough to put
  their money in.

  (3) Since the people who have the money are usually not AGI
  researchers (so won't read papers and books), a breakthrough becomes
  recognizable to them only by impressive demos.

  (4) If the system is really general-purpose, then if it can give an
  impressive demo on one problem, it should be able to solve all kinds
  of problems to roughly the same level.

  (5) If a system already can solve all kinds of problems, then the
  research has mostly finished, and won't need funding anymore.

  CONCLUSION: AGI research will get funding when and only when the
  funding is no longer needed anymore.

  Q.E.D. :-(

  Pei





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Richard Loosemore

Ben Goertzel wrote:

On Fri, Apr 18, 2008 at 1:01 PM, Pei Wang [EMAIL PROTECTED] wrote:

PREMISES:

 (1) AGI is one of the most complicated problem in the history of
 science, and therefore requires substantial funding for it to happen.



Potentially, though, massively distributed, collaborative open-source
software development could render your first premise false ...


 Though it is unlikely to do so, because collaborative open-source 
projects are best suited to situations in which the fundamental ideas 
behind the design have already been solved.


Just having a large gang of programmers on an open-source project does 
not address Pei's point about AGI being the most complicated problem in 
the history of science.


Pei:  what I take you to be saying is that the research problem has an 
unusually high initial overhead.




Richard Loosemore.



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
  Potentially, though, massively distributed, collaborative open-source
  software development could render your first premise false ...
 

   Though it is unlikely to do so, because collaborative open-source
 projects are best suited to situations in which the fundamental ideas behind
 the design has been solved.

I believe I've solved the fundamental issues behind the Novamente/OpenCog
design...

Time and effort will tell if I'm right ;-)

ben



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Pei Wang
Richard,

You are right, though the overhead is not mainly money, but time.

Of course I don't really believe in my proof, otherwise I'd say that
AGI is impossible. ;-)

Among the premises I listed, only (1) is not my personal belief,
though I know it is assumed by many people.

I believe AGI is basically a theoretical problem, which will be solved
by a single person or a small group, with little funding. To make
impressive demos, the theoretical result will need to be implemented,
where the collaborative open-source projects can help. After that,
funding will get in to turn the result into applicable technology.

Even so, my previous conclusion still holds --- for the people who
want to make the key breakthrough, no funding is available until the
breakthrough has been made and (maybe after years) recognized.

Pei


On Fri, Apr 18, 2008 at 1:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Ben Goertzel wrote:

  On Fri, Apr 18, 2008 at 1:01 PM, Pei Wang [EMAIL PROTECTED] wrote:
 
   PREMISES:
  
(1) AGI is one of the most complicated problem in the history of
science, and therefore requires substantial funding for it to happen.
  
 
 
  Potentially, though, massively distributed, collaborative open-source
  software development could render your first premise false ...
 

   Though it is unlikely to do so, because collaborative open-source
 projects are best suited to situations in which the fundamental ideas behind
 the design has been solved.

  Just having a large gang of programmers on an open-source project does not
 address Pei's point about AGI being the most complicated problem in the
 history of science.

  Pei:  what I take you to be saying is that the research problem has an
 unusually high initial overhead.



  Richard Loosemore.





Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Mike Tintner

Pei: I believe AGI is basically a theoretical problem, which will be solved
by a single person or a small group, with little funding

How do you define that problem?



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Pei Wang
See http://nars.wang.googlepages.com/wang.AI_Definitions.pdf

On Fri, Apr 18, 2008 at 2:11 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Pei: I believe AGI is basically a theoretical problem, which will be solved
  by a single person or a small group, with little funding

  How do you define that problem?





Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread YKY (Yan King Yin)
On 4/19/08, Richard Loosemore [EMAIL PROTECTED] wrote:
   PREMISES:
  
(1) AGI is one of the most complicated problem in the history of
science, and therefore requires substantial funding for it to happen.
 
 
  Potentially, though, massively distributed, collaborative open-source
  software development could render your first premise false ...

  Though it is unlikely to do so, because collaborative open-source
 projects are best suited to situations in which the fundamental ideas behind
 the design has been solved.

I agree.  Open source is a good thing, but it is not sufficient to solve
fundamental problems such as architecture and algorithm design.

Very few people have a comprehensive understanding of AGI, and the few
who do are not collaborating, due to theoretical differences.

YKY



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread YKY (Yan King Yin)
On 4/19/08, Ben Goertzel [EMAIL PROTECTED] wrote:
    Though it is unlikely to do so, because collaborative open-source
  projects are best suited to situations in which the fundamental ideas behind
  the design have been solved.

 I believe I've solved the fundamental issues behind the Novamente/OpenCog
 design...

It's hard to tell whether you have really solved the AGI problem, at
this stage. ;)

Also, your AGI framework has a lot of non-standard, home-brew stuff
(especially the knowledge representation and logic).  I bet there are
some merits in your system, but is it really so compelling that
everybody has to learn it and do it that way?

Creating a standard / common framework is not easy.  Right now I think
we lack such a consensus.  So the theorists are not working together.

YKY



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Bob Mottram
Another problem is how to judge the impressiveness of a demo,
especially if you're a non expert.  It's relatively easy to come up
with superficially impressive demos, which then turn out upon closer
investigation to be fraught with problems or just not scalable.  This
seems to happen all the time with robotics and computer vision, as
countless humanoids show.

So I think you're right.  The big money only arrives once most of the
research problems have been hammered out and a working prototype is
available for inspection.  Funding might appear earlier only if the
organisations involved are suitably convinced that (a) the promised
technology is going to arrive in the near future and (b) that there is
a strong first mover advantage to owning or influencing that
technology.

It may, as Ben says, be possible to ameliorate some of the costs using
open source methods.  Open source is not a panacea, but it could help
to turn AGI into more of a science than an art form in that it permits
experiments to be independently verified with greater ease and
provides more opportunities to stand on the shoulders of giants.



On 18/04/2008, Pei Wang [EMAIL PROTECTED] wrote:
 PREMISES:

  (1) AGI is one of the most complicated problems in the history of
  science, and therefore requires substantial funding for it to happen.

  (2) Since all previous attempts failed, investors and funding agencies
  have enough reason to wait until a recognizable breakthrough to put
  their money in.

  (3) Since the people who have the money are usually not AGI
  researchers (so won't read papers and books), a breakthrough becomes
  recognizable to them only by impressive demos.

  (4) If the system is really general-purpose, then if it can give an
  impressive demo on one problem, it should be able to solve all kinds
  of problems to roughly the same level.

  (5) If a system already can solve all kinds of problems, then the
  research has mostly finished, and won't need funding anymore.

  CONCLUSION: AGI research will get funding when and only when the
  funding is no longer needed.

  Q.E.D. :-(

  Pei





Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread YKY (Yan King Yin)
On 4/19/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
 we lack such a consensus.  So the theorists are not working together.

Let me correct that: theorists do not need to work together; theories can
be applied anywhere.  It's the *designers* who are not working
together.

YKY



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Linas Vepstas
On 18/04/2008, Pei Wang [EMAIL PROTECTED] wrote:

  I believe AGI is basically a theoretical problem, which will be solved
  by a single person or a small group, with little funding.

I'm not sure I believe this. After working on this a bit, it has become
clear to me that there are more ideas than there is time to explore
them all. Exploration is further hindered by a lack of software
infrastructure. There are no lab facilities, no easy way to
perform high-level experiments.  I certainly know that I have some
high-level theories I want to explore, but I can't even get started
due to the lack of infrastructure.

I think what Ben is trying to do is to provide those facilities through
OpenCog.  I think open-source programmers *can* help build this.
And, judging by the Google summer-of-code applications, many of
the students have a strong understanding of many of the basic concepts.

Richard wrote:
 Though it is unlikely to do so, because collaborative
open-source projects are best suited to situations in which the
fundamental ideas behind the design have been solved.

Just having a large gang of programmers on an open-source project
does not address Pei's point about AGI being the most complicated
problem in the history of science.

Yes, but a large gang of open source programmers can help build
the infrastructure.  During the Manhattan project, it may have been
Feynman and von Neumann and Teller and Oppenheimer doing
all the thinking, but it sure wasn't them that built 42 acres of uranium
enrichment plants. This was done by large gangs.

The fundamental ideas behind Bayesian nets and whatever have
been solved, but there is no way, not without a lot of work, to hook
Bayesian nets to English language parsers, or to any sort of predicate
reasoning systems, or knowledge representation systems or ontologies.

Doing such a hookup is scientifically straightforward and
scientifically easy, but a huge pain-in-the-arse. Until this hookup is done,
we can't run experiments, can't test theories, can't even get started on
solving the scientifically hard part of the problem.
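For illustration, a toy sketch of the kind of hookup described above --- wiring a (fake) parser into a (fake) Bayesian-style belief store.  Every name and the update rule here are hypothetical, invented for this sketch; this is not the API of any real parser or reasoner:

```python
# Toy hookup: a pretend parser feeding a pretend probabilistic belief store.
# Hypothetical code -- illustrates the *shape* of the integration work only.

def toy_parse(sentence):
    """Pretend parser: returns a (subject, verb, object) triple."""
    words = sentence.rstrip(".").split()
    return (words[0], words[1], " ".join(words[2:]))

class ToyBeliefStore:
    """Keeps one probability per proposition, nudged by each observation."""
    def __init__(self, prior=0.5, rate=0.2):
        self.p = {}
        self.prior = prior
        self.rate = rate

    def observe(self, triple):
        p = self.p.get(triple, self.prior)
        # Crude update: move belief toward 1.0 on each observation.
        self.p[triple] = p + self.rate * (1.0 - p)
        return self.p[triple]

store = ToyBeliefStore()
t = toy_parse("dogs chase cats.")
print(t)                           # ('dogs', 'chase', 'cats')
print(round(store.observe(t), 2))  # 0.6
print(round(store.observe(t), 2))  # 0.68
```

Even this trivial pipe hides the real pain: agreeing on what a "proposition" is between the parser side and the reasoner side, which is exactly the interchange problem the paragraph complains about.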

--linas



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Richard Loosemore

Linas Vepstas wrote:

Richard wrote:

 Though it is unlikely to do so, because collaborative

open-source projects are best suited to situations in which the
fundamental ideas behind the design have been solved.


Just having a large gang of programmers on an open-source project

does not address Pei's point about AGI being the most complicated
problem in the history of science.

Yes, but a large gang of open source programmers can help build
the infrastructure.  During the Manhattan project, it may have been
Feynman and von Neumann and Teller and Oppenheimer doing
all the thinking, but it sure wasn't them that built 42 acres of uranium
enrichment plants. This was done by large gangs.


I guess I agree with this point by itself (I could do with a large gang, 
for example, to build SAFAIRE) but when I made the remarks I was 
thinking about solving the actual core problem of designing the right 
architecture.


So for example, I think Pei is correct to point out that the basic 
solution is going to come from one person's idea, but that if we have a 
situation in which nobody has yet had that idea, then (and only then) 
the strategy of getting a large gang together would not help.


Now, Ben thinks that he does have the correct solution and is ready for 
the gang.  I think the same about my solution, and perhaps Pei Wang and 
Peter Voss and Hugo de Garis all have the same opinion about their own 
work  in other words, perhaps they all believe that all they need 
right now is a large enough gang.  But if we were all wrong, then our 
gangs would only serve to prove (eventually!) that we were wrong.  I 
doubt that the open-source collective would be where the solution would 
come from.


So, no disagreement that a big gang is beneficial, but



Richard Loosemore

P.S.  Now I am in deep trouble because I just said that a big open source 
collective could help me build SAFAIRE, and Stephen Reed is going to ask 
me any minute now why I don't simply get me a big open-source gang ;-)





Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Pei Wang
Linas,

Not all theoretical problems can or need to be solved by practical
testing. Also, in this field, no infrastructure is really
theoretically neutral --- OpenCog is clearly not suitable to test
all kinds of AGI theories, though I like the project and am willing
to help.

Open-source will solve many technical problems, and may also reveal
many theoretical problems by putting theories under testing. However,
it won't replace theoretical thinking.

Pei

On Fri, Apr 18, 2008 at 3:59 PM, Linas Vepstas [EMAIL PROTECTED] wrote:
 On 18/04/2008, Pei Wang [EMAIL PROTECTED] wrote:
  
I believe AGI is basically a theoretical problem, which will be solved
by a single person or a small group, with little funding.

  I'm not sure I believe this. After working on this a bit, it has become
  clear to me that there are more ideas than there is time to explore
  them all. Exploration is further hindered by a lack of software
  infrastructure. There are no lab facilities, no easy way to
  perform high-level experiments.  I certainly know that I have some
  high-level theories I want to explore, but I can't even get started
  due to the lack of infrastructure.

  I think what Ben is trying to do is to provide those facilities by providing
  OpenCog.  I think open-source programmers *can* help build this.
  And, judging by the Google summer-of-code applications, many of
  the students have a strong understanding of many of the basic concepts.


  Richard wrote:
   Though it is unlikely to do so, because collaborative
  open-source projects are best suited to situations in which the
  fundamental ideas behind the design have been solved.

  Just having a large gang of programmers on an open-source project
  does not address Pei's point about AGI being the most complicated
  problem in the history of science.

  Yes, but a large gang of open source programmers can help build
  the infrastructure.  During the Manhattan project, it may have been
  Feynman and von Neumann and Teller and Oppenheimer doing
  all the thinking, but it sure wasn't them that built 42 acres of uranium
  enrichment plants. This was done by large gangs.

  The fundamental ideas behind Bayesian nets and whatever have
  been solved but there is no way, not without a lot of work, to hook
  Bayesian nets to english language parsers, or to any sort of predicate
  reasoning systems, or knowledge representation systems or ontologies.

  Doing such a hookup is scientifically straightforward and
  scientifically easy, but a huge pain-in-the-arse. Until this hookup is done,
  we can't run experiments, can't test theories, can't even get started on
  solving the scientifically hard part of the problem.

  --linas







Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread YKY (Yan King Yin)
On 4/19/08, Pei Wang [EMAIL PROTECTED] wrote:
 Not all theoretical problems can or need to be solved by practical
 testing. Also, in this field, no infrastructure is really
 theoretically neutral --- OpenCog is clearly not suitable to test
 all kinds of AGI theories, though I like the project and am willing
 to help.

 Open-source will solve many technical problems, and may also reveal
 many theoretical problems by putting theories under testing. However,
 it won't replace theoretical thinking.

I agree, but I'd add that it is still tremendously helpful to have an
opensource gang solving technical problems.

For this reason, I'm tempted to opensource my stuff, but where would
be my compensation?  Do I really HAVE to sacrifice my pay check...??

YKY



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Pei Wang
Richard,

Though I do believe I have the right idea, I surely know that there
are still issues I haven't fully solved. Therefore I don't really want
a big gang right now (that would only waste my time and the others'),
but a small-but-good gang, plus more time for myself ---
which means fewer group debates, I guess. ;-)

Pei

On Fri, Apr 18, 2008 at 4:31 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

  Now, Ben thinks that he does have the correct solution and is ready for the
 gang.  I think the same about my solution, and perhaps Pei Wang and Peter
 Voss and Hugo de Garis all have the same opinion about their own work 
 in other words, perhaps they all believe that all they need right now is a
 large enough gang.  But if we were all wrong, then our gangs would only
 serve to prove (eventually!) that we were wrong.  I doubt that the
 open-source collective would be where the solution would come from.

  So, no disagreement that a big gang is beneficial, but



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Vladimir Nesov
On Sat, Apr 19, 2008 at 12:48 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

  For this reason, I'm tempted to opensource my stuff, but where would
  be my compensation?  Do I really HAVE to sacrifice my pay check...??


Yes, you do, as Wang's Theorem demonstrates.

You must persevere in your Faith, and the Way to Nerds' Heaven will
open to you, after years of poverty-stricken life as underfunded AGI
researcher. :-)

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Mike Tintner

Pei:  I don't really want
a big gang at now (that will only waste the time of mine and the
others), but a small-but-good gang, plus more time for myself ---
which means less group debates, I guess. ;-)

Alternatively, you could open your problems for group discussion and
think-tanking...   I'm surprised that none of you systembuilders do this.





Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
On Fri, Apr 18, 2008 at 5:35 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Pei:  I don't really want

  a big gang at now (that will only waste the time of mine and the
  others), but a small-but-good gang, plus more time for myself ---
  which means less group debates, I guess. ;-)

  Alternatively, you could open your problems for group discussion 
 think-tanking...   I'm surprised that none of you systembuilders do this.


That is essentially what I'm doing with OpenCog ... but it's a big job,
just preparing stuff in terms of documentation and code and designs
so that others have a prayer of understanding it ...

ben



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
YKY,

   I believe I've solved the fundamental issues behind the Novamente/OpenCog
   design...

  It's hard to tell whether you have really solved the AGI problem, at
  this stage. ;)

Understood...

  Also, your AGI framework has a lot of non-standard, home-brew stuff
  (especially the knowledge representation and logic).  I bet there are
  some merits in your system, but is it really so compelling that
  everybody has to learn it and do it that way?

I don't claim that the Novamente/OpenCog design is the **only** way ... but I do
note that the different parts are carefully designed to interoperate together
in subtle ways, so replacing any one component w/ some standard system
won't work.

For instance, replacing PLN with some more popular but more limited
probabilistic logic framework would break a lot of other stuff...

  Creating a standard / common framework is not easy.  Right now I think
  we lack such a consensus.  So the theorists are not working together.

One thing that stuck out at the 2006 AGI Workshop and AGI-08
conference, was the commonality between several different approaches,
for instance

-- my Novamente approach
-- Nick Cassimatis's Polyscheme system
-- Stan Franklin's LIDA approach
-- Sam Adams (IBM) Joshua Blue
-- Alexei Samsonovich's BICA architecture

Not that these are all the same design ... there are very real differences
... but there are also a lot of deep parallels.   Novamente seems to
be more fully fleshed out than these overall, but each of these guys
has thought through specific aspects more deeply than I have.

Also, John Laird (SOAR creator) is moving SOAR in a direction that's a
lot closer to the Goertzel/Cassimatis/Franklin/Adams style system than
his prior approaches ...

All the above approaches are

-- integrative, involving multiple separate components tightly bound
together in a high-level cognitive architecture

-- reliant to some extent on formal inference (along with subsymbolic methods)

-- clearly testable/developable in a virtual worlds setting

I would bet that with appropriate incentives all of the above
researchers could be persuaded to collaborate on a common AI project
-- without it degenerating into some kind of useless
committee-think...

Let's call these approaches LIVE, for short -- Logic-incorporating,
Integrative, Virtually Embodied

On the other hand, when you look at

-- Pei Wang's approach, which is interesting but is fundamentally
committed to a particular form of uncertain logic that no other AGI
approach accepts

-- Selmer Bringsjord's approach, which is founded on the notion that
standard predicate  logic alone is The Answer

-- Hugo de Garis's approach which is based on brain emulation

you're looking at interesting approaches that are not really
compatible with the LIVE approach ... I'd say, you could not viably
bring these guys into a collaborative AI project based on the LIVE
approach...

So, I do think more collaboration and co-thinking could occur than
currently does ... but also that there are limits due to fundamentally
different understandings

OpenCog is general enough to support any approach falling within the
LIVE category, and a number of other sorts of approaches as well
(e.g. a variety of neural net based architectures).  But it is not
**completely** general and doesn't aim to be ... IMO, a completely
general AGI development framework is just basically, say, C++ and Linux ;-)

-- Ben G



FW: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ed Porter
In the quote below from the article below, the number 1000 was meant to be
100.  With intelligent RAM, this number could be perhaps as high as 500,
depending on what you mean by a current PC, but intelligent RAM would, at
least initially, be much more expensive.

Such a system would crank roughly 1TOpp/sec and enable 4G random accesses
in its 1TB of DRAM/sec.  That allows a fair amount of connectionism to be
computed --- more than 1000 times as much as current PCs

-Original Message-
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 18, 2008 6:36 PM
To: agi@v2.listbox.com
Subject: RE: [agi] The Strange Loop of AGI Funding: now logically proved!

PEI'S SELF-DEFEATING LOOP WILL PROBABLY BE BROKEN WITHIN 3 TO 8 YEARS

Over the next 3 to 8 years, the AI and AGI projects now being funded will
probably produce an ever increasing appreciation and proof of the power,
generality, and potential of AGI --- enough so that funding of large AGI
projects will start.

This will be aided by the growing number of people who believe in AGI ---
the increasing organization of the AGI community, as represented by the
recent AGI 2008 conference and the planned AGI 2009 --- the growing
knowledge and tools, such as OpenCog tools, for building AGI projects ---
and, perhaps most importantly, the rapidly dropping cost and rapidly rising
power of hardware.

Today a PC with 4GB of RAM should be able to demonstrate some important
pieces of the AGI problem.  For under $40k --- money many funded projects
can afford --- you can buy a 4 processor quad-core server with 256GB DRAM
and many TBs of disk.  With that you should be able to demonstrate even more
of AGI's potential.

But in about 5 years things should really start changing with the arrival of
the many-core, many-layer chips and the operating systems for efficiently
using them (with the help of people like Sam Adams).  We should begin to see
chips with 256 cores --- connected by a high-speed on chip mesh network ---
with each core having fat thru-wafer paths to memory totaling --- for the
whole chip --- roughly 8GBytes of DRAM --- plus a GByte of embedded DRAM for
L2 cache.  All this will fit in one multi-layer chip.   Each such chip
will have --- say --- 1TByte/sec of external bandwidth in the form of 128
different 8GByte/sec channels to memory or other processors.  I'm guessing
such a chip will probably sell for a very fat premium at introduction --- say
$5K --- but will be under $2K within two years.   In 5 years a TByte of DRAM
should cost about $10K.  This means by 2013 you could have a system with
such a chip and 1TB of DRAM for under $20K --- cheap enough for most teams
of AI grad students to share one.  Such a system would crank roughly
1 TOps/sec and enable 4G random accesses per second into its 1TB of DRAM.
That allows a fair amount of connectionism to be computed --- more than 1000
times as much as current PCs.
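A rough back-of-envelope check of the figures above, in Python.  The per-core clock rate (~4 GHz) is an assumption of mine; the core count, channel figures, and prices are the ones quoted in the paragraph:

```python
# Back-of-envelope check of the projected 2013 chip/system numbers.
# All inputs are the post's own figures except the assumed clock rate.

cores_per_chip  = 256
ops_per_core_hz = 4e9          # ~4 GHz per core (assumed)
channels        = 128
channel_bw_gb_s = 8            # 8 GB/s per external channel

total_ops = cores_per_chip * ops_per_core_hz
ext_bw_gb = channels * channel_bw_gb_s

print(f"{total_ops/1e12:.1f} TOps/sec")   # 1.0 TOps/sec
print(f"{ext_bw_gb} GB/s external")       # 1024 GB/s, i.e. ~1 TB/s

# System cost: one chip (~$2K after two years) + 1 TB DRAM (~$10K/TB)
chip_cost, dram_cost = 2_000, 10_000
print(f"${chip_cost + dram_cost:,}")      # $12,000 -- under the $20K cited
```

So the quoted figures hang together: 256 cores at a few GHz gives about 1 TOps, and 128 channels at 8 GB/s is the claimed ~1 TB/s of external bandwidth.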

Such a $20K system should be a powerful enough testbed to develop and test
much of the basic architecture for artificial minds.  And they should be
cheap enough that within the next six or seven years hundreds of AGI teams
could test and perfect their approaches on them.

Such testbeds could come in a range of different sizes --- including one
with 256 such 256-core chips and 50TBytes of DRAM.  This would include 2TB
of on-chip DRAM and 256G of L2 cache.  Such hardware could provide 100 TOps
--- 512G random accesses/sec in the 50TB of DRAM --- and a theoretical
cross-sectional bandwidth between chips of about 512G global 64-byte
messages/sec (with a much higher total number of interchip messages/sec,
since many messages would be between nearby chips).  This computational,
representational, and communication capacity is very possibly enough to
create human-level AGI.  In 7 years such hardware could cost roughly $1m in
parts, which means it probably could profitably be sold for under $2m if
it was being sold in any quantity.  And understand, these systems would be
good for general scientific, database, virtual reality, and advanced
web-based programming as well --- so they should sell in quantity --- and
their price should come down rapidly.

At these prices every major university and research lab could afford several
of them.  Within 10 years from today there could be thousands of such
roughly human-level machines.

The combination of increasing interest in AGI, increasing understanding of
it, increasingly powerful AGI tools --- and more powerful hardware --- all
will combine to make it highly likely that within 3 to 8 years there will
have been enough progress that it will become obvious that there is
tremendous strategic and economic value in investing in it big time --- and
then the big money will come in the hundreds of millions or billions of
dollars.

So I think Pei's self-defeating strange loop of AGI funding holds in the
short term --- but also that in less than a decade this self-defeating loop
will be broken.

Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Matt Mahoney
--- Pei Wang [EMAIL PROTECTED] wrote:
 I believe AGI is basically a theoretical problem, which will be solved
 by a single person or a small group, with little funding.

I think that we are still massively underestimating the cost of AGI, just as
we have been doing for the last 50 years.  The value of AGI is the value of
the human labor it would replace, between $2 and $5 quadrillion over the next
30 years worldwide.  To suggest that it could be solved for a billionth of
this cost is ludicrous.  Google has $169 billion and the motivation, market,
brains, and computing power to solve AGI, but they haven't yet.
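For a sense of where a figure like $2-5 quadrillion comes from, here is a rough sanity check in Python.  The world-GDP and growth inputs are round assumptions of mine (treating total human labor value as roughly world GDP), not Matt's numbers:

```python
# Rough sanity check of the "$2 to $5 quadrillion over 30 years" figure.
# Inputs are assumed round numbers, not the author's.

gdp    = 65e12   # world GDP circa 2008, USD/year (assumed)
growth = 0.03    # assumed nominal annual growth rate
years  = 30

total = sum(gdp * (1 + growth) ** y for y in range(years))
print(f"${total/1e15:.1f} quadrillion")   # $3.1 quadrillion
```

Thirty years of world output at roughly today's level does indeed land inside the quoted $2-5 quadrillion range, so the claim is at least internally consistent.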

I realize there is a tradeoff between having AGI sooner or waiting for the
price of technology to come down.  Simple economics suggests we will be
willing to pay a significant fraction of the value to have it now.

We are chasing a moving target.  It is not enough for a computer to match the
intelligence of a human.  It has to match the intelligence of a human with an
internet connection, and the internet keeps getting smarter as AI is deployed
on it.  You hit the target not at one human brain (which Google has probably
surpassed in computing power and data), but at 10 billion human brains.  You
need a vision system for a billion eyes, a language model to converse with a
billion people at the same time.

Given a good communication infrastructure, general models of intelligence are
at a distinct disadvantage against narrow AI.  You will be competing with
millions or billions of specialized experts that are individually easier to
build, train, optimize, and maintain for one particular task and run on a PC
using mature technology.  Standalone AGI can't do that.  We don't even know
how much computing power is needed to do what one brain does (but I am pretty
sure it is more than 1000 PCs).

I know the argument that you only have to build it once.  I've heard it
before. Each standalone AGI has to be trained for a different task.  This is a
nontrivial expense.  We should not expect it to cost significantly less than
training a new employee.

I think AGI is too big for anyone to own or invest in.  If you want to invest,
look for market opportunities that don't yet exist, the way Google built its
fortune by indexing something that didn't exist 15 years ago.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread YKY (Yan King Yin)
On 4/19/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 I don't claim that the Novamente/OpenCog design is the **only** way ... but
 I do note that the different parts are carefully designed to interoperate
 together in subtle ways, so replacing any one component w/ some standard
 system won't work.

This problem may be common to all AGI designs -- no one seems to be
able to build AGI out of standard components.  But I would strive
towards that ideal as closely as possible.

 For instance, replacing PLN with some more popular but more limited
 probabilistic logic framework would break a lot of other stuff...

PLN is not based on predicate logic but on term logic, right?  That
may be a source of problems.

 I would bet that with appropriate incentives all of the above
 researchers could be persuaded to collaborate on a common AI project
 -- without it degenerating into some kind of useless
 committee-think...

THAT would be highly desirable, but are we ready yet to reconcile our
differences?  I guess we can start by gradually re-using standard
components in a bottom-up manner, and by establishing a knowledge
interchange format.

 Let's call these approaches LIVE, for short -- Logic-incorporating,
 Integrative, Virtually Embodied

LIVE is good =)

 -- Pei Wang's approach, which is interesting but is fundamentally
 committed to a particular form of uncertain logic that no other AGI
 approach accepts

I think Pei Wang makes his own versions of abduction and induction
that, from the classical logic perspective, are unsound.  Otherwise
his approach is also LIVE.

 -- Selmer Bringsjord's approach, which is founded on the notion that
 standard predicate  logic alone is The Answer

Agreed.  Binary logic can go a long way, but ultimately is insufficient for AGI.

That said, I'm currently designing learning algorithms using binary
logic only, and plan to add fuzzy and probability later.

 OpenCog is general enough to support any approach falling within the
 LIVE category, and a number of other sorts of approaches as well
 (e.g. a variety of neural net based architectures).  But it is not
 **completely** general and doesn't aim to be ... IMO, a completely
 general AGI development framework is just basically, say, C++ and Linux ;-)

Yes, OpenCog is definitely a good move.  I hope you will make it more
free, so it can bring about more fundamental changes, to the point that
different AGI projects can interoperate. =)

YKY



Open source (was Re: [agi] The Strange Loop of AGI Funding: now logically proved!)

2008-04-18 Thread Matt Mahoney
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 On 4/19/08, Pei Wang [EMAIL PROTECTED] wrote:
  Not all theoretical problems can or need to be solved by practical
  testing. Also, in this field, no infrastructure is really
  theoretically neutral --- OpenCog is clearly not suitable to test
  all kinds of AGI theories, though I like the project and am willing
  to help.
 
  Open-source will solve many technical problems, and may also reveal
  many theoretical problems by putting theories under testing. However,
  it won't replace theoretical thinking.
 
 I agree, but I'd add that it is still tremendously helpful to have an
 opensource gang solving technical problems.
 
 For this reason, I'm tempted to opensource my stuff, but where would
 be my compensation?  Do I really HAVE to sacrifice my pay check...??

Not at all.  I released most of my data compression software under GPL.  If a
company wants to use it in a commercial product or wants something customized,
they have to pay me.  Meanwhile a lot of people have improved the software for
free until it moved to the top of the rankings where it got the attention of
companies that need data compression experts and pay well.  This would never
have happened if I had kept the code proprietary.


-- Matt Mahoney, [EMAIL PROTECTED]
