As I have said many times before, I believe that brain-level AGI requires
representational, computational, and interconnect capability within a few
orders of magnitude of the human brain's.

If you had 1 million PC bots on the web, the representational and
computational power would be there.  But what sort of interconnect would you
have?  What is the average upload bandwidth of a cable-modem-connected
computer?

Is it about 1 Mbit/sec?  If so, the aggregate would be 1 Tbit/sec across the
million machines.  But presumably only a small percentage of that total
could be used effectively, say 100 Gbit/sec.  That's way below brain level,
but it is high enough to do valuable AGI research.
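To make the arithmetic explicit, here is a back-of-the-envelope check,
assuming (as above) 1 million machines at 1 Mbit/sec upload each and a
guessed 10% usable fraction:

```python
# Aggregate upload bandwidth of a hypothetical 1-million-PC botnet.
machines = 1_000_000
upload_bits_per_sec = 1_000_000      # assumed 1 Mbit/s upload per machine

aggregate = machines * upload_bits_per_sec   # total raw upload capacity
usable = aggregate * 0.10                    # assumed 10% effectively usable

print(f"aggregate: {aggregate / 1e12:.1f} Tbit/s")   # 1.0 Tbit/s
print(f"usable:    {usable / 1e9:.0f} Gbit/s")       # 100 Gbit/s
```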

But would even 10% of this total 1 Tbit/sec bandwidth be practically
available?

How many messages per second can a PC upload at payload sizes of, say,
100 KB, 10 KB, 1 KB, and 128 bytes?  Does anybody know?
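A crude upper-bound estimate is just upload bandwidth divided by message
size.  The sketch below assumes a 1 Mbit/sec upload and roughly 60 bytes of
TCP/IP framing per message (an assumption, not a measurement), and ignores
latency and connection-setup costs, which would matter a lot for the small
sizes:

```python
# Rough messages-per-second ceiling for one PC, bandwidth-limited only.
UPLOAD_BPS = 1_000_000      # assumed 1 Mbit/s upload link
OVERHEAD_BYTES = 60         # assumed per-message TCP/IP framing overhead

def messages_per_sec(payload_bytes: int) -> float:
    bits_per_message = (payload_bytes + OVERHEAD_BYTES) * 8
    return UPLOAD_BPS / bits_per_message

for size in (100_000, 10_000, 1_000, 128):
    print(f"{size:>7} B payload -> {messages_per_sec(size):10,.1f} msg/s")
```

Real throughput at the 128-byte end would be far lower, since per-message
latency and OS overhead dominate long before the link saturates.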

On the net, can one bot talk directly to another bot, or does the
communication have to go through some sort of server (other than those
provided gratis on the web, such as DNS servers)?
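In principle, nothing more than a plain TCP socket is needed for two bots to
talk directly.  The sketch below demonstrates this on localhost; on the real
net it works only when the receiving peer is publicly reachable, and home
machines behind NAT or firewalls often need relays or hole-punching instead:

```python
# Minimal sketch: one "bot" sends a message directly to another over TCP,
# with no intermediate server.  Both bots run in one process for the demo.
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]

results = []

def receiving_bot():
    conn, _ = srv.accept()
    with conn:
        results.append(conn.recv(1024))

t = threading.Thread(target=receiving_bot)
t.start()

# The sending bot connects straight to the receiver's address and port.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"hello from bot A")

t.join()
srv.close()
print(results[0].decode())
```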

If two bots send messages to a third bot at the same time, does the net
infrastructure queue the second message until the first has been received,
or what?
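With TCP, at least, the kernel on the receiving machine does the queuing:
a second incoming connection waits in the listen backlog until the
application accepts it, so neither message is lost (assuming the backlog is
not exhausted).  A small localhost sketch of two simultaneous senders:

```python
# Two "bots" send to a third at the same time; the receiver accepts the
# queued connections one after another and both messages arrive intact.
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(2)                       # backlog of 2 pending connections
port = srv.getsockname()[1]

def sending_bot(msg: bytes) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
        c.connect(("127.0.0.1", port))
        c.sendall(msg)

senders = [threading.Thread(target=sending_bot, args=(m,))
           for m in (b"from bot A", b"from bot B")]
for t in senders:
    t.start()

received = []
for _ in senders:                   # accept each queued connection in turn
    conn, _ = srv.accept()
    with conn:
        received.append(conn.recv(1024))

for t in senders:
    t.join()
srv.close()
print(sorted(received))
```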

To me the big hurdle to achieving the equivalent of SETI@home for AGI is
getting the bandwidth necessary to allow interactive computation over large
amounts of knowledge.  If we could solve that problem, it should be pretty
easy to get some great tests going, such as with something like OpenCog.


Ed Porter

-----Original Message-----
From: Bob Mottram [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 29, 2007 5:26 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

There have been a few attempts to use the internet for data collection
which might be used to build AIs, or for teaching chatbots such as
jabberwacky, but you're right that as yet nobody has really made use
of the internet as a basis for distributed intelligence.  I think this
is primarily because of the lack of good theories of how to build AIs
which are suitably scalable across many machines.


On 29/11/2007, John G. Rose <[EMAIL PROTECTED]> wrote:

> Typical networked applications running on PCs are extremely narrow
> function.  Yeah there has been a lot of research and code on all of this,
> there are many open source tools and papers written, etc. but who has
> really taken the full advantage of the available resources and
> capabilities?  Most of the work has been on the substrate but not on the
> capability of potential applications.  There are a few interesting apps
> like peer to peer search engines but nothing that I know of that more than
> scrapes the surface of the capabilities of those millions of networked
> computers.
>
> John
>
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>
