More powerful, more interesting, and, if done badly, quite dangerous
indeed...

OTOH a global brain coordinating humans and narrow AIs can **also** be
quite dangerous ... and arguably more so, because it's **definitely** very
unpredictable in almost every aspect ... whereas a system with a dual
hierarchical/heterarchical structure and a well-defined goal system may
perhaps be predictable in certain important respects, if it is designed with
this sort of predictability in mind...
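
To make the contrast concrete, here is a toy sketch in Python (the names
and the matching scheme are purely illustrative assumptions, not any actual
design): a purely heterarchical pool just hands each task to the
best-matching narrow expert, while the dual structure adds a coordinator
that consults an explicit goal system and can recruit a large fraction of
the pool into a single deliberative process.

# Toy contrast: heterarchical routing vs. dual hierarchical/heterarchical
# control. All names here are hypothetical illustrations.

class Expert:
    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)

    def relevance(self, task):
        # crude score: fraction of the task's required skills this expert covers
        need = set(task["skills"])
        return len(self.skills & need) / len(need)

def heterarchical_route(experts, task):
    # purely self-organized: the single best-matching expert takes the task
    return max(experts, key=lambda e: e.relevance(task))

def goal_permits(goal, task):
    # the "well-defined goal system", reduced to a trivial gate
    return task.get("purpose") in goal["allowed_purposes"]

def hierarchical_deliberate(experts, task, goal, fraction=0.8):
    # dual structure: the goal system gates the task, then a coordinator
    # recruits a large percentage of the pool into one deliberative process
    if not goal_permits(goal, task):
        return []
    ranked = sorted(experts, key=lambda e: e.relevance(task), reverse=True)
    return ranked[:max(1, int(len(ranked) * fraction))]

experts = [Expert("vision", ["images"]),
           Expert("parser", ["text"]),
           Expert("planner", ["text", "plans"])]
task = {"skills": ["text", "plans"], "purpose": "research"}
goal = {"allowed_purposes": {"research"}}
print(heterarchical_route(experts, task).name)                        # planner
print([e.name for e in hierarchical_deliberate(experts, task, goal)])
# ['planner', 'parser']

Trivial, obviously, but the goal gate is exactly where designed-in
predictability would live.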

ben

On Thu, Oct 2, 2008 at 2:48 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> > For instance, your proposed AGI would have no explicit self-model, and no
> capacity to coordinate a large percentage of its resources into a single
> deliberative process...
>
> That's a feature, not a bug. If an AGI could do this, I would regard it as
> dangerous. Who decides what it should do? In my proposal, resources are
> owned by humans who can trade them on a market. Either a large number of
> people or a smaller group with a lot of money would have to be convinced
> that the problem was important. However, the AGI would also make it easy to
> form complex organizations quickly.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> --- On Thu, 10/2/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> From: Ben Goertzel <[EMAIL PROTECTED]>
> Subject: Re: [agi] Let's face it, this is just dumb.
> To: agi@v2.listbox.com
> Date: Thursday, October 2, 2008, 2:08 PM
>
>
>
> On Thu, Oct 2, 2008 at 2:02 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
>> --- On Thu, 10/2/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>
>> >I hope not to sound like a broken record here ... but ... not every
>> >narrow AI advance is actually a step toward AGI ...
>>
>> It is if AGI is billions of narrow experts and a distributed index to get
>> your messages to the right ones.
>>
>> I understand your objection that it is way too expensive ($1 quadrillion),
>> even if it does pay for itself. I would like to be proved wrong...
>
>
> IMO, that would be a very interesting AGI, yet not the **most** interesting
> kind due to its primarily heterarchical nature ... the human mind has this
> sort of self-organized, widely-distributed aspect, but also a more
> centralized, coordinated control aspect.  I think an AGI which similarly
> combines these two aspects will be much more interesting and powerful.  For
> instance, your proposed AGI would have no explicit self-model, and no
> capacity to coordinate a large percentage of its resources into a single
> deliberative process...   It's much like what Francis Heylighen envisions
> as the "Global Brain."  Very interesting, yet IMO not the way to get the
> maximum intelligence out of a given amount of computational substrate...
>
>
> ben g
>
>
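
Incidentally, here is a minimal sketch of the "billions of narrow experts
plus a distributed index" design Matt describes above, again in Python with
purely illustrative names and naive keyword routing (the real thing would
be distributed across owners' machines and far smarter about matching):

# Minimal sketch of narrow experts behind a routing index; each expert is
# a resource owned by a human, per the market idea. Names are illustrative.
from collections import defaultdict

class Index:
    def __init__(self):
        self.by_topic = defaultdict(list)   # topic -> experts claiming it

    def register(self, expert):
        for topic in expert["topics"]:
            self.by_topic[topic].append(expert)

    def route(self, message):
        # deliver the message to every expert whose topics overlap its words
        hits = {}
        for word in message.lower().split():
            for expert in self.by_topic.get(word, ()):
                hits[expert["name"]] = expert
        return list(hits.values())

index = Index()
index.register({"name": "weather-bot", "owner": "alice", "topics": ["rain", "wind"]})
index.register({"name": "chess-bot", "owner": "bob", "topics": ["chess"]})
print([e["name"] for e in index.route("will it rain tomorrow")])  # ['weather-bot']

The "owner" field is where the human-owned, market-traded resources would
hook in; what such a system lacks, per my point above, is any component
that can pull those experts together into one coordinated deliberation.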



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson


