Well, the existence of different contingencies is one reason I don't want
the first one modeled after a brain. I would like it to be a bit simpler, in
the sense that it only tries to answer questions from as scientific a
perspective as possible. To me it seems like there isn't anyone stable
enough to model the first AGI after, so perhaps understanding a brain
completely shouldn't be at the top of the priority list. I think the focus
should be on NLP so it can utilize all the human knowledge that exists in
text and audio form. I have no strategy for how to do this, but it seems
like the safest path. The brain is a tangled mess that could be understood
post-singularity, but for now I think NLP is what really matters when it
comes to developing an AGI.

Would there be some major contention between Ben's AGI and Osama bin Laden's
AGI?

I think our contentions will concern radically different topics in a
post-singularity civilization, so I can't say. But I predict they will both
agree with the main AGI's take on trans-national corporate imperialism,
among other currently disputed issues, because the AGI will be as objective
as possible, and as the two individuals augment their intelligence they will
naturally move toward a more objective perception of reality.

What if some accident disables an AGI's protective mechanisms?

I don't know. You guys should do your best to create stable goal systems and
I'll go sweep this floor.

contention between future AGIs

I don't feel qualified to mediate between two future AGIs so I don't know.

why should AGIs give a damn about us?

I like to think that they will give a damn because humans have a unique way
of experiencing reality, and there is no reason not to take advantage of
that precious opportunity to create astonishment or bliss. If anything is
important in the universe, it's ensuring positive experiences in every area
in which it is conscious, and I think it will realize that. And with the
resources available in the solar system alone, I don't think we will be much
of a burden. Obviously this can't be screwed up to the point where it ends
up using us for its own reproduction or whatever, because that's all it was
programmed to care about. But I don't think trying is inherently suicidal by
any means, if that's ultimately what you're getting at.
On Sat, Jun 26, 2010 at 1:37 AM, Steve Richfield
<steve.richfi...@gmail.com> wrote:

> Travis,
>
> The AGI world seems to be cleanly divided into two groups:
>
> 1.  People (like Ben) who feel as you do, and aren't at all interested in
> or willing to look at the really serious lapses in logic that underlie this
> approach. Note that there is a similar belief in Buddhism, akin to the
> "prisoner's dilemma", that if everyone just decides to respect everyone
> else, the world will be a really nice place. The problem is, it doesn't
> work, and it can't work for some sound logical reasons that were unknown
> thousands of years ago when those beliefs were first advanced, and are
> STILL unknown to most of the present-day population, and...
>
> 2.  People (like me) who see that this is a really insane, dangerous, and
> delusional belief system, as it encourages activities that are every bit as
> dangerous as DIY thermonuclear weapons. Sure, you aren't likely to build a
> "successful" H-bomb in your basement using heavy water that you separated
> using old automobile batteries, but should we encourage you to even try?
>
> Unfortunately, there is ~zero useful communication between these two
> groups. For example, Ben explains that he has heard all of the horror
> scenarios for AGIs, and I believe that he has, yet he continues in this
> direction for reasons that he "is too busy" to explain in detail. I have
> viewed some of his presentations, e.g. at the 2009 Singularity conference.
> There, he provides no glimmer of any reason why his approach isn't
> predictably suicidal if/when an AGI ever comes into existence, beyond what
> you outlined, e.g. imperfect protective mechanisms that would only serve to
> become their own points of contention between future AGIs. What if some
> accident disables an AGI's protective mechanisms? Would there be some major
> contention between Ben's AGI and Osama bin Laden's AGI? How about those
> nasty little areas where our present social rules enforce species-destroying
> dysgenic activity? Ultimately and eventually, why should AGIs give a damn
> about us?
>
> Steve
> =============
> On Fri, Jun 25, 2010 at 1:25 PM, Travis Lenting <travlent...@gmail.com> wrote:
>
>> I hope I don't misrepresent him, but I agree with Ben (at least my
>> interpretation) when he said, "We can ask it questions like, 'how can we
>> make a better A(G)I that can serve us in more different ways without
>> becoming dangerous'...It can help guide us along the path to a positive
>> singularity." I'm pretty sure he was also saying that at first it should
>> just be a question-answering machine with a reliable goal system, and that
>> development should stop if it has an unstable one before it gets too
>> smart. I like the idea that we should create an automated
>> cross-disciplinary scientist and engineer (if you even separate the two),
>> and that NLP not modeled after the human brain is the best proposal for
>> a benevolent and resourceful superintelligence that enables a positive
>> singularity and all its unforeseen perks.
>> On Wed, Jun 23, 2010 at 11:04 PM, The Wizard <key.unive...@gmail.com> wrote:
>>
>>
>>> If you could ask an AGI anything, what would you ask it?
>>> --
>>> Carlos A Mejia
>>>
>>> Taking life one singularity at a time.
>>> www.Transalchemy.com