The Medium article answers the questions I posed earlier in the thread.
This is pretty exciting and it seems to have a lot of good people involved
already.

On Sun, Dec 13, 2015 at 12:11 PM, Ben Kapp <[email protected]> wrote:

> "Our goal is to advance digital intelligence in the way that is most
> likely to *benefit humanity* as a whole, unconstrained by a need to
> generate financial return ... Since our research is free from financial
> obligations, we can better focus on a *positive* *human impact*."
> https://openai.com/blog/introducing-openai/
>
>
> So their mission is (to put it simply) to attempt to push AGI research
> towards a positive outcome.
>
> "The outcome of this venture is *uncertain* and the work is difficult,
> but we believe the goal and the structure are right. We hope this is what
> matters most to the best in the field."
> https://openai.com/blog/introducing-openai/
>
>
> So it seems they themselves are uncertain as to the outcome of their
> venture.
>
> "It's hard to fathom how much human-level AI could benefit society, and
> it's equally hard to imagine how much it could damage society if built or
> used incorrectly. Because of AI's surprising history, it's hard to predict
> when human-level AI might come within reach. When it does, it'll be
> important to have a leading research institution which can prioritize *a
> good outcome for all *over its own self-interest"
>  https://openai.com/blog/introducing-openai/
>
>
> So they seem to believe that if AI were created by a for-profit company,
> that company would use the AI for its own self-interest, whereas an
> organization like OpenAI, free from financial obligations, would be free
> to create AI for the betterment of all of mankind.
>
>
> "If I’m Dr. Evil and I use it, won’t you be empowering me?
> Musk: I think that’s an excellent question and it’s something that we
> debated quite a bit.
> Altman: There are a few different thoughts about this. Just like humans
> protect against Dr. Evil by the fact that most humans are good, and the
> collective force of humanity can contain the bad elements, we think it's far
> more likely that *many, many AIs, will work to stop the occasional bad
> actors* than the idea that there is a single AI a billion times more
> powerful than anything else. If that one thing goes off the rails or if Dr.
> Evil gets that one thing and there is nothing to counteract it, then we’re
> really in a bad place."
>
> https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a#.aojim3ery
>
>
> Their rationale for making this openly available is the belief that doing
> so would result in many AGIs competing with each other, and in this way
> provide protection against any single superintelligence taking over.
>
>
> On Sun, Dec 13, 2015 at 2:52 PM, Robert Levy <[email protected]> wrote:
>
>> I didn't get the impression that Musk was anti-AGI, but rather that he
>> expressed some unfortunately worded concerns that the specific way in
>> which AGI is approached matters.  From this perspective it makes perfect
>> sense that he would want to direct the course of AGI development in the
>> way he believes to be safe for the future of humankind.  The problem
>> isn't AGI itself, it's the way those other people might do it, who aren't
>> me!
>>
>> On Sun, Dec 13, 2015 at 11:46 AM, Steve Richfield <
>> [email protected]> wrote:
>>
>>> HI all,
>>>
>>> Am I missing something here, or is this really as stupid as it sounds?!!!
>>>
>>> On the one hand, Musk says that AI is "humanity's greatest existential
>>> threat," and then he pledges money to develop that threat?!!! Bad guys,
>>> e.g. the military industrial complex, can simply take whatever OpenAI
>>> develops and turn it on us.
>>>
>>> I have seen NOTHING suggesting any great value in AGI over fully funding
>>> human efforts in the same areas for which AGI is being promoted. Geniuses have
>>> always been able to get to the bottom of things - IF they can live well
>>> while doing so and not be impaired by competing interests. If you think AGI
>>> can somehow sidestep these influences, think again, as these influences are
>>> pervasive. Heck, even just living as we do is seen by some people as being
>>> SO much of a threat that they are willing to kill themselves just to impair
>>> a pleasant Friday evening in Paris.
>>>
>>> If not for drug company influence, I believe most chronic illnesses
>>> would have been cured long ago. If not for self-serving mismanagement of
>>> our economy, space travel would now be as routine as vacation travel.
>>> Thorium reactors appear to be the cheap and simple solution to limitless
>>> energy, with more thorium now being discarded than would be necessary to
>>> power the world, yet special interests have kept thorium reactors from
>>> being developed (see YouTube videos about this)
>>>
>>> Our system is SO mis-controlled that "our" government won't even reduce
>>> the length of a workweek to promote full employment, as some other
>>> countries have done. Having an AGI come up with these same sorts of
>>> solutions would be of ZERO value, because in present human society they
>>> would NOT be implementable, unless you are contemplating the AGI of
>>> *Colossus: The Forbin Project*.
>>>
>>> ONLY in the hands of unscrupulous entities (e.g. Skynet) could the AGI
>>> of people's misguided dreams truly thrive without effective impairment by
>>> the entirety of humanity.
>>>
>>> If these guys see SOME way their investments could do anything but
>>> create humanity's greatest existential threat, then PLEASE let me in on the
>>> secret.
>>>
>>> *Steve*
>>> =======
>>>
>>> On Sun, Dec 13, 2015 at 5:49 AM, <[email protected]> wrote:
>>>
>>>> http://futurism.com/links/19499/
>>>>
>>>>
>>>>
>>>> *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>>>> <https://www.listbox.com/member/archive/rss/303/10443978-6f4c28ac> |
>>>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>>>> <http://www.listbox.com>
>>>>
>>>
>>>
>>>
>>> --
>>> Full employment can be had with the stroke of a pen. Simply institute a
>>> six hour workday. That will easily create enough new jobs to bring back
>>> full employment.
>>>
>>
>


