Re: [agi] popularizing & injecting sense of urgency

2007-10-30 Thread Jiri Jelinek
> Because AI will save the world or destroy it? Because it can significantly help us to accomplish our goals - whatever they are ATM. Destroying the Earth might be in our best interest at some point in the future. But not now, I guess :). Of course it depends on who will control the AGI, but powerful t

Re: [agi] popularizing & injecting sense of urgency

2007-10-31 Thread Bob Mottram
From a promotional perspective these ideas seem quite weak. To most people, AI saving the world or destroying it just sounds crackpot (a cartoon caricature of technology), whereas "helping us to accomplish our goals" is too vague. On 31/10/2007, Jiri Jelinek <[EMAIL PROTECTED]> wrote: > > Becau

Re: [agi] popularizing & injecting sense of urgency

2007-10-31 Thread Jiri Jelinek
>From a promotional perspective these ideas seem quite weak. It was an addition to other complex and relatively near-future issues, e.g. the longevity- and demographic-related problems mentioned by Minsky in his "emergency" presentation. What are your suggestions? >AI saving the world .. sounds cra

Re: [agi] popularizing & injecting sense of urgency

2007-10-31 Thread Matt Mahoney
AGI does not need promoting. AGI could potentially replace all human labor, currently valued at US $66 trillion per year worldwide. Google has gone from nothing to the fifth biggest company in the U.S. in 10 years by solving just a little bit of the AI problem better than its competitors. We
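As a rough back-of-envelope check on the scale implied here: the $66 trillion/year figure is taken from the message above, while the world population and labor-force share below are illustrative assumptions only, not numbers from the thread.

    # Rough back-of-envelope on the economic scale cited in the message.
    # labor_value_per_year comes from the thread; the other two inputs
    # are approximate 2007-era assumptions used purely for illustration.
    labor_value_per_year = 66e12   # USD/year, figure quoted above
    world_population = 6.6e9       # ~2007 estimate (assumption)
    labor_force_share = 0.45       # rough fraction of people working (assumption)

    workers = world_population * labor_force_share
    per_worker = labor_value_per_year / workers
    print(f"Implied average value per worker: ${per_worker:,.0f}/year")
    # -> roughly $22,000/year per worker; even a small slice of such a
    #    market would dwarf the largest software companies of the time.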

Re: [agi] popularizing & injecting sense of urgency

2007-11-02 Thread Richard Loosemore
Matt Mahoney wrote: AGI does not need promoting. AGI could potentially replace all human labor, currently valued at US $66 trillion per year worldwide. Google has gone from nothing to the fifth biggest company in the U.S. in 10 years by solving just a little bit of the AI problem better than

Re: [agi] popularizing & injecting sense of urgency

2007-11-02 Thread Russell Wallace
On 11/2/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: > This is the worst possible summary of the situation, because instead of dealing with each issue as if there were many possibilities, it pretends that there is only one possible outcome to each issue. In this respect it is as bad as

Re: [agi] popularizing & injecting sense of urgency

2007-11-02 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Example 4: "Each successive generation gets smarter, faster, and less dependent on human cooperation." Absolutely not true. If "humans" take advantage of the ability to enhance their own intelligence up to the same level as the AGI syst

Re: [agi] popularizing & injecting sense of urgency

2007-11-03 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: Example 4: "Each successive generation gets smarter, faster, and less dependent on human cooperation." Absolutely not true. If "humans" take advantage of the ability to enhance their own intelligence up to the same level a