Re: [singularity] Vinge Goertzel = Uplift Academy's Good Ancestor Principle Workshop 2007
Any comments on this: http://news.com.com/2100-11395_3-6160372.html

Google has been mentioned in the context of AGI, simply because they have money, parallel processing power, excellent people, an orientation towards technological innovation, and important narrow-AI successes and research goals. Do Page's words mean that Google is seriously working towards AGI? If so, does anyone know the people involved? Do they have a chance, and do they understand the need for Friendliness?

Also: Vinge's notes on his Long Now talk, "What If the Singularity Does NOT Happen," are at http://www-rohan.sdsu.edu/faculty/vinge/longnow/index.htm

I'm delighted to see counter-Singularity analysis from a respected Singularity thinker. The reassurance that the flip-side is being considered deepens my confidence in the pro-Singularity arguments.

Joshua

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?list_id=11983
Re: [singularity] Vinge Goertzel = Uplift Academy's Good Ancestor Principle Workshop 2007
Joshua Fox wrote:
> Any comments on this: http://news.com.com/2100-11395_3-6160372.html
> Google has been mentioned in the context of AGI, simply because they have money, parallel processing power, excellent people, an orientation towards technological innovation, and important narrow-AI successes and research goals. Do Page's words mean that Google is seriously working towards AGI? If so, does anyone know the people involved? Do they have a chance and do they understand the need for Friendliness?

This topic has come up intermittently over the last few years...

Google can't be counted out, since they have a lot of $$ and machines and a lot of smart people. However, no one has ever pointed out to me a single Google hire with a demonstrated history of serious thinking about AGI -- as opposed to statistical language processing, machine learning, etc. That doesn't mean they couldn't have some smart staff who shifted their research interest to AGI after moving to Google, but it doesn't seem tremendously likely.

Please remember that the reward structure for technical staff within Google is as follows: big bonuses and copious approval go to those who do cool stuff that actually gets incorporated into Google's customer offerings. I don't have the impression they are funding a lot of blue-sky AGI research outside the scope of text search, ad placement, and other things related to their biz model.

So, my opinion remains that Google staff described as working on AI are almost surely working on clever variants of highly scalable statistical language processing. If you believe that this kind of work is likely to lead to powerful AGI, then yeah, you should attach a fairly high probability to the outcome that Google will create AGI. Personally I think it's very unlikely (though not impossible) that AGI is going to emerge via this route.
Evidence arguing against this opinion is welcomed ;-)

-- Ben G
Re: [singularity] Poll = AGI Motivation / Life Extension?
For this AGI Poll, you can reply to this list or back to me privately/directly (bruce -at- novamente.net). If you reply to me, please let me know what info I can use when replying back to the list, as I will create a summary of the AGI Poll results.

Thanks,
Bruce

Bruce Klein wrote:
> In June 2006, I started a topic called "Viability of AGI for Life Extension Singularity" which grew to 252 posts. Lively discussion, including updates on Novamente, here: http://www.imminst.org/forum/index.php?act=ST&f=11&t=11197
>
> Along these lines, I was wondering about the general motivation / attitude of [singularity] list subscribers toward AGI as it relates to Life Extension. If interested, please answer:
>
> My Life Extension motivation is...
> - 100% of the reason why I'm interested in AGI+Singularity
> - somewhere between 0 and 100%
>
> AND / OR
>
> I'm interested in AGI+Singularity because I...
> - find AGI an interesting puzzle
> - want to save the world
> - want to
>
> Thanks for playing!
> Bruce
Re: [singularity] Vinge Goertzel = Uplift Academy's Good Ancestor Principle Workshop 2007
I saw a talk a year or two ago where one of the Google founders was asked if they had projects to build general-purpose artificial intelligence. He answered that they did not have such a project at the company level; however, they did have many AI people in the company, some of whom were interested in this kind of thing. Indeed, a few people were playing around with such projects as part of their 20% free time at the company.

Shane
[singularity] Poll = AGI Motivation / Life Extension?
-- Forwarded message --
From: Joel Pitt [EMAIL PROTECTED]
Date: Feb 19, 2007 8:49 PM
Subject: Re: [singularity] Poll = AGI Motivation / Life Extension?
To: [EMAIL PROTECTED]

Hi Bruce,

I believe life is all about seeking experience. So my belief is that the singularity a) enables us to have longer/indefinite life spans with which to experience more, and b) will allow us to experience so much more than our current human senses allow us.

Of course, I also think AGI is an amazing puzzle and will answer questions (and raise new ones) about self-awareness, consciousness and intelligence. I also believe that humanity is currently heading towards collapse if some major changes don't happen soon -- so if the singularity can help us survive, I'm all for it! :)

In summary, I'd say life extension is only 25% of my interest in it.

Hope that helps,
Joel

--
-Joel
"Unless you try to do something beyond what you have mastered, you will never grow." -C.R. Lawton
Re: [singularity] Vinge Goerzel = Uplift Academy's Good Ancestor Principle Workshop 2007
Ben wrote: That doesn't mean they couldn't have some smart staff who shifted research interest to AGI after moving to Google, but it doesn't seem tremendously likely. I don't agree. Google is a form of research engine that enables information in grose load. How you decyfer it, is up to the individual. Having the advantage of learning about so many new interests may lead to new conclusive ideas. Ben: I don't have the impression they are funding a lot of blue-sky AGI ... I would have to agree but I think they will become more wise regarding the how important research is regarding AGI. Ben wrote: So, my opinion remains that: Google staff described as working on AI are almost surely working on clever variants of highly scalable statistical language processing. So, if you believe that this kind of work is likely to lead to powerful AGI, then yeah, you should attach a fairly high probability to the outcome that Google will create AGI. Personally I think it's very unlikely (though not impossible) that AGI is going to emerge via this route. I think an AGI will be a mix of both a Google staff as well as a working clever variant. Thanks Anna:) On 2/19/07, Shane Legg [EMAIL PROTECTED] wrote: I saw a talk about a year or two ago where one of the Google founders was asked if they had projects to build general purpose artificial intelligence. He answered that they did not have such a project at the company level, however they did have many AI people in the company, some of whom where interested in this kind of thing. Indeed a few people were playing around with such projects as part of their 20% free time in the company. Shane - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?list_id=11983 - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?list_id=11983