Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Hi, It follows that the AIXItl algorithm applied to friendliness would be effectively more friendly than any other time t and space l bounded agent. Personally I find that satisfying in the sense that once "compassion", "growth" and "choice" or the classical "friendliness" has been defined an opti
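For context, the AIXItl optimality claim being invoked is roughly the following (a sketch in Hutter's notation; the actual theorem carries additional qualifications and a large multiplicative overhead, and reading the reward signal as a formalized "friendliness" measure is an assumption of this sketch, not part of Hutter's result):

    % AIXI's expectimax action choice at cycle k with horizon m (Hutter).
    % AIXItl restricts the mixture to programs of length <= l whose
    % per-cycle runtime is <= t, recovering computability at enormous cost.
    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}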

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Stefan Pernar
Hi again, On 9/11/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: > It follows that the AIXItl algorithm applied to friendliness would be > effectively more friendly than any other time t and space l bounded > agent. Yes, but the problem is that AIXItl, in order to run effectively, requires unpractic

[singularity] Switching aging off in stem cells

2006-09-11 Thread Ben Goertzel
Meanwhile, the biologists continue making excellent, steady progress toward understanding how the curse of aging operates: http://www.hhmi.org//news/morrison20060906.html "A single molecular switch plays a central role in inducing stem cells in the brain, pancreas, and blood to lose function

[singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Hi, On Ben's essay: Ben is arguing that due to incomputable complexity 'friendliness' can only be guaranteed under unsatisfactorily narrow circumstances. Whether one agrees or not, it would follow that if this is the case then substituting friendliness with one or all of the alternative goal

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Since we assume that AIXItl is effectively better at achieving its goal than any other agent with the same space and time resource limitations, the specific values for t and l do not matter. Do they? > In short: it's some pretty math with some conceptual evocativeness, > but not of any pragmatic v

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Russell Wallace
On 9/11/06, Stefan Pernar <[EMAIL PROTECTED]> wrote: Assuming that AIXItl be effectively better at achieving its goal than any other agent with the same space and time resource limitations (t,l) would still make it the algorithm of choice no matter how computationally intense, as any other system give

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Matt Mahoney
This discussion on how to guarantee that AI be friendly seems to presume that a super AI and human brains will be separate entities. I believe this will not be the case, and the issue will be moot. Humans would like to be smarter: to be able to think faster, learn faster, communicate faster, a

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Shane Legg
On 9/11/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: This discussion on how to guarantee that AI be friendly seems to presume that a super AI and human brains will be separate entities.  I believe this will not be the case, and the issue will be moot. I was actually planning a followup to my "negati

Re: [singularity] Switching aging off in stem cells

2006-09-11 Thread Matt Mahoney
From: Ben Goertzel <[EMAIL PROTECTED]> > http://www.hhmi.org//news/morrison20060906.html Unfortunately, limiting the number of cell divisions is necessary to prevent cancer. We also need a way to reduce the mutation rate. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
The subtle question raised by the AI/human fusion approach is: Once you become something vastly more intelligent and general than "human", in what sense are you still "you" ... ?? In what sense have you simply killed yourself slowly (or not so slowly) and replaced yourself with something cleverer

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Shane Legg
On 9/11/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: The subtle question raised by the AI/human fusion approach is: Once you become something vastly more intelligent and general than "human", in what sense are you still "you" ... ?? In what sense have you simply killed yourself slowly (or not so slow

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Lúcio de Souza Coelho
On 9/11/06, Shane Legg <[EMAIL PROTECTED]> wrote: (...) I'm no longer the child who used to play with the kids across the road in New Zealand in the late 1970s. I have some memories of being that person, but that's it. I don't see this as fundamentally different, especially if it is allowed to

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Michael Anissimov
Technologically, AI is far far easier than uploading. So, AI will come first, and we will have to build AI that is reliably nice to us, or suffer the consequences. It's not "controlling an entity smarter than you" if it was built from scratch to be nice, and continues to be nice of its own accor

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Lúcio de Souza Coelho
On 9/11/06, Michael Anissimov <[EMAIL PROTECTED]> wrote: Technologically, AI is far far easier than uploading. So, AI will come first, and we will have to build AI that is reliably nice to us, or suffer the consequences. (...) I am not so sure that AI is easier than uploading. Surely upload n

Re: [singularity] Singularity - Human or Machine?

2006-09-11 Thread Bruce LaDuke
Joel, You make some good points that make me think deeper about this, but I'm still locked into a non-intentional KC. I'm going to try to explain this in light of my definition of knowledge creation, which is different from others out there today. Per one of my last notes, knowledge is a str

Re: [singularity] Singularity - Human or Machine?

2006-09-11 Thread Joel Pitt
Bruce, Thanks for taking the time for that detailed response. I'll respond to your views on AKC once I've had a read through your book chapter. In the meantime: Some of the technical discussion in this forum is tough for me to follow, but it seems to me that this is where most of the discussi

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Lucio wrote: In order to produce strong AI, though, we need to understand the mind from low to high levels, or understand the processes that make high levels emerge from low ones. That seems a much more tortuous scientific path, one that cannot be achieved by conventional and comparatively predictabl

[singularity] Is Friendly AI Bunk?

2006-09-11 Thread Stefan Pernar
On 9/12/06, Russell Wallace <[EMAIL PROTECTED]> wrote: On 9/11/06, Stefan Pernar <[EMAIL PROTECTED]> wrote: > Assuming that AIXItl be effectively better at achieving its goal than > any other agent with the > same space and time resource limitations (t,l) would still make it the > algorithm of c

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Russell Wallace
On 9/12/06, Stefan Pernar <[EMAIL PROTECTED]> wrote: An interesting question for me now is when the comparative inefficiencies of the AIXItl algorithm will become irrelevant due to ever-decreasing cost and availability of computational resources. It won't. Basically AIXI and suchlike work by exhaust
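A toy illustration of Russell's point (this is not AIXItl, just a hypothetical brute-force stand-in; the observation/action spaces, reward rule, and dynamics below are made up for the example): enumerate every deterministic policy over a tiny observation space and score each one. The number of candidates grows as |A|^|O| (and as 2^l for program search), so the cost is combinatorial in a way that falling hardware prices never absorb.

    # Toy brute-force "optimal" agent: try every deterministic policy over a
    # tiny observation/action space and keep the best scorer. The toy reward
    # rule and dynamics are arbitrary; the point is the size of the search.
    from itertools import product

    OBSERVATIONS = range(4)   # tiny observation space
    ACTIONS = range(3)        # tiny action space

    def score(policy, steps=20):
        """Run a policy (dict: observation -> action) in a toy environment."""
        obs, total = 0, 0
        for _ in range(steps):
            act = policy[obs]
            total += 1 if act == obs % len(ACTIONS) else 0  # toy reward rule
            obs = (obs + act + 1) % len(OBSERVATIONS)       # toy dynamics
        return total

    best_policy, best_score = None, float("-inf")
    for assignment in product(ACTIONS, repeat=len(OBSERVATIONS)):
        policy = dict(zip(OBSERVATIONS, assignment))
        s = score(policy)
        if s > best_score:
            best_policy, best_score = policy, s

    print(best_policy, best_score)
    # 4 observations x 3 actions -> 3^4 = 81 candidates; at 100 observations
    # it would be 3^100, which no amount of cheaper hardware makes tractable.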

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Hi, Just for kicks - let's assume that AIXItl yields 1% more intelligent results than another algorithm X when provided 10^6 times the computational resources. Let's further assume that today the cost associated with X for reaching a benefit of 1 will be 1 compared to a cost of 10^6

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Stefan Pernar
On 9/12/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: In this scenario it will be computationally cheaper to apply AIXItl in less than 20 years. But this is nowhere near reality -- the level of inefficiency of AIXItl is such that it will never be usable within the physical universe, unless c
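The arithmetic behind the 20-year figure, for what it's worth (the doubling period is an assumed parameter, not something either poster specified):

    # Back-of-envelope crossover for the scenario above: AIXItl costs 10^6
    # times more than algorithm X today; if the cost of computation halves
    # every `doubling_years`, the gap closes after log2(10^6) ~= 20 doublings.
    # The 1.0-year period is an assumption that reproduces the "less than 20
    # years" claim; at 1.5 years per doubling it is closer to 30 years.
    import math

    cost_ratio = 1e6
    for doubling_years in (1.0, 1.5):
        years = math.log2(cost_ratio) * doubling_years
        print(f"cost halves every {doubling_years} yr -> crossover in {years:.1f} yr")

Ben's counterpoint is that the 10^6 figure is the fiction: AIXItl's overhead is not a fixed constant of that size, so no such crossover ever arrives.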

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Thanks Ben, Russell et al. for being so patient with me ;-) To summarize: AIXItl's inefficiencies are so large and the additional benefit it provides is so small that it will likely never be a logical choice over other more efficient, less optimal algorithms. Stefan The additional benefit it *wou