Re: [agi] can superintelligence-augmented humans compete

2007-11-04 Thread Bryan Bishop
On Sunday 04 November 2007 14:37, Edward W. Porter wrote: > Re: augmenting/replacing the PFC. We can advance this field of knowledge by attempting to extend Dr. White's work on brain transplantation in monkeys, but with mice instead, in an attempt to keep brain regions of the mice on lif…

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-04 Thread Russell Wallace
On 11/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote: > Let's say your goal is to stimulate your nucleus accumbens. (Everyone has this goal; they just don't know it). The problem is that you would forgo food, water, and sleep until you died (we assume, from animal experiments). We have no need…

RE: [agi] Can humans keep superintelligences under control

2007-11-04 Thread Edward W. Porter
In response to Richard Loosemore's post of Sun 11/4/2007 12:15 PM, responding to my prior message of Sat 11/3/2007 3:28 PM: ED's prior msg> For example, humans might for short-sighted personal gain (such as when using them in weapon systems)… RL> Whoaa! You assume that it would be possible…

Re: [agi] Can humans keep superintelligences under control

2007-11-04 Thread Charles D Hixson
Richard Loosemore wrote: Edward W. Porter wrote: Richard, in your November 02, 2007 11:15 AM post you stated: ... I think you should read some stories from the 1930s by John W. Campbell, Jr., specifically the three stories collectively called "The Story of the Machine". You can find them i…

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-04 Thread Matt Mahoney
--- Jiri Jelinek <[EMAIL PROTECTED]> wrote: > Matt, create a numeric "pleasure" variable in your mind, initialize it with a positive number, and then keep doubling it for some time. Done? How do you feel? Not a big difference? Oh, keep doubling! ;-)) The point of autobliss.cpp is to illus…
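
A minimal C++ sketch of the doubling experiment Jiri describes, under the assumption that "pleasure" is an ordinary IEEE-754 double; this is not the actual autobliss.cpp, whose source does not appear in this archive:

    // Sketch of the thought experiment: a numeric "pleasure" variable,
    // initialized positive and doubled repeatedly. NOT autobliss.cpp itself.
    #include <cmath>
    #include <cstdio>

    int main() {
        double pleasure = 1.0;            // "initialize it with a positive number"
        int doublings = 0;
        while (std::isfinite(pleasure)) { // "keep doubling it for some time"
            pleasure *= 2.0;
            ++doublings;
        }
        // An IEEE-754 double overflows to +inf after 1024 doublings of 1.0.
        std::printf("pleasure became inf after %d doublings\n", doublings);
        return 0;
    }

Starting from 1.0, the variable saturates at infinity after 1024 doublings; the number grows without bound, but nothing else about the program changes.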

Re: [agi] can superintelligence-augmented humans compete

2007-11-04 Thread Bryan Bishop
On Saturday 03 November 2007 16:53, Edward W. Porter wrote: > In my recent list below of ways to improve the power of human intelligence augmentation, I forgot to think about possible ways to actually increase the bandwidth of the top-level decision making of the brain, which I had listed as a…

Re: [agi] NLP + reasoning?

2007-11-04 Thread Benjamin Goertzel
Jiri, IMO, proceeding with AGI development using formal-language input rather than NL input is **not** necessarily a bad approach. However, one downside is that your incremental steps toward AGI, in this approach, will not be very convincing to skeptics. Another downside is that in this approach…

Re: [agi] NLP + reasoning?

2007-11-04 Thread Matt Mahoney
--- Jiri Jelinek <[EMAIL PROTECTED]> wrote: > If you can't get meaning from a clean input format, then what makes you think you can handle NL? Humans seem to get meaning more easily from ambiguous statements than from mathematical formulas. Otherwise you are programming, not teaching. > When worki…

Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-04 Thread Benjamin Goertzel
> I think that if it were dumb enough that it could be treated as a tool, then it would have to not be able to understand that it was being used as a tool. And if it could not understand that, it would just not have any hope of being generally intelligent. You seem to be assuming this hyp…

Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-04 Thread Richard Loosemore
Jiri Jelinek wrote: On Nov 3, 2007 1:17 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote: Isn't there a fundamental contradiction in the idea of something that can be a "tool" and also be "intelligent"? No. It could be just a sophisticated search engine. What I mean is, is the word "tool" usa…

Re: [agi] Can humans keep superintelligences under control

2007-11-04 Thread Richard Loosemore
Edward W. Porter wrote: Richard, in your November 02, 2007 11:15 AM post you stated: “If AI systems are built with motivation systems that are stable, then we could predict that they will remain synchronized with the goals of the human race until the end of history.” and “I can think of many…

RE: [agi] Nirvana? Manyana? Never!

2007-11-04 Thread Edward W. Porter
Jiri, Thanks for your reply. I think we have both stated our positions fairly well. It doesn't seem either side is moving toward the other. So I think we should respect the fact that we have very different opinions and values, and leave it at that. Ed Porter -Original Message- From: Jiri Je…

Re: [agi] Connecting Compatible Mindsets

2007-11-04 Thread YKY (Yan King Yin)
I think we can use the AGIRI wiki for this purpose: http://www.agiri.org/wiki/AGI_Projects After all, we've been using this list for several years, and the list has maintained a fairly neutral stance throughout. My entry for G0 is here: http://www.agiri.org/wiki/Generic_AI AGIRI should let wiki us…

Re: [agi] Nirvana? Manyana? Never!

2007-11-04 Thread Jiri Jelinek
Ed, > But I guess I am too much of a product of my upbringing and education to want only bliss. I like to create things and ideas. I assume it's because it provides pleasure you are unable to get in other ways. But there are other ways, and if those were easier for you, you would prefer them over t…