Re: [agi] If aliens are monitoring us, our development of AGI might concern them

2008-11-25 Thread Eric Burton
It's definitely occurred to me before that aliens who came across a planet where the indigenous life had been replaced by its machines, in what has been called a "negative singularity", would see that as something like a weedy lot. Technology crowds culture out so the positive singularity s

RE: [agi] If aliens are monitoring us, our development of AGI might concern them

2008-11-25 Thread Ed Porter
Since there have been multiple discussions of aliens lately on this list, I think I should communicate a thought I have had concerning them that I have not heard anyone else say --- although I would be very surprised if others have not thought it --- and it does relate to AGI --- so it is

Re: [agi] Cog Sci Experiment

2008-11-25 Thread Mike Tintner
Acilio, Sorry not to reply sooner. I'm interested in this inquiry, but only as an observer, not a serious participant. It's something - i.e. the organization of memory - I know and have thought little about. But I think your idea of online experimentation is a good one (especially if you could

[agi] Re: JAGI submission

2008-11-25 Thread Eliezer Yudkowsky
On Mon, Nov 24, 2008 at 4:20 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > I submitted my paper "A Model for Recursively Self Improving Programs" to > JAGI and it is ready for open review. For those who have already read it, it > is essentially the same paper except that I have expanded the abstr

Re: [agi] JAGI submission

2008-11-25 Thread Ben Goertzel
> I could also argue that the limitations on RSI would constrain a hard-takeoff > singularity to an explosion of computational power, not of knowledge. But I > think that might be a stretch. Not everyone agrees that there will even be a > singularity in the first place. You could argue that, b

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread John LaMuth
Mike The abstract nouns Honor, Justice, and Truth can all be shown to be objectively based in the science of Behaviorism http://www.angelfire.com/rnb/fairhaven/behaviorism.html as outlined in technically linked schematics http://www.angelfire.com/rnb/fairhaven/schematics.html and even granted US pat

Re: [agi] JAGI submission

2008-11-25 Thread Trent Waddington
On Wed, Nov 26, 2008 at 8:51 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > I am not aware of any published papers proposing pure RSI (without input) as > a path to AGI. But in 2002 there were experiments with AI boxing to test the > feasibility of detecting and containing unfriendly AI, discusse

Re: [agi] JAGI submission

2008-11-25 Thread Matt Mahoney
I am not aware of any published papers proposing pure RSI (without input) as a path to AGI. But in 2002 there were experiments with AI boxing to test the feasibility of detecting and containing unfriendly AI, discussed on SL4. http://www.sl4.org/archive/0207/4935.html (The results showed that u

Re: [agi] JAGI submission

2008-11-25 Thread Trent Waddington
On Wed, Nov 26, 2008 at 12:30 AM, Russell Wallace <[EMAIL PROTECTED]> wrote: > It certainly wasn't a strawman as of a couple of years ago; I've had > arguments with people who seemed to seriously believe in the > possibility of creating AI in a sealed box in someone's basement > without any feedbac

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Tudor Boloni
your list is a start toward a list of only potentially problematic questions or constructs, since using these words and concepts is actually going to be required in any AGI system... a flag list is a start, but a set of rules to eliminate areas of language construction we do not need to ever worry abou

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Mike Tintner
Tudor: I agree that there are many better questions to elucidate the tricks/pitfalls of language. but let's list the biggest time wasters first, Er, it's a rather big job. I think you're talking about all abstract nouns. Time. Space. Honour. Justice. Truth. Realism. Beauty. Science. Art.

Re: [agi] Glocal memory

2008-11-25 Thread Mike Tintner
Ben: yeah, it's coming back to me now .. I remember holons and holarchies and all that stuff ;-) However, Koestler was writing before complex dynamics and attractors and such were well-understood and well-known ... and all this gives a quite different flavor to the web of ideas he was exploring,

Re: [agi] Glocal memory

2008-11-25 Thread Ben Goertzel
On Tue, Nov 25, 2008 at 11:48 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > yeah, it's coming back to me now .. I remember holons and holarchies > and all that stuff ;-) > > However, Koestler was writing before complex dynamics and attractors > and such were well-understood and well-known ... and a

Re: [agi] Glocal memory

2008-11-25 Thread Ben Goertzel
yeah, it's coming back to me now .. I remember holons and holarchies and all that stuff ;-) However, Koestler was writing before complex dynamics and attractors and such were well-understood and well-known ... and all this gives a quite different flavor to the web of ideas he was exploring, I thin

Re: [agi] Glocal memory

2008-11-25 Thread Mike Tintner
Ben, Yeah, I'd heavily recommend it. I don't know anything like Koestler for setting out the general importance of the hierarchical principle. And I didn't do it justice, because it *is* two-way. It's not just about triggers (or your keys) acting downwards, through a whole "holarchy" of "holon

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Richard Loosemore
Tudor Boloni wrote: Richard, please give me a link to the paper or at least the example related to manipulation of subjective experience in others, i am indeed curious to see how their approach would fare... thanks for the effort in advance Sure thing: http://susaro.com/wp-content/uploads/20

Re: [agi] Hunting for a Brainy Computer

2008-11-25 Thread Richard Loosemore
Ben Goertzel wrote: http://www2.le.ac.uk/departments/engineering/extranet/research-groups/neuroengineering-lab/ There are always more papers that can be discussed. OK, sure, but this is a more recent paper **by the same authors, discussing the same data** and more recent similar data. But

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Tudor Boloni
Richard, please give me a link to the paper, or at least the example related to manipulation of subjective experience in others; I am indeed curious to see how their approach would fare... thanks for the effort in advance tudor > For example, they could not, in principle, answer any questions a

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Richard Loosemore
Tudor Boloni wrote: wrong category is trivial indeed, but quickly removing computing resources from impossible processes can be a great benefit to any system, and an incredible benefit if the system learns to spot deeply nonsensical problems in advance of dedicating almost any resources to it.

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Tudor Boloni
I agree that there are many better questions to elucidate the tricks/pitfalls of language. but let's list the biggest time wasters first, and the post showed some real time wasters from various fields that I found valuable to be aware of > It implies it is pointless to ask what the essence of time

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Tudor Boloni
wrong category is trivial indeed, but quickly removing computing resources from impossible processes can be a great benefit to any system, and an incredible benefit if the system learns to spot deeply nonsensical problems in advance of dedicating almost any resources to it... what if we could desig

RE: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Ed Porter
Tudor, If you were referring to the following post as the source of an appropriate filter for what should and should not be considered an appropriate question, I think you picked the wrong source. oh yes, there are indeed stupid questions... e.g. what is the essence of time? what is the natu

Re: [agi] Hunting for a Brainy Computer

2008-11-25 Thread Ben Goertzel
>> >> http://www2.le.ac.uk/departments/engineering/extranet/research-groups/neuroengineering-lab/ > > > There are always more papers that can be discussed. OK, sure, but this is a more recent paper **by the same authors, discussing the same data** and more recent similar data. > > But that does

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Richard Loosemore
Tudor Boloni wrote: we invariably generate and then fruitlessly explore (our field is even more exposed to this than most others) until we come up against the limits of our own language, and defeated and fatigued realize we never thought the questions through. i nominate this guy: http://hype

Re: [agi] Hunting for a Brainy Computer

2008-11-25 Thread Richard Loosemore
Ben Goertzel wrote: Richard, It might be more useful to discuss more recent papers by the same authors regarding the same topic, such as the more accurately-titled "Sparse but not 'Grandmother-cell' coding in the medial temporal lobe." Quian Quiroga R, Kreiman G, Koch C and Fried I. Trends in

Re: [agi] JAGI submission

2008-11-25 Thread Russell Wallace
On Tue, Nov 25, 2008 at 12:58 AM, Trent Waddington <[EMAIL PROTECTED]> wrote: > summed up in the last two words of the abstract: "without input". Who > ever said that RSI had anything to do with programs that had no input? It certainly wasn't a strawman as of a couple of years ago; I've had argum

Re: [agi] Glocal memory

2008-11-25 Thread Ben Goertzel
You know, I read that book 25 years ago ... maybe I should look at it again... However, my point was definitely not "the hierarchical principle as the organizing principle of life"... that is a rather different point. If any example conveys my point clearly, it would be the "glocal Hopfield net",
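The "glocal Hopfield net" Goertzel names here builds on the classic Hopfield associative memory, in which each stored pattern is simultaneously a distributed weight structure (global) and a recoverable attractor (local). The preview does not spell out the glocal coupling itself, so what follows is only a minimal sketch of the plain Hopfield substrate, in Python with NumPy; the function names and the toy pattern are illustrative assumptions, not anything from the thread:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: the weight matrix is the averaged sum of
    outer products of the stored +1/-1 patterns, with no self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Synchronous sign updates until the state settles into an attractor."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1  # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state
```

Storing a pattern and then presenting a corrupted copy lets the network relax back to the stored attractor: the memory is spread across the whole weight matrix yet recalled as one discrete item, which is the distributed-yet-localized flavor the thread is circling around.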

[agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Tudor Boloni
we invariably generate and then fruitlessly explore (our field is even more exposed to this than most others) until we come up against the limits of our own language, and, defeated and fatigued, realize we never thought the questions through. I nominate this guy: http://hyperlogic.blogspot.com/ at a

Re: [agi] Glocal memory

2008-11-25 Thread Mike Tintner
Ben, Have you read Koestler's The Ghost in the Machine? You seem to be reaching in your post for what he sets out there - albeit v. loosely - namely the hierarchical principle as the organizing principle of life, both of organisms and of societies (and perhaps one can add machines). You talk