Re: [agi] New AGI Interface potential - Contacts

2008-01-24 Thread Mike Tintner
Bob: This is probably off topic, but I think there will be a huge market in augmented reality using devices like this and the more traditional eyetap devices. Is it off topic? My impression is there is an extensive and quite developed philosophy/study of AI/computer interfaces which I've never s

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Robert Wensman
1. Brembs and his colleagues reasoned that if fruit flies (Drosophila melanogaster) *were simply reactive robots entirely determined by their environment*, in completely featureless rooms they should move completely randomly. Yes, but no one has ever argued that a flier is a stateless machine. It
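Wensman's objection turns on the difference between a stateless ("purely reactive") machine and one with internal state. The distinction can be sketched concretely: a stateless agent maps identical inputs to identical outputs, while even minimal internal state lets behaviour vary in a featureless environment where the input never changes. A minimal illustration (all names are hypothetical, not from the study or the thread):

```python
# A purely reactive agent: output is a fixed function of the current input only.
def reactive_agent(stimulus):
    return "turn_left" if stimulus == "wall" else "fly_straight"

# An agent with internal state: in a featureless room the stimulus never
# changes, yet behaviour still varies because the agent's own state evolves.
def make_stateful_agent():
    state = {"steps": 0}
    def act(stimulus):
        state["steps"] += 1
        # Behaviour driven purely by internal state, not by the input.
        return "turn_left" if state["steps"] % 3 == 0 else "fly_straight"
    return act

featureless = ["nothing"] * 6
print([reactive_agent(s) for s in featureless])  # identical output every step
agent = make_stateful_agent()
print([agent(s) for s in featureless])           # varies despite constant input
```

The point of the sketch is only that non-random, non-reactive behaviour does not require anything beyond deterministic internal state.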

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Bob Mottram
On 24/01/2008, Robert Wensman <[EMAIL PROTECTED]> wrote: > Yes, but no one has ever argued that a flier is a stateless machine. It > seems like their argument ignores the concept of internal state. If they > went through all this trouble just to prove that the brain of the flies has > an internal s

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Vladimir Nesov
On Jan 24, 2008 4:14 PM, Bob Mottram <[EMAIL PROTECTED]> wrote: > I don't think anyone with knowledge of insect nervous systems would > argue that they're stateless machines. Even simple invertebrates such > as slugs can exhibit classical conditioning effects, which means that at > least some minimal
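Mottram's point that classical conditioning implies at least minimal retained state can be made concrete with a Rescorla-Wagner-style associative update, a standard textbook model of conditioning (the model choice and parameter values are illustrative, not anything proposed in the thread):

```python
def condition(trials, alpha=0.3, reward=1.0):
    """Rescorla-Wagner-style update: the associative strength v is the
    retained internal state that a stateless machine could not carry
    from one trial to the next."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * (reward - v)  # prediction error drives learning
        history.append(v)
    return history

strengths = condition(10)
print(strengths[0], strengths[-1])  # strength grows across trials
```

Even this one scalar of state is enough to make trial N's response depend on trials 1..N-1, which is exactly what "stateless" rules out.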

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Ben Goertzel
I think a more precise way to phrase what they showed, philosophically, would be like this: "Very likely, to the extent that flies are conscious, they have a SUBJECTIVE FEELING of possessing free will." In other words, flies seem to possess the same kind of internal spontaneity-generation

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Robert Wensman
> I don't think anyone with knowledge of insect nervous systems would > argue that they're stateless machines. Even simple invertebrates such > as slugs can exhibit classical conditioning effects, which means that at > least some minimal state is retained. > > To me the idea of free will suggest

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Ben Goertzel
> In other words, flies seem to possess the same kind of internal > spontaneity-generation that we possess, and that we associate with our > subjectively-experienced feeling of free will. > > -- Ben G To clarify further: Suppose you are told to sit still for a while, and then move your hand sudde

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Mike Tintner
You and others are right in that Brembs was perhaps confused about the difference between spontaneity and free will. But perhaps the experiment, in demonstrating spontaneity, does weigh against the idea of the fly being programmed? Robert: 1. Brembs and his colleagues reasoned that if frui

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Vladimir Nesov
On Jan 24, 2008 5:35 PM, Mike Tintner <[EMAIL PROTECTED]> wrote: > > But perhaps the experiment, in demonstrating spontaneity, does weigh against > the idea of the fly being programmed? > What does this idea state? What do you mean when you say that something is programmed? Can you provide example

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Jim Bromer
- Original Message From: Bob Mottram <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Thursday, January 24, 2008 8:14:09 AM Subject: Re: [agi] Study hints that fruit flies have free will I don't think anyone with knowledge of insect nervous systems would argue that they're stateless mac

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions. One example is the idea that there will be a situation in

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Mike Tintner
That there is some series of instructions, contained presumably in neurons (or in a computer) which produces a consistent series of movements/thoughts/actions in a family of situations. So when you/I write "when" it is almost certainly a programmed action, which can be and is automatically vari
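Tintner's working definition of "programmed" (a series of instructions producing a consistent series of actions across a family of situations, automatically varied) can be sketched directly. The routine below is fully deterministic, yet its behaviour varies across the family of situations it covers; every name here is illustrative only:

```python
def escape_routine(situation):
    """One fixed routine covering a family of situations: behaviour
    varies with the situation, but identically every time a given
    situation recurs -- Tintner's sense of 'programmed'."""
    if situation == "predator_left":
        return ["turn_right", "accelerate"]
    if situation == "predator_right":
        return ["turn_left", "accelerate"]
    return ["cruise"]

# The same situation always yields the same action sequence:
print(escape_routine("predator_left"))
print(escape_routine("predator_left") == escape_routine("predator_left"))
```

The open question in the thread is whether a fly's *whole* course of action reduces to some (possibly large) set of such routines.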

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Ben Goertzel
> The question vis-a-vis the fly - or any animal - is whether the *whole* > course of action of the fly in that experiment can be accounted for by one - > or a set of - programmed routines or programs, period. My impression - > without having studied the experiment in detail - is that it weighs agai

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Mike Tintner
I take your general point re how complex systems can produce apparently spontaneous behaviour. But to what actual courses of action of actual animals (such as the fly here) or humans has this theory been successfully applied? Ben: The question vis-a-vis the fly - or any animal - is whether

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Ben Goertzel
If you're asking whether there are accurate complex-systems simulations of whole animals, there aren't yet ... At present, we lack instrumentation capable of gathering detailed data about how animals work; and we lack computers powerful enough to run such simulations (though some supercomputers ma
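The point Goertzel appeals to, that complex deterministic systems can produce apparently spontaneous behaviour, has a standard toy illustration: the logistic map at r = 4 is a one-line deterministic rule whose trajectory looks erratic and is extremely sensitive to initial conditions. This is a generic chaos example, not a model of the fly or anything from the thread:

```python
def logistic_trajectory(x0, steps, r=4.0):
    """Iterate x -> r*x*(1-x): completely deterministic, yet the
    sequence looks irregular and tiny initial differences explode."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.200001, 50)  # near-identical starting point
print(max(abs(x - y) for x, y in zip(a, b)))  # trajectories diverge
```

An outside observer watching only the output could easily describe it as "spontaneous", even though re-running with the same seed reproduces it exactly.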

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Mike Tintner
Theory suggests that such simulations will be possible, but it hasn't been proved conclusively ... so I guess you can still maintain some kind of "vitalism" for a couple decades or so if you really want to ;-) Possible major misunderstanding : I am not in any shape or form a vitalist. My argu

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Randall Randall
On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote: Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions.

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Ben Goertzel
> Possible major misunderstanding : I am not in any shape or form a vitalist. > My argument is solely about whether a thinking machine (brain or computer) > has to be instructed to think rigidly or freely, with or without prior > rules - and whether, with the class of problems that come under AG

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > --- Richard Loosemore <[EMAIL PROTECTED]> wrote: > >> The problem with the scenarios that people imagine (many of which are > >> Nightmare Scenarios) is that the vast majority of them involve > >> completely untenable as

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Randall Randall wrote: On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote: Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely un

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Vladimir Nesov
On Jan 24, 2008 6:30 PM, Mike Tintner <[EMAIL PROTECTED]> wrote: > That there is some series of instructions, contained presumably in neurons > (or in a computer) which produces a consistent series of > movements/thoughts/actions in a family of situations. So when you/I write > "when" it is almost

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable

Re: [agi] CEMI Field

2008-01-24 Thread Vladimir Nesov
On Jan 24, 2008 4:29 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > Just about all humans claim to have an awareness of sensations, thoughts, and > feelings, and control over decisions they make, what we commonly call > "consciousness". A P-zombie would make such claims too (because by definition

Re: [agi] CEMI Field

2008-01-24 Thread Matt Mahoney
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote: > On Jan 24, 2008 4:29 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > > > Just about all humans claim to have an awareness of sensations, thoughts, > and > > feelings, and control over decisions they make, what we commonly call > > "consciousness".

Re: [agi] CEMI Field

2008-01-24 Thread Vladimir Nesov
On Jan 24, 2008 11:28 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > Episodic memory is an aspect of belief in consciousness. Consciousness does > not exist. > OK, thank you, now I understand what you are talking about. You use 'belief in consciousness' to designate behavioral patterns that are

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > Because recursive self improvement is a competitive evolutionary process > even > > if all agents have a common ancestor. > > As explained in parallel post: this is a non sequitur. OK, consider a network of agents, such

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: Matt Mahoney wrote: Because recursive self improvement is a competitive evolutionary process even if all agents have a common ancestor. As explained in parallel post: this is a non sequitur. OK, consider a network of ag