Richard,

This is probably covered elsewhere, but help me with this; just some
thoughts, with my own view at the end.

Many humans don't share the full complement of sensory apparatus: blind,
deaf, unable to feel pain, taste, the vestibular sense of motion, body
sensation, etc., whether through damage or congenitally.  So questions
about what makes red appear "red", or why chicken tastes like "chicken",
have no meaning for them; it is like a six-dimensional being asking you
what a certain five-dimensional object looks like.

Under your model or view, is consciousness on a graduated scale, and does it go 
to zero?

For instance, if you remove ALL of the qualia's sensory origins and
functions completely (removing the circuits and mechanisms, etc.), is the
human still conscious?  In this same vein, does adding more qualia, as you
proposed in the paper, lead to more consciousness?  E.g., tetrachromats
(usually women) with four color receptors: are they more conscious than
color-blind (usually male) people?

My view remains the same: there's probably nothing there but mechanisms,
with feedback and attention on other mechanisms.  Other unknowns leading
to actual consciousness, such as unsensed, unknown forces or intangibles,
planes of existence, simulations, gods, etc., are all possible.  However,
consciousness is not necessary for a human-skill-level AGI that can invent
tools to help us answer all the above questions and unknowns.  E.g., a
microscope or a medical tool need not be alive to teach us more about life
and biology.  We'll build it, ask it if it's comfortable, and ask it to
tell us God's phone number and the meaning of life, the universe, and
everything (42).

Richard Loosemore wrote:
> And, please don't misunderstand: this is not a "path to AGI".  Just an
> important side issue that the general public cares about enormously.

For any AGI system, the saying applies: "If it walks like a duck, and it
quacks like a duck... it is a duck."  We treat ducks like ducks not
because they walk and quack like ducks, but because we, through each
individual's value system, "feel" that this is how a duck-like walk and a
quack-like sound should be treated, whether it's a duck or not.  If the
AGI exhibits human-like behaviors, humans, the general public, will treat
it according to how they feel about those particular behaviors (positive
and aversive).

AIBO robot pet dogs, ELIZA-style chatbot programs, sex toys... all are
treated by their owners/users according to how the owner feels about the
behaviors expressed, and not because of what is innately going on in, or
actually characterizes, the object.  It's just as valid to say that
zombies give other zombies meaning/value because they have behaviors that
other zombies value as useful.  We see the earth as alive because we
attend to behaviors it has that are similar to our own behaviors that we
value.  Is the earth really alive, or does it have awareness/sensory
mechanisms?  Are stars conscious lifeforms?  An unconscious AGI can answer
these questions as quickly as a conscious one.

But I say keep up your research efforts; discoveries come from wherever
others fail to look.

Robert


--- On Mon, 11/17/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> From: Richard Loosemore <[EMAIL PROTECTED]>
> Subject: Re: [agi] A paper that actually does solve the problem of 
> consciousness
> To: agi@v2.listbox.com
> Date: Monday, November 17, 2008, 6:33 PM
> Mark Waser wrote:
> > An excellent question from Harry . . . .
> >
> >> So when I don't remember anything about those towns, from a few
> >> minutes ago on my road trip, is it because (a) the attentional
> >> mechanism did not bother to lay down any episodic memory traces, so
> >> I cannot bring back the memories and analyze them, or (b) that I was
> >> actually not experiencing any qualia during that time when I was on
> >> autopilot?
> >>
> >> I believe that the answer is (a), and that IF I had stopped at any
> >> point during the observation period and thought about the experience
> >> I just had, I would be able to appreciate the last few seconds of
> >> subjective experience.
> >
> > So . . . . what if the *you* that you/we speak of is simply the
> > attentional mechanism?  What if qualia are simply the way that other
> > brain processes appear to you/the attentional mechanism?
> >
> > Why would "you" be experiencing qualia when you were on autopilot?
> > It's quite clear from experiments that humans don't "see" things in
> > their visual field when they are concentrating on other things in
> > their visual field (for example, when you are told to concentrate on
> > counting something that someone is doing in the foreground while a
> > man in an ape suit walks by in the background).  Do you really have
> > qualia from stuff that you don't sense (even though your sensory
> > apparatus picked it up, it was clearly discarded at some level below
> > the conscious/attentional level)?
>
> Yes, I did not mean to imply that all unattended stimuli register in
> consciousness.  Clearly there are things that are simply not seen, even
> when they are in the visual field.
>
> But I would distinguish between that and a situation where you drive
> for 50 miles and do not have a memory afterwards of the places you went
> through.  I do not think that we "do not see" the road and the towns
> and other traffic in the same sense that we "do not see" an unattended
> stimulus in a dual-task experiment, for example.
>
> But then, there are probably intermediate cases.
>
> Some of the recent neural imaging work is relevant in this respect.  I
> will think some more about this whole issue.
>
> Richard Loosemore
>
> >
> > ----- Original Message ----- From: "Richard Loosemore"
> > <[EMAIL PROTECTED]>
> > To: <agi@v2.listbox.com>
> > Sent: Monday, November 17, 2008 1:46 PM
> > Subject: Re: [agi] A paper that actually does solve the problem of
> > consciousness
> >
> >> Harry Chesley wrote:
> >>> On 11/14/2008 9:27 AM, Richard Loosemore wrote:
> >>>>
> >>>> I completed the first draft of a technical paper on consciousness
> >>>> the other day.  It is intended for the AGI-09 conference, and it
> >>>> can be found at:
> >>>>
> >>>> http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
> >>>>
> >>> Good paper.
> >>>
> >>> A related question: How do you explain the fact that we sometimes
> >>> are aware of qualia and sometimes not? You can perform the same
> >>> actions paying attention or "on auto pilot."  In one case, qualia
> >>> "manifest," while in the other they do not. Why is that?
> >>
> >> I actually *really* like this question: I was trying to compose an
> >> answer to it while lying in bed this morning.
> >>
> >> This is what I started referring to (in a longer version of the
> >> paper) as a "Consciousness Holiday".
> >>
> >> In fact, if we start unpacking the idea of what we mean by conscious
> >> experience, we start to realize that it only really exists when we
> >> look at it.  It is not even logically possible to think about
> >> consciousness - any form of it, including *memories* of the
> >> consciousness that I had a few minutes ago, when I was driving along
> >> the road and talking to my companion without bothering to look at
> >> several large towns that we drove through - without applying the
> >> analysis mechanism to the consciousness episode.
> >>
> >> So when I don't remember anything about those towns, from a few
> >> minutes ago on my road trip, is it because (a) the attentional
> >> mechanism did not bother to lay down any episodic memory traces, so
> >> I cannot bring back the memories and analyze them, or (b) that I was
> >> actually not experiencing any qualia during that time when I was on
> >> autopilot?
> >>
> >> I believe that the answer is (a), and that IF I had stopped at any
> >> point during the observation period and thought about the experience
> >> I just had, I would be able to appreciate the last few seconds of
> >> subjective experience.
> >>
> >> The real reply to your question goes much much deeper, and it is
> >> fascinating because we need to get a handle on creatures that
> >> probably do not do any reflective, language-based philosophical
> >> thinking (like guinea pigs and crocodiles).  I want to say more, but
> >> will have to set it down in a longer form.
> >>
> >> Does this seem to make sense so far, though?
> >>
> >> Richard Loosemore