Richard,

In your paper you say

"
The argument does not
say anything about the nature of conscious experience, qua
subjective experience, but the argument does say why it
cannot supply an explanation of subjective experience. Is
explaining why we cannot explain something the same as
explaining it?
"

I think it isn't the same...

The problem is that there may be many possible explanations for why we can't
explain consciousness.  And it seems there is no empirical way to decide
among these explanations.  So we need to decide among them via some sort of
metatheoretical criteria -- Occam's Razor, or conceptual consistency with
our scientific ideas, or some such.  The question for you then is, why is
yours the best explanation of why we can't explain consciousness?

But I have another confusion about your argument.  I understand the idea
that a mind's analysis process eventually has to "bottom out" somewhere,
so that it will describe some entities using descriptions that are (from its
perspective) arbitrary and can't be decomposed any further.  These
bottom-level entities could be sensations or they could be sort-of arbitrary
internal tokens out of which internal patterns are constructed....

But what do you say about the experience of being conscious of a chair,
then?  Are you saying that the consciousness I have of the chair is the
*set* of all the bottom-level unanalyzables into which the chair is
decomposed by my mind?

ben


On Fri, Nov 14, 2008 at 11:44 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Fri, 11/14/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >
> http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
>
> Interesting that some of your predictions have already been tested; in
> particular, synaesthetic qualia were described by George Stratton in 1896.
> When people wear glasses that turn images upside down, they adapt after
> several days and begin to see the world normally.
>
> http://www.cns.nyu.edu/~nava/courses/psych_and_brain/pdfs/Stratton_1896.pdf
> http://wearcam.org/tetherless/node4.html
>
> This is equivalent to your prediction #2 where connecting the output of
> neurons that respond to the sound of a cello to the input of neurons that
> respond to red would cause a cello to sound red. We should expect the effect
> to be temporary.
>
> I'm not sure how this demonstrates consciousness. How do you test that the
> subject actually experiences redness at the sound of a cello, rather than
> just behaving as if experiencing redness, for example, claiming to hear red?
>
> I can do a similar experiment with autobliss (a program that learns a 2-input
> logic function by reinforcement). If I swapped the inputs, the program
> would make mistakes at first, but adapt after a few dozen training sessions.
> So autobliss meets one of the requirements for qualia. The other is that it
> be advanced enough to introspect on itself, and that which it cannot analyze
> (describe in terms of simpler phenomena) are its qualia. What you describe as
> "elements" are neurons in a connectionist model, and the "atoms" are the set
> of active neurons. "Analysis" means describing a neuron in terms of its
> inputs. Then the qualia are the first layer of a feedforward network. In this
> respect, autobliss is a single neuron with 4 inputs, and those inputs are
> therefore its qualia.
>
> You might object that autobliss is not advanced enough to ponder its own
> self existence. Perhaps you define "advanced" to mean it is capable of
> language (pass the Turing test), but I don't think that's what you meant. In
> that case, you need to define more carefully what qualifies as "sufficiently
> powerful".
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>
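Matt's autobliss setup is concrete enough to sketch. The following is not his
code; the class name, tabular representation, reward values, and learning rate
are all my own assumptions. It is a minimal stand-in for a program that learns
a 2-input logic function by reinforcement, makes mistakes when its inputs are
swapped, and then re-adapts:

```python
import random

class Autobliss:
    """Tabular reinforcement learner for a 2-input boolean function.
    Hypothetical stand-in for the autobliss program; details are assumptions."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        # Running reward estimate for answering 0 or 1 on each input pair.
        self.q = {(a, b): [0.0, 0.0] for a in (0, 1) for b in (0, 1)}

    def answer(self, a, b):
        q0, q1 = self.q[(a, b)]
        if q0 == q1:                     # break ties randomly
            return self.rng.randint(0, 1)
        return 0 if q0 > q1 else 1

    def train(self, target, trials=100, swap=False):
        """Reinforce correct answers; return the number of mistakes made."""
        mistakes = 0
        for _ in range(trials):
            a, b = self.rng.randint(0, 1), self.rng.randint(0, 1)
            x, y = (b, a) if swap else (a, b)   # swap=True rewires the inputs
            out = self.answer(x, y)
            reward = 1.0 if out == target(a, b) else -1.0
            mistakes += reward < 0
            # Move the estimate toward the latest reward (learning rate 0.5).
            self.q[(x, y)][out] += 0.5 * (reward - self.q[(x, y)][out])
        return mistakes

target = lambda a, b: a & (1 - b)    # an asymmetric function: a AND NOT b
net = Autobliss()
learned = net.train(target, trials=100)             # learns the function
relearn = net.train(target, trials=100, swap=True)  # swapped wires: mistakes, then re-adaptation
```

An asymmetric target function is used because swapping the inputs of a
symmetric one (like AND) would produce no mistakes at all; with this choice,
the confident wrong answers right after the swap, followed by recovery, mirror
the adaptation Matt describes.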



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects."  -- Robert Heinlein


