Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Benjamin Johnston


I completed the first draft of a technical paper on consciousness the 
other day.   It is intended for the AGI-09 conference, and it can be 
found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

Hi Richard,

I don't have any comments yet about what you have written, because I'm 
not sure I fully understand what you're trying to say... I hope your 
answers to these questions will help clarify things.


It seems to me that your core argument goes something like this:

1. That there are many concepts for which an introspective analysis can 
only return the concept itself.
2. That this recursion blocks any possible explanation.
3. That consciousness is one of these concepts, because "self" is 
inherently recursive.
4. Therefore, consciousness is explicitly blocked from having any kind of 
explanation.


Is this correct? If not, how have I misinterpreted you?


I have a thought experiment that might help me understand your ideas:

If we have a robot designed according to your molecular model, and we 
then ask the robot "what exactly is the nature of red" or "what is it 
like to experience the subjective essence of red", the robot may analyze 
this concept, ultimately bottoming out on an "incoming signal line".


But what if this robot is intelligent and can study other robots? It 
might then examine other robots and see that when their analysis bottoms 
out on an "incoming signal line", what actually happens is that the 
incoming signal line is activated by electromagnetic energy of a certain 
frequency, and that the object recognition routines identify patterns in 
"signal lines" and that when an object is identified it gets annotated 
with texture and color information from its sensations, and that a 
particular software module injects all that information into the 
foreground memory. It might conclude that the experience of 
"experiencing red" in the other robot is to have sensors inject atoms 
into foreground memory, and it could then explain how the current 
context of that robot's foreground memory interacts with the changing 
sensations (that have been injected into foreground memory) to make that 
experience 'meaningful' to the robot.
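
To make my reading concrete, here is a toy sketch in Python of what I take 
"bottoming out" to mean. Everything in it (the concept table, the "signal 
line" labels) is my own invention for illustration, not anything taken 
from your paper:

    # Toy sketch: introspective analysis that "bottoms out" on raw
    # sensory primitives. All names here are illustrative inventions.
    CONCEPTS = {
        "apple":         {"parts": ["red", "round"]},
        "red":           {"parts": ["signal-line-7"]},
        "round":         {"parts": ["signal-line-3"]},
        "signal-line-7": {"parts": []},   # raw incoming signal line
        "signal-line-3": {"parts": []},   # raw incoming signal line
    }

    def analyze(concept, depth=0):
        """Recursively unpack a concept into its constituents."""
        print("  " * depth + concept)
        parts = CONCEPTS[concept]["parts"]
        if not parts:
            # Analysis can go no further: the concept returns only
            # itself (an unanalyzable sensory primitive).
            print("  " * (depth + 1) + "<bottoms out: raw signal line>")
            return
        for part in parts:
            analyze(part, depth + 1)

    analyze("red")   # unpacks one level, then bottoms out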


What if this robot then turns its inspection abilities onto itself? Can 
it therefore further analyze "red"? How does your theory interpret that 
situation?


-Ben





Re: Why consciousness is hard to define (was Re: [agi] Ethics of computer-based cognitive experimentation)

2008-11-16 Thread Colin Hales

Matt Mahoney wrote:

--- On Fri, 11/14/08, Colin Hales <[EMAIL PROTECTED]> wrote:
  

Try running yourself with empirical results instead of metabelief 
(belief about belief). You'll get someplace, i.e. you'll resolve the 
inconsistencies. When inconsistencies are testably absent, no 
matter how weird the answer, it will deliver maximally informed 
choices. Not facts. Facts will only ever appear differently after 
choices are made. This too is a fact... which I have chosen to make 
choices about. :-) If you fail to resolve your inconsistency then you 
are guaranteeing that your choices are minimally informed.

Fine. By your definition of consciousness, I must be conscious because I can 
see and because I can apply the scientific method, which you didn't precisely 
define, but I assume that means I can do experiments and learn from them.
  
Not quite. The claim is specific: You have visual P-consciousness 
because you can do science evidenced from visual P-consciousness. This 
is the crucial and unique circumstance involved here.


The scientific process: We scientists are obliged to construct and 
deliver abstractions about the natural world that have the status of 
generalisations that operate independently of a particular scientist. 
Yes, learning is involved. But the deliverable is more than just 
learning (the act). The deliverable evidence (which makes it testable) 
is what is learnt: the generalisation, like F=ma and such... the 
'law of nature'. What is learnt must be applied by the agent in a 
completely novel (degenerate I/O) circumstance in which the law of 
nature is *implicitly* encoded in the natural world outside the agent. 
That is what humans do.

But by your definition, a simple modification to autobliss ( 
http://www.mattmahoney.net/autobliss.txt ) would make it conscious. It already 
applies the scientific method. It outputs 3 bits (2 randomly picked inputs to 
an unknown logic gate and a proposed output) and learns the logic function. The 
missing component is vision. But suppose I replace the logic function (a 4 bit 
value specified by the teacher) with a black box with 3 switches and a light 
bulb to indicate whether the proposed output (one of the switches) is right or 
wrong. You also didn't precisely define what constitutes vision, so I assume a 
1 pixel system qualifies.
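
(For concreteness, a minimal sketch of that learning scheme, assuming the 
black box hides a 2-input gate and the bulb reports a single right/wrong 
bit. This illustrates the idea only; it is not the actual autobliss source:)

    # Sketch: learn an unknown 2-input logic gate from a "bulb" that
    # says only whether a proposed output was right. Illustrative only.
    import random

    SECRET_GATE = [0, 1, 1, 0]   # hidden truth table (here: XOR)

    def bulb(a, b, proposed):
        """Black box: lights iff the proposed output is correct."""
        return proposed == SECRET_GATE[2 * a + b]

    learned = [None] * 4         # the learner's truth-table estimate
    while None in learned:
        a, b = random.randint(0, 1), random.randint(0, 1)
        guess = random.randint(0, 1)
        # either way, the bulb's answer reveals the correct entry
        learned[2 * a + b] = guess if bulb(a, b, guess) else 1 - guess

    print(learned)               # converges to the hidden table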

  
The test for consciousness involves the delivery of the abstraction (in 
your case some kind of logic gate) in a completely different context to 
that in which it was acquired. So rework your test so that there is a 
logic gate of the kind 'learnt', but encoded in the position of rocks, 
for example. Then make the agent recognise that the same abstraction 
applies, by some kind of cued behaviour that will only result if the 
abstraction was known. Of course you have to verify that the abstraction 
was not known before the original learning, by initially verifying the 
failure of this stage of the test.
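
(Schematically, as a runnable toy --- every name and the rock encoding 
below are invented for illustration, not part of the test's definition:)

    # Toy two-phase version of the reworked test. The rock encoding
    # and the Agent class are invented purely for illustration.
    SECRET_GATE = [0, 1, 1, 0]            # the hidden gate, as above

    class Agent:
        def __init__(self):
            self.table = {}
        def learn_from_black_box(self):
            # acquisition phase: the bulb's feedback reveals each
            # correct output, with no human involvement
            for a in (0, 1):
                for b in (0, 1):
                    self.table[(a, b)] = SECRET_GATE[2 * a + b]
        def answer(self, a, b):
            return self.table.get((a, b))  # None if never learnt

    def rock_task(agent):
        # transfer phase: the same gate, now encoded in rock
        # positions (even x = 0, odd x = 1)
        x1, x2 = 4, 7
        a, b = x1 % 2, x2 % 2
        return agent.answer(a, b) == SECRET_GATE[2 * a + b]

    agent = Agent()
    assert not rock_task(agent)    # 0. fails before any learning
    agent.learn_from_black_box()   # 1. acquisition phase
    assert rock_task(agent)        # 2. the learnt abstraction transfers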


It also requires all acquisition of the abstraction to occur without any 
human involvement whatever. The nice thing about the test is that we can 
specify that the scientific evidence shall be obvious through perception 
of photon radiation in the visible range (for example). That's all we 
have to specify. If the test subject can do all this autonomously then 
visual experience must have been involved. That is the rationale.


The PCST (P-conscious scientist test) will demand learning in an 
environment that you can never have been exposed to before, that no 
human can be involved in, and the knowledge will be applied to solving a 
completely novel problem that no human involved in the testing will have 
had anything to do with. This is my PCST: a test for what human 
scientists do.


A single pixel will not suffice. Nor will any system that has been told 
what to learn (knowledge = configuration of  logic gates). However... so 
what? The test demands that the test subject learn the same way humans 
do, not what humans actually learn. You can't be trained to be 
successful at the PCST...the test itself is the training. It's what 
humans do. Any system that requires a-priori training will fail. The 
test candidate merely has to be suited to survive in the (a-priori 
unknown) test environment.


If you think 'autobliss' can be conscious (in this case, be claimed to 
have visual P-consciousness) as a result of behaving the way you 
say... then simply submit it to the PCST. If it passes then you'll have a 
scientific claim to the existence of visual P-consciousness. I predict 
that 'autobliss' will fail irretrievably and permanently. Indeed it 
won't even be able to begin the test. The test is for completely 
autonomous, embodied agency.


If you can get the entity to do authentic original science on the 
unknown in a 'double blind' fashion you have a really good claim to the 
P-consciousness of the entity in the perceptual mode in which the 
science operated.


You don't have to believe my solution to conscious

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Trent Waddington
Richard,

  After reading your paper and contemplating the implications, I
believe you have done a good job of describing the intuitive notion of
"consciousness" that many lay-people use the word to refer to.  I
don't think your explanation is fleshed out enough for those
lay-people, but it's certainly sufficient for most of the people on this
list.  I would recommend that anyone who hasn't read the paper, and
has an interest in this whole consciousness business, give it a read.

I especially liked the bit where you describe how the model of self
can't be defined in terms of anything else, as it is inherently
recursive.  I wonder whether the dynamic updating of the model of self
may well be exactly the subjective experience of "consciousness" that
people describe.  If so, the notion of a p-zombie is not impossible,
as you suggest in your conclusions, but simply an AGI without a
self-model.

Finally, the introduction says:

  "Given  the  strength  of  feeling on  these matters - for  example,
 the widespread belief  that AGIs  would  be  dangerous  because,  as
conscious  beings, they would inevitably rebel against their lack of
freedom - it  is  incumbent upon  the AGI  community  to  resolve
these questions  as  soon  as  possible."

I was really looking forward to seeing you address this widespread
belief, but unfortunately you declined.  Seems a bit of a tease.

Trent




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Ben Goertzel
> Sorry to be negative, but no, my proposal is not in any way a modernization
> of Peirce's metaphysical analysis of awareness.
>
>

Could you elaborate the difference?  It seems very similar to me.   You're
saying that consciousness has to do with the bottoming-out of mental
hierarchies in raw percepts that are unanalyzable by the mind ... and
Peirce's Firsts are precisely raw percepts that are unanalyzable by the
mind...


***
The standard meaning of Hard Problem issues was described very well by
Chalmers, and I am addressing the hard problem of consciousness, not the
other problems.
***

Hmmm... I don't really understand why you think your argument is a
solution to the hard problem... It seems like you explicitly acknowledge
in your paper that it's *not*, actually... It's more like a philosophical
argument as to why the hard problem is unsolvable, IMO.


ben g





RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread John G. Rose
> From: Richard Loosemore [mailto:[EMAIL PROTECTED]
> 
> Three things.
> 
> 
> First, David Chalmers is considered one of the world's foremost
> researchers in the consciousness field (he is certainly now the most
> celebrated).  He has read the argument presented in my paper, and he
> has
> discussed it with me.  He understood all of it, and he does not share
> any of your concerns, nor anything remotely like your concerns.  He had
> one single reservation, on a technical point, but when I explained my
> answer, he thought it interesting and novel, and possibly quite valid.
> 
> Second, the remainder of your comments below are not coherent enough to
> be answerable, and it is not my job to walk you through the basics of
> this field.
> 
> Third, about your digression:  gravity does not "escape" from black
> holes, because gravity is just the curvature of spacetime.  The other
> things that cannot escape from black holes are not "forces".
> 
> I will not be replying to any further messages from you because you are
> wasting my time.
> 
> 

I read this paper several times and still have trouble holding the model
that you describe in my head; it fades quickly and then there is just a
memory of it (recursive ADD?). I'm not up on the latest consciousness
research but still somewhat understand what is going on there. Your paper is
a nice, terse description, but getting others to understand the highlighted
entity that you are trying to describe may be easier with more
diagrams. When I kind of got it for a second it did appear quantitative,
like something mathematically describable. I find it hard to believe, though,
that others have not put it this way. I mean, doesn't Hofstadter talk about
this in his books, in an unacademic fashion?
 
Also, Edward's critique is very well expressed and thoughtful. Just blowing
him off like that is undeserved.

John





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Trent Waddington
On Mon, Nov 17, 2008 at 10:47 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> I will not be replying to any further messages from you because you are
> wasting my time.

Welcome to the Internet.

Trent




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore


Three things.


First, David Chalmers is considered one of the world's foremost 
researchers in the consciousness field (he is certainly now the most 
celebrated).  He has read the argument presented in my paper, and he has 
discussed it with me.  He understood all of it, and he does not share 
any of your concerns, nor anything remotely like your concerns.  He had 
one single reservation, on a technical point, but when I explained my 
answer, he thought it interesting and novel, and possibly quite valid.


Second, the remainder of your comments below are not coherent enough to 
be answerable, and it is not my job to walk you through the basics of 
this field.


Third, about your digression:  gravity does not "escape" from black 
holes, because gravity is just the curvature of spacetime.  The other 
things that cannot escape from black holes are not "forces".


I will not be replying to any further messages from you because you are 
wasting my time.




Richard Loosemore





Ed Porter wrote:

Richard,

 

Thank you for your reply. 

 

It implies your article was not as clearly worded as I would have liked 
it to have been, given the interpretation you say it is limited to.  
When you said


 

"subjective phenomena associated with consciousness ... have the special 
status of being unanalyzable." (last paragraph in the first column of 
page 4 of your paper.) 



  you apparently meant something much more narrow, such as

 

"subjective phenomena associated with consciousness [of the type that 
cannot be communicated between people --- and/or --- of the type that 
are unanalyzable] ... have the special status of being unanalyzable."


 

If you always intended that all your statements about the limited 
ability to analyze conscious phenomena be so limited --- then you were 
right --- I misunderstood your article, at least partially. 

 

We could argue about whether a reader should have understood this narrow 
interpretation.  But it should be noted that Wikipedia, that unquestionable 
font of human knowledge, states “qualia” has multiple definitions, only 
some of which match the meaning you claim “everyone agrees upon”, 
i.e., subjective experiences that “do not involve anything that can be 
compared across individuals.” 

 

And in Wikipedia’s description of Chalmers’ hard problem of 
consciousness, it lists questions that arguably would be covered by my 
interpretation.


 

It is your paper, and it is up to you to decide how you define things, 
and how clearly you make your definitions known.  But even given your 
narrow interpretation of conscious phenomena in your paper, I think 
there are important additional statements that can be made concerning it.


 

First, given some of the definitions of Chalmers’ hard problem, it is not 
clear how much your definition adds.


 

Second, and more importantly, I do not think there is a totally clear 
distinction between Chalmers’ “hard problem of consciousness” and what 
he classifies as the easy problems of consciousness.  For example, the 
first two paragraphs on the second page of your paper seem to discuss 
the unanalyzable nature of the hard problem.  This includes the 
following statement:


 

“…for every “objective” definition that has ever been proposed [for the 
hard problem], it seems, someone has countered that the real mystery has 
been side-stepped by the definition.”


 

If you define the hard problem of consciousness as being those aspects 
of consciousness that cannot be physically explained, it is like the 
hard problems concerning physical reality.  It would seem that many key 
aspects of physical reality are equally


 

“intrinsically beyond the reach of objective definition, while at the 
same time being as deserving of explanation as anything else in the 
universe” (Second paragraph on page 2 of your paper).


 

Over time we have explained more and more about concepts at the heart of 
physical reality such as time, space, and existence, but always some mystery 
remains.  I think the same will be true about consciousness.  In the 
coming decades we will be able to explain more and more about 
consciousness, and what is covered by the “hard problem” (i.e., that 
which is unexplainable) will shrink, but there will always remain some 
mystery.  I believe that within two to six decades we will


 

--be able to examine the physical manifestations of aspects of qualia 
that cannot now be communicated between people (and thus now fit 
within your definition of qualia);


 

--have an explanation for most of the major types of subjectively 
perceived properties and behaviors of consciousness; and


 

--be able to posit reasonable theories about why we experience 
consciousness as a sense of awareness and how the various properties of 
that sense of awareness are created.


 

But I believe there will always remain some mysteries, such as why there 
is any existence of anything, why there is any separation of anything, 
why there is any time, etc.

RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Ed Porter
Richard,

 

Thank you for your reply.  

 

It implies your article was not as clearly worded as I would have liked it
to have been, given the interpretation you say it is limited to.  When you
said

 

"subjective phenomena associated with consciousness ... have the special
status of being unanalyzable." (last paragraph in the first column of page 4
of your paper.)  


you apparently meant something much more narrow, such as 


 

"subjective phenomena associated with consciousness [of the type that cannot
be communicated between people --- and/or --- of the type that are
unanalyzable] ... have the special status of being unanalyzable." 

 

If you always intended that all your statements about the limited ability to
analyze conscious phenomena be so limited --- then you were right --- I
misunderstood your article, at least partially.  

 

We could argue about whether a reader should have understood this narrow
interpretation.  But it should be noted that Wikipedia, that unquestionable
font of human knowledge, states "qualia" has multiple definitions, only some
of which match the meaning you claim "everyone agrees upon", i.e., subjective
experiences that "do not involve anything that can be compared across
individuals."  

 

And in Wikipedia's description of Chalmers' hard problem of consciousness,
it lists questions that arguably would be covered by my interpretation.

 

It is your paper, and it is up to you to decide how you define things, and
how clearly you make your definitions known.  But even given your narrow
interpretation of conscious phenomena in your paper, I think there are
important additional statements that can be made concerning it.

 

First, given some of the definitions of Chalmers' hard problem, it is not clear
how much your definition adds.

 

Second, and more importantly, I do not think there is a totally clear
distinction between Chalmers' "hard problem of consciousness" and what he
classifies as the easy problems of consciousness.  For example, the first
two paragraphs on the second page of your paper seem to discuss the
unanalyzable nature of the hard problem.  This includes the following
statement: 

 

".for every "objective" definition that has ever been proposed [for the hard
problem], it seems, someone has countered that the real mystery has been
side-stepped by the definition."

 

If you define the hard problem of consciousness as being those aspects of
consciousness that cannot be physically explained, it is like the hard
problems concerning physical reality.  It would seem that many key aspects
of physical reality are equally 

 

"intrinsically beyond the reach of objective definition, while at the same
time being as deserving of explanation as anything else in the universe"
(Second paragraph on page 2 of your paper).

 

Over time we have explained more and more about concepts at the heart of
physical reality such as time, space, and existence, but always some mystery
remains.  I think the same will be true about consciousness.  In the coming
decades we will be able to explain more and more about consciousness, and
what is covered by the "hard problem" (i.e., that which is unexplainable)
will shrink, but there will always remain some mystery.  I believe that
within two to six decades we will

 

--be able to examine the physical manifestations of aspects of qualia that
cannot now be communicated between people (and thus now fit within your
definition of qualia); 

 

--have an explanation for most of the major types of subjectively perceived
properties and behaviors of consciousness; and 

 

--be able to posit reasonable theories about why we experience consciousness
as a sense of awareness and how the various properties of that sense of
awareness are created.

 

But I believe there will always remain some mysteries, such as why there is
any existence of anything, why there is any separation of anything, why
there is any time, etc.  In fifty to one hundred years the hard problem of
consciousness may well just be viewed as one of the other hard problems of
understanding reality.

 

My belief is that consciousness is inherently no more mysterious than any
other part of reality, given the technological advances that will occur in
this century.  I believe human consciousness is an extremely complex, dynamic,
self-interacting, dynamically self-focus-selecting computation having
trillions of channels connected in a small-world network.  And each human
consciousness is in, and thus aware of, its own computation, just as a
physical object located at a certain point in space is affected by a set of
physical forces determined as a function of its location.   The only
difference is that different human consciousnesses seem to be largely
separated from each other, whereas we believe the computation of the
observable universe, other than what is in black holes, is continuously
connected down to a granularity approaching the quantum level.  

 

(Total digression, but how does gravity escape from a black hole?)

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Mike Tintner wrote:
Richard: The precise definition of "qualia", which everyone agrees on, 
and which you are flatly contradicting here, is that these things do not 
involve anything that can be compared across individuals.

Actually, we don't do a bad job of comparing our emotions/sensations - 
not remotely perfect, but not remotely as bad as the above philosophy 
would suggest. We do share each other's pains and joys to a remarkable 
extent. That's because our emotions are very much materially based and 
we share basically the same bodies and nervous systems.


The hard problem of consciousness is primarily about *not* 
qualia/emotions/sensations but *sentience*  - not about what a red bus 
or a warm hand stroking your face feel like to you, but about your 
capacity to feel anything at all - about your capacity not for 
particular types of emotions/sensations, but for emotion generally.


Sentience resides to a great extent in the nervous system, and whatever 
proto-nervous system preceded it in evolution. When we solve how that 
works we may solve the hard problem. Unless you believe that everything, 
including inanimate objects, feels, the capacity of sentience 
clearly evolved and has an explanation.


(Bear in mind that AGI-ers' approaches to the problem of consciousness 
are bound to be limited by their disembodied and anti-evolutionary 
prejudices).


Mike

"Hard Problem" is a technical term.

It was invented by David Chalmers, and it has a very specific meaning.

See the Chalmers reference in my paper.




Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Ben Goertzel wrote:



Ed / Richard,

It seems to me that Richard's proposal is in large part a modernization 
of Peirce's metaphysical analysis of awareness.


Peirce introduced foundational metaphysical categories of First, Second 
and Third ... where First is defined as raw unanalyzable awareness/being ...


http://www.helsinki.fi/science/commens/terms/firstness.html

To me, Richard's analysis sounds a lot like Peirce's statement that 
consciousness is First...


And Ed's refutation sounds like a rejection of First as a meaningful 
category, and an attempt to redirect the conversation to the level of 
Third...


Sorry to be negative, but no, my proposal is not in any way a 
modernization of Peirce's metaphysical analysis of awareness.


The standard meaning of Hard Problem issues was described very well by 
Chalmers, and I am addressing the hard problem of consciousness, not 
the other problems.


Ed is talking about consciousness in a way that plainly wanders back and 
forth between Hard Problem issues and Easy Problem, and as such he has 
misunderstood the entirety of what I wrote in the paper.


It might be arguable that my position relates to Feigl, but even that is 
significantly different.






Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Mike Tintner
Richard: The precise definition of "qualia", which everyone agrees on, and 
which you are flatly contradicting here, is that these things do not involve 
anything that can be compared across individuals.

Actually, we don't do a bad job of comparing our emotions/sensations - not 
remotely perfect, but not remotely as bad as the above philosophy would 
suggest. We do share each other's pains and joys to a remarkable extent. 
That's because our emotions are very much materially based and we share 
basically the same bodies and nervous systems.


The hard problem of consciousness is primarily about *not* 
qualia/emotions/sensations but *sentience*  - not about what a red bus or a 
warm hand stroking your face feel like to you, but about your capacity to 
feel anything at all - about your capacity not for particular types of 
emotions/sensations, but for emotion generally.


Sentience resides to a great extent in the nervous system, and whatever 
proto-nervous system preceded it in evolution. When we solve how that works 
we may solve the hard problem. Unless you believe that everything, including 
inanimate objects, feels, the capacity of sentience clearly evolved and 
has an explanation.


(Bear in mind that AGI-ers' approaches to the problem of consciousness are 
bound to be limited by their disembodied and anti-evolutionary prejudices).








Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Ben Goertzel
Ed / Richard,

It seems to me that Richard's proposal is in large part a modernization of
Peirce's metaphysical analysis of awareness.

Peirce introduced foundational metaphysical categories of First, Second and
Third ... where First is defined as raw unanalyzable awareness/being ...

http://www.helsinki.fi/science/commens/terms/firstness.html

To me, Richard's analysis sounds a lot like Peirce's statement that
consciousness is First...

And Ed's refutation sounds like a rejection of First as a meaningful
category, and an attempt to redirect the conversation to the level of
Third...

-- Ben G



On Sun, Nov 16, 2008 at 7:04 PM, Richard Loosemore <[EMAIL PROTECTED]>wrote:

> Ed Porter wrote:
>
>> Richard,
>>
>> You have provided no basis for your argument that I have misunderstood
>> your paper and the literature upon which it is based.
>>
>> [snip]
>>
>> My position is that we can actually describe a fairly large number of
>> characteristics of our subjective experience of consciousness that most other
>> intelligent people agree with.  Although we cannot know that others
>> experience the color red exactly the same way we do, we can determine that
>> there are multiple shared describable characteristics that most people claim
>> to have with regard to their subjective experiences of the color red.
>>
>
> This is what I meant when I said that you had completely misunderstood both
> my paper and the background literature:  the statement in the above
> paragraph could only be written by a person who does not understand the
> distinction between the "Hard Problem" of consciousness (this being David
> Chalmers' term for it) and the "Easy" problems.
>
> The precise definition of "qualia", which everyone agrees on, and which you
> are flatly contradicting here, is that these things do not involve anything
> that can be compared across individuals.
>
> Since this is an utterly fundamental concept, if you do not get this then it
> is almost impossible to discuss the topic.
>
> Matt just tried to explain it to you.  You did not get it even then.
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects."  -- Robert Heinlein





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Mark Waser
I think the reason that the hard question is interesting at all is that 
it would presumably be OK to torture a zombie because it doesn't actually 
experience pain, even though it would react exactly like a human being 
tortured. That's an ethical question. Ethics is a belief system that 
exists in our minds about what we should or should not do. There is no 
objective experiment you can do that will tell you whether any act, such 
as inflicting pain on a human, animal, or machine, is ethical or not. The 
only thing you can measure is belief, for example, by taking a poll.


What is the point to ethics?  The reason why you can't do objective 
experiments is because *YOU* don't have a grounded concept of ethics.  The 
second that you ground your concepts in effects that can be seen in "the 
real world", there are numerous possible experiments.


The same is true of consciousness.  The hard problem of consciousness is 
hard because the question is ungrounded.  Define all of the arguments in 
terms of things that appear and matter in the real world and the question 
goes away.  It's only because you invent ungrounded unprovable distinctions 
that the so-called hard problem appears.


Torturing a p-zombie is unethical because whether it feels pain or not is 
100% irrelevant in "the real world".  If it 100% acts as if it feels pain, 
then for all purposes that matter it does feel pain.  Why invent this 
mystical situation where it doesn't feel pain yet acts as if it does?


Richard's paper attempts to solve the hard problem by grounding some of the 
silliness.  It's the best possible effort short of just ignoring the 
silliness and going on to something else that is actually relevant to the 
real world.


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, November 15, 2008 10:02 PM
Subject: RE: [agi] A paper that actually does solve the problem of 
consciousness



--- On Sat, 11/15/08, Ed Porter <[EMAIL PROTECTED]> wrote:

With regard to the second notion, that conscious phenomena are not 
subject to scientific explanation, there is extensive evidence to the 
contrary. The prescient psychological writings of William James, and 
Dr. Alexander Luria's famous studies of the effects of variously located 
bullet wounds on the minds of Russian soldiers after World War II, both 
illustrate that human consciousness can be scientifically studied. The 
effects of various drugs on consciousness have been scientifically 
studied.


Richard's paper is only about the "hard" question of consciousness, that 
which distinguishes you from a P-zombie, not the easy question about mental 
states that distinguish between being awake or asleep.


I think the reason that the hard question is interesting at all is that it 
would presumably be OK to torture a zombie because it doesn't actually 
experience pain, even though it would react exactly like a human being 
tortured. That's an ethical question. Ethics is a belief system that exists 
in our minds about what we should or should not do. There is no objective 
experiment you can do that will tell you whether any act, such as inflicting 
pain on a human, animal, or machine, is ethical or not. The only thing you 
can measure is belief, for example, by taking a poll.


-- Matt Mahoney, [EMAIL PROTECTED]










Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Ed Porter wrote:

Richard,

You have provided no basis for your argument that I have misunderstood 
your paper and the literature upon which it is based.


[snip]

My position is that we can actually describe a fairly large number of 
characteristics of our subjective experience of consciousness that most 
other intelligent people agree with.  Although we cannot know that 
others experience the color red exactly the same way we do, we can 
determine that there are multiple shared describable characteristics 
that most people claim to have with regard to their subjective 
experiences of the color red.


This is what I meant when I said that you had completely misunderstood 
both my paper and the background literature:  the statement in the above 
paragraph could only be written by a person who does not understand the 
distinction between the "Hard Problem" of consciousness (this being 
David Chalmers' term for it) and the "Easy" problems.


The precise definition of "qualia", which everyone agrees on, and which 
you are flatly contradicting here, is that these things do not involve 
anything that can be compared across individuals.


Since this is an utterly fundamental concept, if you do not get this then 
it is almost impossible to discuss the topic.


Matt just tried to explain it to you.  You did not get it even then.




Richard Loosemore


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Sat, 11/15/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Matt Mahoney wrote:

--- On Sat, 11/15/08, Richard Loosemore <[EMAIL PROTECTED]>
wrote:


This is equivalent to your prediction #2 where connecting the
output of neurons that respond to the sound of a cello to the
input of neurons that respond to red would cause a cello to
sound red. We should expect the effect to be temporary.

I'm not sure how this demonstrates consciousness. How do you
test that the subject actually experiences redness at the
sound of a cello, rather than just behaving as if
experiencing redness, for example, claiming to hear red?

You misunderstand the experiment in a very interesting way!

This experiment has to be done on the *skeptic* herself!

The prediction is that if *you* get your brain rewired, *you*
will experience this.

How do you know what I experience, as opposed to what I claim to
experience? That is exactly the question you started with, so you
haven't gotten anywhere. I don't need proof that I experience
things. I already have that belief programmed into my brain.

Huh?

Now what are we talking about... I am confused:  I was talking
about proving my prediction.  I simply replied to your doubt about
whether a "subject" would be experiencing the predicted effects, or
just producing language consistent with it.  I gave you a solution
by pointing out that anyone who had an interest in the prediction
could themselves join in and be a subject.  That seemed to answer
your original question.


You are confusing truth and belief. I am not asking you to make me
believe that consciousness (that which distinguishes you from a
philosophical zombie) exists. I already believe that. I am asking you
to prove it. You haven't done that. I don't believe you can prove the
existence of anything that is both detectable and not detectable.


You are stuck in Level 0.

I showed something a great deal more sophisticated.  In fact, I 
explicitly agreed with you on a Level 0 version of what you just said: 
I actually said in the paper that I (and anyone else) cannot explain 
these phenomena qua the (Level 0) things that they appear to be.


But I went far beyond that:  I explained why people have difficulty 
defining these terms, and I explained a self-consistent understanding of 
the nature of consciousness that involves it being classified as a novel 
type of thing.


You cannot define it properly.

I can explain why you cannot define it properly.

I can both define and explain it, and part of that explanation is that 
the very nature of "explanation" is bound up in the solution.


But instead of understanding that the nature of "explanation" has to 
change to deal with the problem, you remain stuck with the old, broken 
idea of explanation, and keep trying to beat the argument with it!




Richard Loosemore




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Ed Porter
Matt, 

 

Although Richard's paper places considerable focus on the zombie/non-zombie
distinction, its pronouncements do not appear to be so limited.  For
example, its discussion of the analysis of qualia bottoming out is not so
limited, since presumably qualia and their associated conscious experience
only occur in non-zombies.  

 

The paper states in the last sentence of its section on page 5 entitled
"Implications" that 

 

"...we can never say exactly what the phenomena of consciousness are, in the
way we give scientific explanation for other things."  

 

As I said in my prior post, this is one of the major points with which I
disagree, and it does not seem to be limited to the zombie/non-zombie
distinction, since all the zombie/non-zombie distinction has to do is
provide a basis for distinguishing between zombies and non-zombies; it has
no relevance to what the phenomena of consciousness are beyond that.

 

I disagree with the above quote because, although our current technical
capabilities limit the extent to which we can make explanations about the
phenomena of consciousness, I believe we already can give initial
explanations for many aspects of consciousness, and I believe that within
the next 20 to 40 years we will be able to give much greater explanations.  

 

I admit that currently there are problems in making the zombie/non-zombie
distinction.  But this same limitation arguably applies to making the
zombie/non-zombie distinction for humans as well as AGIs.  

 

Based on my own subjective experience, I believe I have a consciousness, and
as Richard points out, it is reasonable to consider that subjective experience
as real as anything else; some would say even more real than anything else.
Since I assume other humans have brainware similar to my own --- and since
I observe outward manifestations of substantial similarities between the way
the minds and emotions of other humans appear to work and the way my mind
appears to me to work --- I assume most other humans are not zombies.  

 

But after serious brain damage, we are told by doctors such as Antonio
Damasio, humans can become zombies.  And we have to face medical and moral
decisions about when to pull the plug on such humans, as in the famous case
of Terri Schiavo.  The current medical and political community bases its
zombie/non-zombie decisions for humans on a partial understanding of
what "human" consciousness is, and on current measurements that can be made
indicating whether or not such a consciousness exists.

 

When it comes to determining whether machines have consciousness of a type
that warrants better treatment than Terri Schiavo received, such decisions
will probably be based on the advanced understanding of consciousness that
we will develop in the coming decades.

 

Like Richard, I do not believe the attributes of human consciousness we hold
so dear are a mere artifact.  But I don't put much faith in his definition
of consciousness as the ability to sense that something is real even though
analysis of it bottoms out.

 

I believe the sense of awareness humans call consciousness is essential to
the power of the computation we call the human mind.  I believe a human-like
consciousness arises from the massively self-aware computation --- having an
internal bandwidth of over 1 million DVD channels/second --- inherent in a
massively parallel spreading-activation system like the human brain, when a
proper mechanism is available for rapidly and successively selecting certain
items for broad activation in a relatively coherent manner, based on the
competitive relevance or match to current goals or drives of the system of
competing assemblies of activation, and/or based on the current importance
and valence of the emotional associations of such assemblies.  

 

The activations that are most conscious are sufficiently broad that they
dynamically activate experiential memories and patterns representing the
grounded meaning of the conscious concept.  The effects of prior activations
on the brain state tend to favor the activation of those aspects of a
currently conscious concept's meaning that are most relevant to the current
context.  This contextually relevant grounding, and the massively parallel
dynamic state of activation and its retention of various degrees and
patterns of activation over time, allow the consciousness to have a sense
of being aware of many things at once, and of extending between points in
time and space.
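
(To make the picture concrete: a toy sketch in Python of the kind of
dynamics I mean --- a few spreading-activation steps followed by a
competitive selection of one assembly for broad, "conscious" activation.
Every node, weight, and valence below is invented purely for illustration:)

    # Toy spreading activation + competitive selection. All names,
    # weights, and valences are invented for illustration only.
    NET = {                       # node -> [(neighbour, weight), ...]
        "red":    [("blood", 0.6), ("apple", 0.4)],
        "blood":  [("danger", 0.8)],
        "apple":  [("food", 0.7)],
        "danger": [],
        "food":   [],
    }
    VALENCE = {"danger": 0.9, "food": 0.5}   # emotional associations

    act = {n: 0.0 for n in NET}
    act["red"] = 1.0              # a sensory input activates "red"

    for _ in range(3):            # a few parallel spreading steps
        nxt = dict(act)
        for node, a in act.items():
            for nbr, w in NET[node]:
                nxt[nbr] += a * w # activation flows along the links
        act = nxt

    # competitive selection: the assembly whose activation, weighted
    # by emotional valence, wins is broadly activated ("conscious")
    winner = max(act, key=lambda n: act[n] * VALENCE.get(n, 0.1))
    print(winner, round(act[winner], 2))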

 

People have asked for centuries: what is it inside our mind that seems to
watch the show provided by our senses?  The answer is the tens of billions
of neurons and trillions of synapses that respond to the flood of sensory
information, store selected portions of it in short-, mid-, and then
long-term memory, and weave a story out of it which is labeled with
recognized patterns, and patterns of explanation.

 

Thus, I believe that the conscious/subconscious theater of the mind, with
its reactive audience of billions of neur

RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread John G. Rose
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
> 
> John G. Rose wrote:
> >
> > Um... this is a model of consciousness. One way of looking at it.
> > Whether or not it is comprehensive enough, not sure, this irreducible
> > indeterminacy. But after reading the paper a couple times I get what
> you are
> > trying to describe. It's part of an essence of consciousness but not
> sure if
> > it enough.
> 
> But did you notice that the paper argued that if you think on the base
> level, you would have to have that feeling that, as you put it,
> "...It's
> part of an essence of consciousness but not sure if it enough."?
> 
> The question is:  does the explanation seem consistent with an
> explanation of your feeling that it might not be enough of an
> explanation?
> 
> 

I don't know if this recursive thing that you point to is THE fundamental
element of consciousness, the bottoming-out effect. It may be more of a
cognitive component... I will have to think on it some more. But it is some
sort of model to look at, and the paper that you wrote is a nice, terse way
of describing it.

John





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]

I completed the first draft of a technical paper on consciousness the
other day.   It is intended for the AGI-09 conference, and it can be
found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf




Um... this is a model of consciousness. One way of looking at it.
Whether or not it is comprehensive enough, not sure, this irreducible
indeterminacy. But after reading the paper a couple times I get what you are
trying to describe. It's part of an essence of consciousness but not sure if
it enough.


But did you notice that the paper argued that if you think on the base 
level, you would have to have that feeling that, as you put it, "...It's 
part of an essence of consciousness but not sure if it enough."?


The question is:  does the explanation seem consistent with an 
explanation of your feeling that it might not be enough of an explanation?






Richard Loosemore




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread John G. Rose
> From: Richard Loosemore [mailto:[EMAIL PROTECTED]
> 
> I completed the first draft of a technical paper on consciousness the
> other day.   It is intended for the AGI-09 conference, and it can be
> found at:
> 
> http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
> 


Um... this is a model of consciousness. One way of looking at it.
Whether or not it is comprehensive enough, not sure, this irreducible
indeterminacy. But after reading the paper a couple times I get what you are
trying to describe. It's part of an essence of consciousness but not sure if
it enough.

Kind of reminds me of Curly's view of consciousness - "I'm trying to think
but nothing happens!"

John





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore


This commentary represents a fundamental misunderstanding of both the 
paper I wrote and the background literature on the hard problem of 
consciousness.




Richard Loosemore



Ed Porter wrote:
  I respect the amount of thought that went into Richard’s paper 
“Consciousness in Human and Machine: A Theory and Some Falsifiable 
Predictions” --- but I do not think it provides a good explanation of 
consciousness. 

 

  It seems to spend more time explaining the limitations on what we 
can know about consciousness than explaining consciousness, itself.  
What little the paper says about consciousness can be summed up roughly 
as follows: that consciousness is created by a system that can analyze 
and seek explanations from some, presumably experientially-learned, 
knowledgebase, based on associations between nodes in that 
knowledgebase, and that it can determine when it cannot describe a given 
node further, in terms of relations to other nodes, but nevertheless 
senses the given node is real (such as the way it is difficult for a 
human to explain what it is like to sense the color red).


 

  First, I disagree with the paper’s allegation that “analysis” of 
conscious phenomena necessarily “bottoms out” more than analyses of many 
other aspects of reality.  Second, I disagree that conscious phenomena 
are beyond any scientific explanation. 

 

  With regard to the first, I feel our minds contain substantial 
memories of various conscious states, and thus there is actually 
substantial experiential grounding of many aspects of consciousness 
recorded in our brains.  This is particularly true for the consciousness 
of emotional states (for example, brain scans on very young infants 
indicate a high percent of their mental activity is in emotional centers 
of the brain).  I developed many of my concepts of how to design an AGI 
based on reading brain science and performing introspection into my own 
conscious and subconscious thought processes, and I found it quite easy 
to draw many generalities from the behavior of my own conscious mind.  
Since I view the subconscious to be at the same time both a staging area 
for, and a reactive audience for, conscious thoughts, I think one has to 
view the subconscious and consciousness as part of a functioning whole. 

 

  When I think of the color red, I don’t bottom out.  Instead I have 
many associations with my experiences of redness that provide it with 
deep grounding.  As with the description of any other concept, it is 
hard to explain how I experience red to others, other than through 
experiences we share relating to that concept.  This would include 
things we see in common to be red, or perhaps common emotional 
experiences to seeing the red of blood that has been spilled in 
violence, or the way the sensation of red seems to fill a 2 dimensional 
portion of an image that we perceive as a two dimensional distribution 
of differently colored areas.   But I can communicate within my own mind 
across time what it is like to sense red, such as in dreams when my eyes 
are closed.  Yes, the experience of sensing red does not decompose into 
parts, such as the way the sensed image of a human body can be 
de-composed into the seeing of subordinate parts, but that does not 
necessarily mean that my sensing of something that is a certain color of 
red, is somehow more mysterious than my sensing of seeing a human body.


 

  With regard to the second notion, that conscious phenomena are not 
subject to scientific explanation, there is extensive evidence to the 
contrary.  The prescient psychological writings of William James, and 
Dr. Alexander Luria’s famous studies of the effects of variously located 
bullet wounds on the minds of Russian soldiers after World War II, both 
illustrate that human consciousness can be scientifically studied.  The 
effects of various drugs on consciousness have been scientifically 
studied.  Multiple experiments have shown that the presence or absence 
of synchrony between neural firings in various parts of the brain has 
been strongly correlated with human subjects reporting the presence or 
absence, respectively, of conscious experience of various thoughts or 
sensory inputs.  Multiple studies have shown that electrode stimulation 
to different parts of the brain tend to make the human consciousness 
aware of different thoughts.  Our own personal experiences with our own 
individual consciousnesses, the current scientific levels of knowledge 
about commonly reported conscious experiences, and increasingly more 
sophisticated ways to correlate objectively observable brain states with 
various reports of human conscious experience, all indicate that 
consciousness already is subject to scientific explanation.  In the 
future, particularly with the advent of much more sophisticated brain 
scanning tools, and with the development of AGI, consciousness will be 
much more subject to scientific explanation.