Ben,

If you place the limitations on what is part of the hard problem that
Richard does, most of what you consider part of the hard problem would
probably cease to be part of it.  In one argument he eliminated things
relating to lateral or upward associative connections from being
considered part of the hard problem of consciousness.  That would
eliminate the majority of sources of grounding from any notion of
consciousness.

Like you, I tend to think that all of reality is conscious, but I think
there are vastly different degrees and types of consciousness, and I
think there are many meaningful types of consciousness that humans have
that most of reality does not have.

When I was in college and LSD was the rage, one of the main goals of the
heavy-duty heads was "ego loss," which was to achieve a sense of cosmic
oneness with all of the universe.  It was commonly stated that 1000
micrograms was the ticket to "ego loss."  I never went there.  Nor have
I ever achieved cosmic oneness through meditation, although I have
achieved temporary (say fifteen or thirty seconds) feelings of deep,
peaceful bliss.

Perhaps you have been braver (acid-wise), or luckier or more disciplined
(meditation-wise), and have achieved a sense of oneness with the cosmic
consciousness.  If so, I tip my hat (and give a Colbert wag of the
finger) to you.

Ed Porter


-----Original Message-----
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 20, 2008 5:46 PM
To: agi@v2.listbox.com
Subject: Re: [agi] A paper that actually does solve the problem of
consciousness

Hmmm...

I don't agree w/ you that the "hard problem" of consciousness is
unimportant or non-critical in a philosophical sense.  Far from it.

However, from the point of view of this list, I really don't think it
needs to be solved (whatever that might mean) in order to build AGI.

Of course, I think that because I think the hard problem of
consciousness is actually easy: I'm a panpsychist ... I think
everything is conscious, and different kinds of structures just focus
and amplify this universal consciousness in different ways...

Interestingly, this panpsychist perspective is seen as obviously right
by most folks deeply involved with meditation or yoga whom I've talked
to, and seen as obviously wrong by most scientists I talk to...

-- Ben G

On Thu, Nov 20, 2008 at 5:26 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Richard,
>
>
>
> Thank you for your reply.
>
>
>
> I started to write a point-by-point response to your reply, copied
> below, but after 45 minutes I said "stop".  As interesting as it is,
> from a philosophical and argumentative writing standpoint, to play
> whack-a-mole with your constantly shifting and often contradictory
> arguments --- right now, I have much more pressing things to do.
>
> And I think I have already stated many of my positions on the subject
> of this thread sufficiently clearly that intelligent people who have a
> little imagination and really want to understand them can do so.  Since
> few others besides you have responded to my posts, I don't think there
> is any community demand that I spend further time on such replies.
>
>
>
> What little I can add to what I have already said is that I basically
> think the hard problem/easy problem dichotomy is largely, although not
> totally, pointless.
>
>
>
> I do not think the hard problem is central to understanding
> consciousness, because so much of consciousness is excluded from being
> part of the hard problem.  It is excluded either because it can be
> described verbally by introspection by the mind itself, or because it
> affects external behavior, and thus, at least according to Wikipedia's
> definition of p-consciousness, is part of the easy problem.
>
>
>
> It should be noted that not affecting external behavior excludes one
> hell of a lot of consciousness, because emotions, which clearly affect
> external behavior, are so closely associated with much of our sensing
> of experience.
>
>
>
> Thus, it seems a large part of what we humans consider to be our
> subjective sense of experience of consciousness is rejected by "hard
> problem" purists as being part of the easy problem.
>
>
>
> Richard, you in particular seem to be much more of a hard-problem
> purist than those who wrote the Wikipedia definition of
> p-consciousness.  This is because, in your responses to me, you have
> even excluded from the hard problem any lateral or higher-level
> associations that one of your bottom-level red detector nodes might
> have.  This, for example, would arguably exclude from the
> p-consciousness of the color red the associations between the lowest
> level, local red-sensing nodes that are necessary so the activation of
> such nodes can be recognized as a common color "red" no matter where
> they occur in different parts of the visual field.
>
>
>
> Thus, according to such a definition, the qualia for red would have to
> be different for each location of V1 in which red is sensed --- even
> when different portions of V1 get mapped into the same portions of the
> semi-stationary representation your brain builds out of stationary
> surroundings as your eyes saccade and pan across them.  Thus, your
> concept of the qualia for the color red does not cover a unified color
> red, and necessarily includes thousands of separate red qualia, each
> associated with a different portion of V1.
>
>
>
> Aspects of consciousness that (a) cannot be verbally described by
> introspection, (b) have no effect on behavior, and (c) cannot involve
> any associations with the activation of other nodes (an exclusion you,
> Richard, seem to have added to Wikipedia's description of
> p-consciousness) define the hard problem so narrowly as to make it of
> relatively little or no importance.  It certainly is not the central
> question of consciousness, because a sense of experiencing something
> has no meaning unless it has grounding, and that requires associations
> in large numbers, which, according to your definition, could not be
> part of the hard problem.
>
>
>
> Plus, Richard, you have not even come close to addressing my statement
> that just because certain aspects of consciousness cannot be described
> verbally by the introspection of the brain, or by effects on the
> external behavior of the body itself, does not mean they cannot be
> subject to further analysis through scientific research --- such as by
> brain science, brain scanning, brain simulations, and advances in the
> understanding of AGIs.
>
>
>
> I have already spent way, way too much time on this response, so I
> will leave it at that.  If you want to think you have won the
> argument, fine.
>
>
>
> Because of time pressures I should not respond to any reply you make
> to this post, no matter how tempting it might be to do so.
>
>
>
> Perhaps others can do it for me. (But I doubt they will bother.)
>
>
>
> Ed Porter
>
>
>
>
>
> -----Original Message-----
> From: Richard Loosemore [mailto:[EMAIL PROTECTED]
> Sent: Thursday, November 20, 2008 1:56 PM
>
> To: agi@v2.listbox.com
> Subject: Re: [agi] A paper that actually does solve the problem of
> consciousness
>
>
>
> Ed Porter wrote:
>
>> Richard,
>>
>> In response to your below-copied email, I have the following responses
>> to the quoted portions below:
>>
>> ############### My prior post ################>>>>
>>
>>>  That aspects of consciousness seem real does not provide much of an
>>>  "explanation for consciousness."  It says something, but not much.
>>>  It adds little to Descartes' "I think, therefore I am."  I don't
>>>  think it provides much of an answer to any of the multiple questions
>>>  Wikipedia associates with Chalmers' hard problem of consciousness.
>>
>> ########### Richard said ############>>>>
>>
>> I would respond as follows.  When I make statements about
>> consciousness deserving to be called "real", I am only saying this as
>> a summary of a long argument that has gone before.  So it would not
>> really be fair to declare that this statement of mine "says something,
>> but not much" without taking account of the reasons that have been
>> building up toward that statement earlier in the paper.
>>
>> ###### My response ######>>>>
>>
>> Perhaps --- but this prior work, which you claim explains so much, is
>> not in the paper being discussed.  Without it, it is not clear how
>> much your paper itself contributes.  And Ben, who is much more
>> knowledgeable than I am on these things, seemed similarly unimpressed.
>
> I would say that it does.  I believe that the situation is that you do
> not yet understand it.  Ben has had similar trouble, but seems to be
> comprehending more of the issue as I respond to his questions.
>
> (I owe him one response right now:  I am working on it)
>
>> ########### Richard said ############>>>>
>>
>> I am arguing that when we probe the meaning of "real" we find that the
>> best criterion of realness is the way that the system builds a
>> population of concept-atoms that are (a) mutually consistent with one
>> another,
>>
>> ###### My response ######>>>>
>>
>> I don't know what "mutually consistent" means in this context, and
>> from my memory of reading your paper multiple times I don't think it
>> explains it, other than perhaps implying that the framework of atoms
>> represents experiential generalizations and associations, which would
>> presumably tend to represent the regularities of experienced reality.
>
> I'll grant you that one:  I did not explain in detail this idea of
> mutual consistency.
>
> However, the reason I did not is that I really had to assume some
> background, and I was hoping that the reader would already be aware of
> the general idea that cognitive systems build their knowledge in the
> form of concepts that are (largely) consistent with one another, and
> that it is this global consistency that lends strength to the whole.
> In other words, all the bits of our knowledge work together.
>
> A piece of knowledge like "The Loch Ness Monster lives in Loch Ness" is
> NOT a piece of knowledge that fits well with all of the rest of our
> knowledge, because we have little or no evidence that such a thing as
> the Loch Ness Monster has been photographed, observed by independent
> people, observed by several people at the same time, caught in a trap
> and taken to a museum, been found as skeletal remains, bumped into a
> boat, etc.  There are no links from the rest of our knowledge to the
> LNM fact, so we actually do not credit the LNM as being "real".
>
> By contrast, facts about Coelacanths are very well connected to the
> rest of our knowledge, and we believe that they do exist.
>
>> ########### Richard said ############>>>>
>>
>> and (b) strongly supported by sensory evidence (there are other
>> criteria, but those are the main ones).  If you think hard enough
>> about these criteria, you notice that the qualia-atoms (those
>> concept-atoms that cause the analysis mechanism to bottom out) score
>> very high indeed.  This is in dramatic contrast to other concept-atoms
>> like hallucinations, which we consider 'artifacts' precisely because
>> they score so low.  The difference between these two is so dramatic
>> that I think we need to allow the qualia-atoms to be called "real" by
>> all our usual criteria, BUT with the added feature that they cannot be
>> understood in any more basic terms.
>>
>> ###### My response ######>>>>
>>
>> You seem to be defining "real" here to mean believed to exist in what
>> is perceived as objective reality.  I personally believe a sense of
>> subjective reality is much more central to the concept of
>> consciousness.
>>
>> Personal computers of today, which most people don't think have
>> anything approaching a human-like consciousness, could in many tasks
>> make estimations of whether some signal was "real" in the sense of
>> representing something in objective reality, without being conscious.
>> But a powerful hallucination, combined with a human-level sense of
>> being conscious of it, does not appear to be something any current
>> computer can achieve.
>>
>> So if you are looking for the hard problems in consciousness, focus
>> more on the human subjective sense of awareness, not on whether there
>> is evidence that something is real in what we perceive as objective
>> reality.
>
> Alas, you have perhaps forgotten, or missed, the reason why "real" was
> being discussed in the paper, so you are discussing it out of its
> original context.
>
> So what you say in the above response does not relate to the paper.
>
>> ########### Richard said ############>>>>
>>
>> So to contradict that argument (to say "it is not clear that
>> consciousness is necessarily any less explainable than are many other
>> aspects of physical reality") you have to say why the argument does
>> not work.  It would make no sense for a person to simply assert the
>> opposite of the argument's conclusion, without justification.
>>
>> The argument goes into plenty of specific details, so there are many
>> kinds of attack that you could make.
>>
>> ###### My response ######>>>>
>>
>> First, I am not claiming that all aspects of consciousness can ever be
>> understood by science, since I do not believe all aspects of physical
>> reality can be understood by science.  I am saying that just as
>> science has greatly reduced the number of things about physical
>> reality that were once unexplainable, I think brain scanning, brain
>> science, computer neural simulations, and AGI will greatly reduce the
>> number of things about consciousness that cannot be explained.  Even
>> you yourself implied as much when I gave examples of such learning
>> about consciousness, which you dismissed as the easy problems of
>> consciousness.
>
> All questions about the "Easy" problems of consciousness are completely
> outside this discussion, because the paper ONLY addressed the hard
> problem of consciousness.
>
> You may say things about those "Easy" problems, but I must ignore them
> because they do not relate in any way to my argument.
>
> It would help if you could avoid mentioning them, because otherwise the
> discussion gets confused if you start an argument appearing to talk
> about the Hard Problem but then slip into one of the Easy problems.
>
>> Second, with regard to the bottoming out of the ability for analysis
>> in your molecular framework, I have two comments.
>>
>> (A) In the human brain, even the lowest-level nodes have some
>> associations with lateral or higher-level nodes.  So it is not as if
>> they are totally devoid of grounding, and thus of some source of
>> further explanation.  Thus, explanation would not bottom out with such
>> nodes, but it could lead to circular activation.  I certainly admit
>> there are limits to the extent to which a subjective consciousness can
>> obtain information about itself.  There is no way a consciousness can
>> model all of the computation that gives rise to it.
>
> But this statement misses the actual point that I was trying to make.
>
> I postulated the "analysis mechanism" to be specifically concerned with
> delivering answers to the question "What exactly is the nature of my
> subjective experience of [x], as opposed to the nature of all the
> extraneous connections and associations that [x] has with other
> concepts?"
>
> This question defines the Hard Problem of consciousness.  Therefore, I
> am ONLY interested, in my paper, in addressing the issue of what
> happens when the analysis mechanism tries to answer that question.
>
> You keep referring to all the other questions that a person could ask
> about (e.g.) the color red.  My paper makes no reference to any of
> those other questions that can be asked, and in fact the paper
> specifically and deliberately excludes all of those questions as being
> irrelevant.
>
> But in spite of that, you keep repeating that these questions have some
> relevance.  They do not.
>
> This is exactly the same as confusing the distinction between the easy
> and hard problems:  by mentioning these other senses of the
> "explanation" of redness, you have skipped right back into Easy
> problems that have nothing to do with the issue at hand.
>
>> (B) But the mere fact that your molecular framework is limited as to
>> what of its own computation and representation it can understand from
>> its own analysis of itself does not mean that scientific inquiry is
>> equally limited.  Just as scientific measurements, instruments, tests,
>> and computing have enabled humans to learn things about physical
>> reality that are far beyond the natural capabilities of our senses and
>> minds to perceive and understand, similarly there is reason to believe
>> the aids and methods of science can enable us to understand much about
>> the brain that is not available to us through the type of
>> introspective analysis that your paper is limited to.
>
> The paper says precisely why the above statement is not to be believed:
> it gives a mechanism to explain why there is an absolute barrier beyond
> which no explanation can go.
>
> What you are doing is saying "Oh, but future science must not be
> underestimated.....", but you are ignoring the way in which my argument
> addresses ALL of future science, regardless of how clever it might get.
>
> You are still just asserting that science may eventually crack the
> problem, and ignoring my request that you say exactly why the argument
> fails.
>
> Until you address that argument, this attack is a dead end.
>
>> Third, I am not making any claim about the ratio of what percent of
>> consciousness can be understood by science, compared to what percent
>> of physical reality can be understood by science.  Instead, I am
>> saying that I think great strides can be made in the understanding of
>> consciousness, and that much of what we currently consider unknowable
>> about consciousness, very possibly including many things that now
>> fall under Chalmers' hard problem of consciousness, we will either
>> know about, or have reasonable theories about, within fifty years, if
>> not before.
>
> See above comment.
>
> Please explain why my argument, which demonstrates that this statement
> of yours cannot be true, is somehow wrong.
>
> You do not address my argument, only repeat that it is wrong without
> saying why.
>
>> ########### Richard said ############>>>>
>>
>> One of the things that we can explain is that when someone (e.g. Ed
>> Porter) looks at a theory of consciousness, he will *have* to make the
>> statement that the theory does not address the hard problem of
>> consciousness.
>>
>> So the truth is that the argument (at that stage of the paper) is all
>> about WHY people have trouble being specific about what consciousness
>> is and what the explanation of consciousness is.  It does this by an
>> "in principle" argument:  in principle, if the analysis mechanism hits
>> a dead end when it tries to analyze certain concepts, the net result
>> will be that the system comes to conclusions like "There is something
>> real here, but we can say nothing about it".
>>
>> ###### My response ######>>>>
>>
>> Again, your paper relates to limits on the understanding of
>> consciousness that can be reached by introspection.  It does not prove
>> that similar limits will be imposed on obtaining information about the
>> operation of the mind from other techniques such as brain scanning,
>> brain science, and computer brain simulation.
>
> On the contrary, it does exactly that.
>
> IMO, you have simply not understood *how* it does that.
>
> In fact, you appear not to have understood the above statement that I
> made, just by itself, so it is difficult to reply without repeating it.
>
>> ########### Richard said ############>>>>
>>
>> Notice that the argument is not (at this stage) saying that
>> "consciousness is a bunch of dead-end concept atoms", it is simply
>> saying that those concept atoms cause the system to make a whole
>> variety of statements that exactly coincide with all the statements
>> that we see philosophers (amateur and professional) making about
>> consciousness.
>>
>> ###### My response ######>>>>
>>
>> Your paper is most clear in its attempts to explain why there are
>> limits to what an introspective use of consciousness can explain about
>> certain aspects of itself.  It makes an even less convincing argument
>> for why, despite these limitations, the system has any sense of
>> subjective consciousness at all.
>>
>> You claim the concept atoms of your system
>>
>> "cause the system to make a whole variety of statements that exactly
>> coincide with all the statements that we see philosophers ... making
>> about consciousness."
>>
>> But there is very little in your paper that explains how they
>> accomplish anything at all other than an inability to introspectively
>> answer certain questions, and that the system somehow senses these
>> inexplicable things to be real.
>>
>> So it actually explains very little about consciousness.
>
> Alas, other philosophers (knowledgeable about the whole field) have
> given an exactly opposite reaction, saying that the argument clearly
> does account, in principle, for many of the problematic questions.
>
> You appear to be unable to see what they see.
>
> I am having difficulty giving extra explanation to you that allows you
> to see what they see.
>
> I am close to giving up.
>
>> ############### My prior post ################>>>>
>>
>>>  THE REAL WEAKNESS OF YOUR PAPER IS THAT IT PUTS WAY TOO MUCH
>>>  EMPHASIS ON THE PART OF YOUR MOLECULAR FRAMEWORK THAT ALLEGEDLY
>>>  BOTTOMS OUT, AND NOT ENOUGH ON THE PART OF THE FRAMEWORK YOU SAY
>>>  REPORTS A SENSE OF REALNESS DESPITE SUCH BOTTOMING OUT -- THE SENSE
>>>  OF REALNESS THAT IS MOST ESSENTIAL TO CONSCIOUSNESS.
>>
>> ########### Richard said ############>>>>
>>
>> I could say more.  But why is this a weakness?  Does it break down,
>> become inconsistent, or leave something out?  I think you have to be
>> more specific about why this is a weak point.
>>
>> Every part of the paper could do with some expansion.  Alas, the limit
>> is six pages.....
>>
>> ###### My response ######>>>>
>>
>> It is a weakness because the real mystery of consciousness, the real
>> hard problem of consciousness --- whether Chalmers recognizes it or
>> not --- is not that there are certain things it cannot introspectively
>> explain, but rather that the human mind has a sense of subjective
>> reality and self-awareness.  Your paper spends much more time on the
>> easier, less important problem, and much less on the harder, much more
>> important problem.  And without the sense of awareness and realness
>> --- the source of which you never convincingly explain, other than to
>> say it exists and comes from the operation of the framework --- even
>> the major conclusion you do draw would be meaningless.
>>
>> So a "weakness" it is.
>
> Sorry, but the above paragraph is deeply confused.
>
> Chalmers DOES say something equivalent to "the real hard problem of
> consciousness [is] that the human mind has a sense of subjective
> reality and self-awareness".
>
> I also say the same thing.
>
> You appear not to recognize that we say this, nor that I address this
> explicitly.
>
>> ############### My prior post ################>>>>
>>
>>>  This is the part of your system that is providing the subjective
>>>  experience that you say is providing the "realness" to your
>>>  conscious experience.  This is where your paper should focus.  How
>>>  does it provide this sense of realness?
>>
>> ########### Richard said ############>>>>
>>
>> Well, no, this is the feature of the system that explains why we end
>> up convinced that there is something, and that it is inexplicable.
>>
>> ###### My response ######>>>>
>>
>> Exactly.  That's what I was trying to say.  Without these explanations
>> you just attributed to these features of the system, your paper
>> amounts to virtually nothing, because there would be no feeling that
>> something exists and is inexplicable, according to your own argument.
>>
>> You fail to explain how such feelings rise to a level that is
>> conscious, or subjectively real, as you describe in your paper.  That
>> explanation is what any paper claiming to explain consciousness should
>> be focusing on, i.e., what gives rise to the subjective sense of
>> experience.
>
> The paper does exactly what you ask here.
>
> You have not shown any sign that you understand *how* it shows that.
>
>> ########### Richard said ############>>>>
>>
>> Remember:  this is a hypothesis in which I say "IF this mechanism
>> exists, then we can see that all the things people say about
>> consciousness would be said by an intelligent system that had this
>> mechanism."  Then I am inviting you to conclude, along with me:  "It
>> seems that this mechanism is so basically plausible, and it also
>> captures the expected locutions of philosophers so well, that we
>> should conclude (Ockham's Razor) that this mechanism is very likely to
>> both exist and be the culprit."
>>
>> ###### My response ######>>>>
>>
>> You have lost me here.
>>
>> If, when you say "IF this mechanism exists," the mechanism you are
>> referring to is a part of your molecular framework that actually and
>> EXPLAINABLY does give rise to the subjective experience we humans call
>> consciousness, then, YES, you would have really said a lot.
>>
>> Unfortunately, your paper gives extremely minimal explanation of how
>> this sense of subjective consciousness arises, other than through what
>> is presumably a form of spreading activation in a network of patterns,
>> presumably learned experientially, and their associative connections.
>
> These statements do not in any way summarize or relate to what was said
> in the paper.
>
>> Now that is exactly the type of network that I believe is most likely
>> to give rise to human-like consciousness in humans and AGIs, but your
>> discussion sheds little light on how conscious awareness is derived
>> from such a system, and on what additional features such a system
>> would have to have to be conscious.
>
> On the contrary, it does exactly that.
>
>> ########### Richard said ############>>>>
>>
>> The "realness" issue is separate.
>>
>> Concepts are judged "real" by the system to the extent that they play
>> a very strongly anchored and consistent role in the foreground.  Color
>> concepts are anchored more strongly than any other; hence they are
>> very real.
>>
>> I could say more about this, for sure, but most of the philosophers I
>> have talked to have gotten this point fairly quickly.
>
> ...
> [Message clipped]



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects."  -- Robert
Heinlein


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com


