On 6/14/2014 11:23 PM, LizR wrote:
On 15 June 2014 18:12, meekerdb <meeke...@verizon.net 
<mailto:meeke...@verizon.net>> wrote:

    On 6/14/2014 10:19 PM, LizR wrote:
    On 15 June 2014 16:49, meekerdb <meeke...@verizon.net
    <mailto:meeke...@verizon.net>> wrote:

        On 6/14/2014 9:37 PM, LizR wrote:

            
http://news.nationalgeographic.com/news/2014/06/140608-regret-rats-neuroscience-behavior-animals-science


        Interesting that this experiment is all about qualia, which we're told
        are ineffable and can't be possessed by computers because they're not
        human.


    Yes. At least we assume there are qualia involved. The experiment only
    measures their "neural correlates" (since you can't ask a rat what it's
    experiencing, that's obviously all they can do).
    But if you asked and the rat replied, that would just be a different
    neural correlate.

No, it wouldn't. That would be the rat introspecting and telling you about its subjective impressions. There would presumably be neural correlates to that process, which could in principle be detected, but the rat replying wouldn't be those correlates; it would be the rat replying.

    However, I'm sure Bruno would be happy to allow a suitably programmed
    computer to have qualia.
    Bruno proposes that consciousness goes along with being able to understand
    the proof of Gödel's incompleteness theorem.  I think that's too high a
    bar. There must be different levels and kinds of consciousness.


He does? I must admit I haven't come across that in the parts of comp I've tried to understand so far. Could you elucidate?

Not really.  I just asked him directly what it would take for a computer to be
conscious.


    And there's this experiment:
    
http://www.the-scientist.com/?articles.view/articleNo/36705/title/Manipulating-Mouse-Memory/

    Which is another step toward being able to engineer consciousness.  Once
    that is possible, questions about qualia will seem just another way of
    talking about neuroscience.


I don't see how they're engineering consciousness.

I only said it was "a step toward".

Presumably the mouse's brain can do that (assuming mice are in fact conscious). As far as I can tell from the article, they appear to be creating false memories, i.e. changing what the mice are conscious /of/.

If you can determine what a mouse, or a computer, is conscious of, then you've engineered it. The point of using the word "engineer" is that engineering is about getting the right effects; you don't necessarily need a deep theory to do it.

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.