On 8/23/2011 3:32 PM, Craig Weinberg wrote:
On Aug 23, 5:55 pm, meekerdb <meeke...@verizon.net> wrote:
On 8/23/2011 2:13 PM, Craig Weinberg wrote:

But the attempts at simulating birds failed completely. Airplanes
don't flap their wings. It took a much more basic understanding of
physics to grasp lift and drag, and it will take a much more elemental
understanding of sensory input and output to replicate human
consciousness. What you are saying about emulating consciousness by
computation is like saying 'if we record the wing flaps exactly, then
we should be able to make artificial birds out of concrete or glass'.
Some airplanes flap their wings and they fly in spite of being made of
wood and cloth and batteries and motors.  They could be made of concrete
and glass if those materials were functionally adequate.
Neurons could be made of concrete and glass too if those materials
were functionally adequate - but they aren't.

I see nothing to your argument but unsupported assertion that feeling is
inherent in biological materials.  Repeating it in different examples
doesn't make it any more convincing.  If feeling is an inherent property
of biological materials then there must be something special about the
atoms in neurons such that they provide feeling that atoms in
transistors don't.  Do you have a way to test this theory?
What I assert again and again is the possibility that feeling is a
qualitative continuum. Atoms aren't special in cells, but the
molecules feel a special way when they bind together, which
allows them to become a cell. Maybe cells can sense where
molecules can only detect. Not all cells are neurons, so maybe only
certain kinds of physiological contexts can 'feel'. It seems pretty
compelling to me.

That it's possible?  Or it's true?

The fact that we are even having this discussion
should remind us that we do think there is a difference in the first
place. If a transistor could feel like a neuron can feel, then our
computers would be building themselves a world of their own.

A non sequitur. Spiders don't build a world of their own, and neither do we. So why should robots, if we created intelligent/conscious robots?

They
would be pets and neighbors and not the automaton servants which we
have designed them to be.

Exactly. We don't try to create conscious robots. John McCarthy pointed out many years ago that it would be ethically suspect to create conscious robots when we just want them to serve us.

The way to test the theory is to implant transistors in our brains and
see if we notice a difference.

How will we know if the person with an implant notices a difference? Ask them? And how big a difference, and of what kind, would count as evidence? We already know that electrostimulation of spots in the brain can elicit memories and qualia.

Brent
