On Mon, Aug 22, 2011 at 2:51 AM, Craig Weinberg <whatsons...@gmail.com> wrote:

> But the problem I have with your idea of functional equivalence is
> that you seem to treat it as an objective property when the reality is
> that equivalence is completely contingent upon what functions you are
> talking about. AC power works until there is a blackout. Batteries
> work in a blackout but they wear out. They are different ways of
> achieving the same effect as far as getting what we want out of a
> radio amplifier, but there is no absolute 'functional equivalence'
> property out there beyond our own motives and senses.

Well yes, "functional equivalence" does depend on the function you are
talking about. The function I am talking about for neurons is the
ability to stimulate other neurons. An artificial neuron should
stimulate the neurons to which it is connected with the same timing as
the biological neuron it replaces. It doesn't have to be exactly the
same, just close enough.

> Replicate, but not emulate. If you want to make muscles out of
> something physical, what makes you so sure that it's possible to make
> emotion without physical neurotransmitters? Our consciousness is human
> meat. It exists nowhere else but in the context of living, healthy,
> human tissue. That fact should be of interest to us. Sure, the things
> that we do have patterns which we can neurologically abstract into
> 'logic' and apply that logic to other substances, but that doesn't
> ever have to mean that we can turn other substances into a living
> human or buffalo psyche.

If it's possible to reproduce basic neuronal function as described
above without neurotransmitters then it should be possible to
reproduce consciousness without neurotransmitters, as explained many
times.

> No, it doesn't work like that. You're assuming that all neurons do is
> the same thing over and over again. It's like saying you can write a
> program that watches activity of cnn.com for a while and then replaces
> the news stories with better news.

Neurons will do the same thing over and over again, if they are in the
same initial state and receive the same inputs. A neuron is a finite
state machine. Given certain inputs at the dendrites, it either will
or will not send an action potential down its axon. The trick is to
work out the state transition table describing its behaviour.
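
As a toy illustration of the state machine idea (not a biophysical
model; the states and threshold are invented), in Python:

# Toy finite-state-machine neuron: same initial state plus same inputs
# always yields the same outputs.
TRANSITIONS = {
    # (state, input above threshold?) -> (next state, fires?)
    ("resting",    True):  ("refractory", True),   # fire, then recover
    ("resting",    False): ("resting",    False),
    ("refractory", True):  ("resting",    False),  # can't fire again yet
    ("refractory", False): ("resting",    False),
}

def run(inputs, threshold=1.0, state="resting"):
    out = []
    for x in inputs:
        state, fired = TRANSITIONS[(state, x >= threshold)]
        out.append(fired)
    return out

# Deterministic: identical inputs give identical outputs, every time.
print(run([1.2, 0.3, 1.5, 1.5]))  # [True, False, True, False]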

>> So are you agreeing that the artificial neuron can in theory
>> replicate almost 100% of the behaviour of the biological neuron? What
>> would that be like if your entire visual cortex were replaced?
>
> It depends 50% on what you replace it with. If it was nothing but pure
> behavioral logic, maybe it would rapidly be compensated for with
> synesthesia. You would get the same knowledge you do from vision, but
> the qualia would be like using a GPS made of sound, feeling, smell,
> taste, balance, etc. Your brain would learn to use the device.

But your brain would be convinced that this ersatz vision was real,
since it would be getting the same signals in the same sequence from
the artificial visual cortex. For example, your language centre would
be forced to say that your vision was perfectly normal. In that case,
how do you know right now that you don't have GPS-like qualia rather
than the normal ones?

> As far as I know, even memories of visual qualia would be inaccessible
> if the visual parts of the brain were damaged. If not, the brain could
> maybe recover partial visual qualia by re-associating colors and forms
> with the memories of colors and forms (which, as you see, is not the
> same thing: staring at the sun is not possible for long, whereas you
> can imagine staring at the sun as long as you want. I assume.)

Yes, even the memories of visual qualia would be gone but you wouldn't
know it, and you would describe everything you thought you could
remember normally.

>> It seems to me that biology is sufficient since if you exactly
>> replicate the biology, you would replicate awareness.
>
> That's not the case. An identical twin is close to a biological
> replicate, but the awareness is not at all 'replicated'. They will share
> some personality traits but are by no means the same person. My own
> dad has an identical twin who has a very different personality and
> life path than he has, so I can verify that.

If two identical twins differ mentally, then obviously this is because
they differ physically in their brain configuration. My mental state
is different today than it was yesterday, and there is less difference
between my brain on two consecutive days than there would be between
the brains of identical twins.

> What biology gives you is access to awareness. Two computers can have
> the same hardware, but entirely different contents on their HD and
> entirely different users who put that content there.

A change in content on the HD changes the computer physically.

>> >  It's hugely anthropocentric to say "We are the magic monkeys that
>> > think we feel and see when of course we could only be pachinko
>> > machines responding to complex billiard-ball-like particle impacts".
>>
>> Of course that's what we are. This is completely obvious to me and I
>> have to make a real effort to understand how you could think
>> otherwise.
>
> You think that we are magic? That Homo sapiens invented awareness by
> accident in a universe that has no awareness? That to me is like
> saying that nuclear power plants invented radiation.

No, I think awareness happens when certain types of information
processing happen.

>> So, you agree: any device that acts just like a neuron has to be a
>> neuron. It doesn't have to look like a neuron, it could be a different
>> colour for example, it just has to behave like a neuron. Right?
>
> No. My answer is not going to change. If it doesn't look like a
> neuron, then there must be SOME difference. Whatever difference that
> is could either result itself in different interiority (≠Ψ) which
> results in different behavior pattern (BP) accumulations over time
> (let's call that Z factor {≠Ψ->Sum(≠BP/Δt)}), or the visible
> difference (≠v) could be the tip of the iceberg of subtle
> compositional differences which result in the same Z thing. It could
> be a different color with no Z factor - it depends on why it's a
> different color.

It won't be *exactly* the same, but if it's close enough it will do the
job. We look at a bird and we make an aircraft; what we are interested
in is a machine that flies, and it doesn't matter that the aircraft
lacks feathers.

>> If they are my opinions then I have control over them, don't I?
>
> Only if you subscribe to a view like mine. In your view, you are
> clearly stating that your opinions are biochemical processes, and
> therefore any semantic conception of them is strictly metaphysical and
> somewhat illusory.

My opinions are determined by biochemical processes. If the
biochemistry in my brain were different then my opinions would be
different. Where's the problem with that?

> Relevant to who? (To paraphrase Suicidal Tendencies), how can You say
> what My brain's best interests are? Until we know how to make the
> color blue from scratch or find the mathematical ingredient that makes
> a joke funny, we can't even come close to saying that we can know what
> is relevant to the production of awareness. The brain appears to be
> not much more than a big soft colony of coral. Nothing any cell does
> looks like it can wind up being funny or blue.

No, but put all the neurons together and they can.

>> If a person with a brain
>> prosthesis can have a normal conversation with you for an hour on a
>> wide range of topics, showing humour and emotion and creativity, that
>> would say something about how well the prosthesis was working. So what
>> if the prosthesis is a different colour and weighs a little bit more
>> compared to the original?
>
> In a real life medical situation, if a prosthesis was developed that
> appeared to work by the reports of the subjects themselves and the
> people around them, I would of course give it the benefit of the
> doubt. I'm not asserting with certainty that no such appliance can
> ever be developed. Philosophically however, there is no
> epistemological support for it, since we can't observe someone else's
> qualia. Like members of an isolated tribe being shown television for
> the first time, our assumption that there are people in the television
> set might be unfounded.

We can't observe their qualia but we guess that they have qualia from
the way they behave, even before we open up their heads to see if they
have a brain like ours.

> Because of that, if we are to determine the course of development of
> artificial neurology, in the face of compelling reasons to the
> contrary, I would make biological replication at the genetic level a
> top priority and computational simulation a distant second. Unless and
> until we have any success whatsoever in creating a device which seems
> on casual inspection to possess free will and feeling out of an
> inorganic material, it's really only academic. If someone thinks that
> there is no significant difference between living organisms and
> computer programs, then let them prove it, even on the most basic
> level.

What is the basic difference in your view, and what sort of proof would suffice?

>> If thoughts are generated by biochemical reactions in the brain
>
> They aren't. No more than this conversation is generated by electronic
> reactions in our computers.
>
>> and
>> only biochemical changes in the brain then there is a deterministic
>> chain of biochemical events from one brain state to another. That is,
>> we can say,
>
> Even that is doubtful. The biochemical changes in the brain, whether
> or not the exclusive cause of thought (which they aren't), are
> probably just like baseball games. Contingent upon unknowable outcomes
> of contests and motives on a molecular and cellular level that we
> can't understand and even they can't predict. Groups of living
> organisms are not just a machine, they are also a community. They
> collectively make decisions, as in photosynthesis and quorum sensing
> in bacteria.

But a baseball game would be predictable if we had all the initial
conditions, unless there were a truly random element involved, as in
radioactive decay. Even then we might be able to produce a very
accurate probabilistic model. The difficulty in practice is (a) having
the initial conditions, (b) having a good mathematical model, and (c)
having sufficient computational power.
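
Radioactive decay is actually a good example: each atom is individually
random, yet the aggregate is extremely predictable. A quick Monte Carlo
sketch in Python (all the numbers are arbitrary):

import random

# Each atom survives time t with probability 2^(-t/half_life). The
# individual outcomes are random, but the aggregate tracks the
# exponential decay law very closely. All values here are arbitrary.
def simulate_decay(n0=100000, half_life=10.0, t=25.0):
    p_survive = 2 ** (-t / half_life)
    survivors = sum(random.random() < p_survive for _ in range(n0))
    return survivors, n0 * p_survive

simulated, predicted = simulate_decay()
# The two counts agree to within statistical noise, a fraction of a percent.
print(simulated, round(predicted))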

>> (a) Release of dopamine from Neuron A triggers an action potential in
>> Neuron B which causes Muscle C to contract which causes Hand D to
>> rise,
>
> What caused Neuron A to release the dopamine in the first place?
> Nothing in the brain - it was caused by an event in the mind, or, more
> accurately an experience of the Self, which constellates as many
> overlapping events on different levels of sensation, emotion, and
> cognition. That's the reason the neuron fires, because something is
> happening to us personally. The neuron has no reason to fire or not
> fire on its own. It doesn't care, it just wants to eat glucose and
> participate in the society of the other neurons.

Neuron A was triggered to fire by the other neurons or sense organs to
which it is connected. There are also some neurons which fire
spontaneously (e.g.
http://www.jneurosci.org/content/20/24/9004.full.pdf). But even the
spontaneously firing neurons do so because that is the way their
biochemistry makes them behave. They don't suddenly start doing
bizarre and magical things. A table will move across the room because
it's pushed, or it may move by itself if there is an earthquake, but
it won't just move across the room with no external force at all.

>> or,
>>
>> (b) Release of dopamine from Neuron A generates a desire to lift one's
>> hand up, the dopamine then triggers an action potential in Neuron B
>> which is experienced as the intention of lifting one's hand up, and
>> Neuron B stimulates Muscle C to contract which is experienced as one's
>> hand actually rising.
>
> Great. So we are dopamine puppets from a neuron puppet master. It's
> not a legitimate possibility. If it were there would be no reason for
> anything like a 'desire' to be generated. It's completely superfluous.
> If Neuron A can trigger Neuron B without our help, then it surely
> would. It's like saying that maybe your thermostat has a DVD player in
> it that plays excerpts from the Wizard of Oz and then it turns on the
> furnace and then the house is warmed up which makes the DVD player
> choose a different scene of the movie.

"Neuron A can trigger Neuron B without our help" - what does that
mean? Do you think that we exist separately from our neurons, deciding
whether this one or that one will trigger? The self is just the
collection of neurons, acting together.

> I'm only continuing with this for the benefit of you or anyone else
> who might be interested in reading it. There is nothing in your
> arguments that I have not considered many times in many many long
> discussions. It's all very old news to me. It does help me communicate
> my view more clearly though so I don't mind, just don't get frustrated
> that I'm not going to ever go back to my (our) old worldview. I think
> that I mentioned that I used to hold the same views that you have now
> only a few years ago? It's almost correct, it's just inside out.

It seems that you have an emotional reaction to the idea that you are
no more than the biochemical reactions in your body. But not wanting
something to be true does not make it untrue.


-- 
Stathis Papaioannou
