On 2/15/08, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
>
> I don't know when Lanier wrote the following but I would be interested to
> know what the AI folks here think about his critique (or direct me to a
> thread where this was already discussed). Also would someone be able to
> re-state his rainstorm thought experiment more clearly -- I am not sure I
> get it:
>
>      http://www.jaronlanier.com/aichapter.html


I take it the target of his rainstorm argument is the idea that the
essential features of consciousness are its information-processing
properties. If you have a physical system that processes the same
information as a brain does, but with a physical substrate different from
biological neurons -- e.g. silicon chips, raindrops or asteroids -- then
that physical system ought to be attributed the same[1] properties of
consciousness that we attribute to the brain. Let's call this view
Functionalism.

His first rainstorm example posits a rainstorm and a possible computer such
that when the computer takes the rainstorm as input, it performs the same
information-processing as your brain does. Functionalism is committed to the
idea that were that computer to be actualized and actually take that
rainstorm as input and thus actually perform the same information-processing
as your brain does, then that system would be conscious. Lanier suggests
that Functionalism would say the rainstorm taken all by itself would be
conscious, that this is ridiculous, and that Functionalism ought therefore
to be rejected. Then he realizes that's a crappy straw-man argument, since
the rainstorm by itself is just a passive potential program not actually
processing any information, and he moves on to his next example.

Next, he seems to posit another possible computer that would "treat" a
bigger rainstorm as both the rainstorm and the computer of the last example.
He seems to rely on the following implicit premise to conclude that there
now really is an information-processing system isomorphic to your brain:

  If there is a possible computer that "treats" the bigger rainstorm as an
  information-processing system, then that rainstorm is in fact such an
  information-processing system.

He tweaks this example in response to some imagined worries to make sure the
rainstorm is treated as an information-processing system whose changes over
time are isomorphic to your brain's dynamics. He expects us to conclude
that, according to Functionalism, such a rainstorm would in fact be an
isomorphic information-processing system, hence conscious, and that
Functionalism should therefore be rejected for delivering such an absurd
result.
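
To see just how cheap that implicit premise is, here is a toy sketch (my
own illustration, in Python; the snapshot labels are invented for the
example) of how one can always "treat" an arbitrary data stream as
implementing an arbitrary computation, simply by building a lookup table
after the fact:

    # "Treating" an arbitrary data stream as an arbitrary computation by
    # pairing up snapshots after the fact. Nothing about the rain itself
    # constrains which computation it gets mapped onto.

    rain_snapshots = ["drops-00", "drops-01", "drops-02", "drops-03"]
    brain_snapshots = ["state-A", "state-B", "state-C", "state-D"]

    # The "possible computer" is just this table, built by pairing the
    # i-th rain snapshot with the i-th brain snapshot:
    decoder = dict(zip(rain_snapshots, brain_snapshots))

    # Under the decoder, the rainstorm "runs through" the brain's states:
    for snapshot in rain_snapshots:
        print(snapshot, "->", decoder[snapshot])

All of the structure lives in the decoder and none of it in the rain, which
is exactly why a Functionalist shouldn't grant that being
possibly-decodable-as a brain suffices for being one.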

Given his next tweak, it seems clear that whatever it is to "treat"
something as an information-processing system, it is some sort of rather
superficial correlation of various data that does not take into account any
genuine causal influences. In his next example, he tries to fix this by
positing asteroids, which do have some sort of causal influence on each
other, and then performing the same trick of correlating asteroid data with
brain data via a possible computer. He seems to think the worry
Functionalists would have is simply *that* the raindrops don't causally
interact, a worry that would be fixed by merely positing a system with
*some* kind of causal interaction. But presumably any sensible
Functionalist would care not just about there being some kind of causal
interaction, but about interactions of the right kind, e.g. of the same
complex sort that biological neurons exert on each other.
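
One natural way to cash out "the right kind" (my gloss, not anything Lanier
or any particular Functionalist spells out) is to require that the mapping
from physical states to brain states commute with each system's own
dynamics, rather than merely pairing up snapshots. A toy sketch in Python,
with invented transition functions:

    # An implementation mapping f should satisfy, for every state s:
    #     f(step_physical(s)) == step_brain(f(s))
    # i.e. it must track how the states actually evolve, not just
    # relabel a list of snapshots.

    physical_states = ["a", "b", "c", "d"]   # toy raindrop/asteroid states

    def step_physical(s):                    # toy physical dynamics
        return physical_states[(physical_states.index(s) + 1) % 4]

    def step_brain(b):                       # toy brain dynamics
        return (b + 1) % 4

    def is_implementation(f):
        return all(f[step_physical(s)] == step_brain(f[s])
                   for s in physical_states)

    good = {"a": 0, "b": 1, "c": 2, "d": 3}  # respects the dynamics
    bad = {"a": 0, "b": 2, "c": 1, "d": 3}   # mere snapshot pairing

    print(is_implementation(good))           # True
    print(is_implementation(bad))            # False

The snapshot-pairing trick from the rainstorm examples passes no such test;
it supports no counterfactuals about what the system would have done had its
state been different.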

In order to really fix this example, it seems he would have to posit
asteroids whose gravitational effects on each other are genuinely isomorphic
to all the causal interactions the physical particles making up our brain
have on each other. There'd be no need in this example to posit any possible
computer to correlate data from asteroids with computations because the
asteroids would have formed an actual computational system. But if there
really were such an intricate network of interacting asteroids, it seems to
me the Functionalist would no longer treat it as an absurd result and
happily concede that the asteroids have miraculously formed into an actual
computer, or at least an information-processing system, that is relevantly
similar to a brain and therefore conscious. (I imagine the chances of this
are vanishingly small, at least for most finite regions of space, since the
dynamic information-processing relevant to positing consciousness would
probably require this complex network of asteroids to persist for some time
with its functional integrity intact.)

To be fair, some AI theorists and perhaps even philosophers (maybe Daniel
Dennett?) do seem to have embraced the crucial implicit premise he relied on
that all there is to being an information-processing system is that it be
treated as such or even possibly treated as such. However, I see no reason
to think Functionalism is committed to such a stupid view. I certainly don't
(yet) have a fully worked out theory of this, but the most systematic
attempt to address these issues can be found in philosopher Fred Dretske's
book "Knowledge and the Flow of Information." He doesn't explicitly address
issues of consciousness, but on plausible views of consciousness,
understanding information-processing and representation is at least an
all-important necessary first step (and the focus of Dretske's book). I'm
sure many of you are familiar with Shannon's work on information theory, but
that was mostly just studying quantity of information. Dretske is concerned
not with how much information is (or can be) communicated but with providing
a systematic *semantic* theory of *what* information is communicated and
what the conditions are for such communication and processing of
information.[2]
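
For concreteness (I'm reconstructing these from memory, so take the exact
formulations with a grain of salt): Shannon's entropy measures how much
information a source generates on average, while Dretske's central
definition says what a particular signal carries:

    Shannon (quantity):  H(S) = - SUM_i p(s_i) log2 p(s_i)

    Dretske (content):   a signal r carries the information that s is F
                         just in case P(s is F | r, k) = 1, where k is the
                         receiver's background knowledge, and
                         P(s is F | k) is less than 1.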

I am beginning to suspect that understanding these philosophical issues
(often called a theory of intentionality) is absolutely crucial to
computationalizing meta-ethics and conceptual analysis, which I take to be
prerequisites to the development of Friendly AI Theory and therefore, a safe
Singularity. But that would be a whole other post/essay and I have a more
traditional ethics dissertation I need to work on at the moment.

-------
[1] The plausible view here is that "same" should be read as qualitative
identity rather than numerical identity. If I buy a Toyota Corolla with my
spouse, we have the numerically same car, that is, there is one car that we
both own. If my friend then goes and buys a Toyota Corolla of the same
model, then there's a sense in which we might say we own the same car, but
what we mean is that there are two cars which we respectively own and they
share roughly the same properties. It seems to me that Lanier equivocates
between these two meanings of sameness when he asks, "Is it conscious as
being specifically you, since it implements you?" and perhaps when he
writes, "You should realize by now that your brain is simultaneously
implemented everywhere." He seems to suggest that, for instance, the
existence of a rainstorm somewhere whose information-processing is
qualitatively identical to your brain's is just as good as the continued
existence of an information-processing system spatio-temporally and perhaps
psychologically continuous with your brain, i.e. the normal way in which
our selves persist into the future.

[2] Unfortunately, I think Dretske's work is incomplete at best and from
what I hear, he has lately converted to a theory of intentionality more
focused on adaptive history. On such a view, if it turned out that you did
not in fact have the evolutionary and psychological history you thought you
did, but rather spontaneously formed a moment ago in the exact same physical
configuration, then you would not in fact have any representations (or
consciousness, to the extent that mental representations are a prerequisite
for consciousness), no matter how much it seems to you that you really are
doing such things as forming a representation of (and being conscious of)
reading these words.
