On 3/7/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
> This is so if there is a real physical world as distinct from the
> mathematical plenitude.

Do you have any particular reason(s) for believing in a mathematical
plenitude?  If so, I would much appreciate an explanation of these
reasons or citation of one or more papers that do so (other than the
historical/traditional arguments for Platonism/idealism, with which I
am familiar).
Your claims are interesting, but I don't see the point in getting into
too much debate about the consequences of living in a mathematical
universe sans physical reality without some reasons to consider it a
"live option".

> If there is no such separate physical world, then it isn't possible for
> something to be blessed with this quality of existence, because everything
> that is logically consistent exists.

Everything that is logically consistent?  What about logically
paraconsistent universes?  What about relevant logics?  What about
fuzzy-logical consistent universes?  What about any other
non-classical logics?  They're all maths, yet they are for the most
part inconsistent with one another.  The plenitude might contain all
of these possibilities, but then we cannot claim the mathematical
plenitude *in toto* as consistent.

Perhaps the plenitude is better defined otherwise: all possible
worlds/universes that are internally consistent with at least one
mathematical formalism, but not necessarily with one another.  We can sum
up such a reality as... well... "Everything and Anything", then, and don't
really need to truss it up / attempt to legitimize it by calling it
mathematical, as opposed to linguistic or conceptual or
chaotic/purely-random.

> The difficult answer is to try to define some measure on the mathematical
> structures in the Plenitude and show that orderly universes like ours
> thereby emerge.

Why do you think this is difficult?  Orderly universes like ours are
very clearly contained in a world of all possible mathematical
structures.  Perhaps you meant something else, something more
anthropically flavored.  Clarification appreciated.

> See this paper for an example of this sort of reasoning:
>
> http://parallel.hpc.unsw.edu.au/rks/docs/occam/

Thanks for the link.  I'll read this tonight.

> Egan). The usual counterargument is that in order to map a computation onto
> an arbitrary physical process, the mapping function must contain the
> computation already, but this is only significant for an external observer.
> The inhabitants of a virtual environment will not suddenly cease being
> conscious if all the manuals showing how an external observer might
> interpret what is going on in the computation are lost; it matters only
> that there is some such possible interpretation.

No, no, no.  It is the *act* of interpretation, coupled with the
arbitrary physical process, that gives rise to the relevantly
implemented computation.  You can't remove the interpreter and still
have the arb.phys.proc. be conscious (or computing algebra problems,
or whatever).

> Moreover, it is possible to map many computations to the one physical
> process. In the limiting case, a single state, perhaps the null state, can
> be mapped onto all computations.

When, and only when, coupled with a sufficiently complex computational
agent interpreting the state as such.

Maybe folks overlook this because their default/implicit
assumption/perspective is that interpreters aren't (or aren't themselves
implementing) computational processes?  Maybe.  Can't think of why else
this error is so common.
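
To make this concrete, here's a minimal sketch (my own illustration, not
anything from the thread; the toy computation and the state names are
invented) of why the mapping is doing all the work: to "find" a target
computation in an arbitrary physical process, you build a lookup table
from the process's states to the computation's states, and you can only
build that table by already having the computation's trace in hand.

    # Hypothetical illustration: "implementing" a target computation on an
    # arbitrary physical process by brute interpretation.  The toy
    # computation and the state names are invented for this example.

    def target_computation(x, steps=4):
        """A trivial computation whose trace we want to 'find' in the rock."""
        trace = [x]
        for _ in range(steps):
            x = x * 2
            trace.append(x)
        return trace  # e.g. [3, 6, 12, 24, 48]

    # An "arbitrary physical process": any sequence of distinguishable states.
    physical_states = ["rock_t0", "rock_t1", "rock_t2", "rock_t3", "rock_t4"]

    # The interpretation map pairs each physical state with a computational
    # state.  Constructing it requires the full trace of the target
    # computation -- the map is where the computation actually lives.
    interpretation = dict(zip(physical_states, target_computation(3)))

    # "Reading off" the computation from the rock is just consulting the map,
    # i.e. the interpreter (itself a computational process) doing the work.
    for state in physical_states:
        print(state, "->", interpretation[state])

The same construction goes through for a single repeated (or null) state:
you can map it onto any computation you like, but only because the
interpreting agent supplies the trace.
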
> If you found such an alien computer, it would be impossible to determine
> what it was thinking about without extra information, like trying to
> determine what an alien string of symbols means.

Not impossible at all (we *can* determine what alien strings of
symbols mean -- cf. the field of cryptography).  But the situation is
somewhat analogous: puzzling out what the alien computer did would be
somewhat like deciphering an alien love letter, say.

> There is, of course, the originally intended meaning, but once we remove
> the constraint of environmental interaction,

You didn't remove the constraint of environmental interaction, you
just changed the environment.

> what is there left for the computer itself, or for an external observer, to
> distinguish between the original meaning and every other possible meaning
> it may have had?

When you changed the environment from "someplace much like the places
humans in 2007 consider 'coal mines'" to "a virtual world that, when
interacted with via appropriate mediation, seems just like those
places deemed 'coal mines'", you added different interpretive /
interface mechanisms to the coal-mining robot.  That's why the
computer itself is distinguishable as having the original meaning
instead of all other meanings.

Consider a variation of your argument.  You could build a computer that
interacts with a coal mine, and which looks like it's mining coal, but
which in actuality is taking the inputs and outputs in that environment
and calculating answers to Diophantine equations with them.  By looking
at the inner workings of the computer and its interface with its
environment, we could possibly determine what it was really doing when it
appeared to be mining coal.  But until we do so, appearances can be
deceiving.

So the problem you seem to believe exists for computational agents in
non-traditional environments (what some currently call virtual or digital
environments) also exists for what you call "environment" (e.g., the coal
mine, and the rest of the Earth, too).  Sooo... yeah... your problem is
both meaningless (in much the way that "TRUE OR FALSE: The king of France
is bald." is meaningless when there is no king of France, because some of
its terms, like environment, rest on false assumptions / ungrounded
distinctions / nonexistent differences) and universally (non-)applicable,
because it arises even for the environments with which you contrast
computers (e.g., the coal mine).
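
As a rough sketch of that variation (purely illustrative; the sensor
values and both decoders are my own assumptions, not anything from the
thread), the same raw I/O stream can be read as coal-mining commands or
as number-theoretic checks, and nothing in the stream alone settles
which:

    # Hypothetical sketch: one I/O stream, two incompatible readings.  The
    # sensor values and both decoders are invented for illustration only.

    readings = [2, 3, 5, 1, 4]  # raw values exchanged with the "coal mine"

    def read_as_mining(vals):
        # Reading 1: each value is a depth (in metres) to drill to next.
        return [f"drill to {v} m" for v in vals]

    def read_as_diophantine(vals):
        # Reading 2: each value x is a candidate solution of x^2 - x - 2 = 0,
        # i.e. the machine is "really" testing a Diophantine equation.
        return [(v, v * v - v - 2 == 0) for v in vals]

    print(read_as_mining(readings))       # looks like a mining controller
    print(read_as_diophantine(readings))  # looks like a number-theory engine

Only by opening up the machine and its interfaces, as above, do we find
out which reading was actually built in.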

> The problem is with the physical supervenience thesis that usually goes
> together with computationalism. If we consider that mind is generated by
> computation on an abstract machine,

Yikes, but it isn't.  Nothing's generated on an abstract machine.
"Abstract machine" is another way of saying "schematic" or
"formalism".  Until it is implemented/built, nothing is generated by
it.  But I can see how you would come to this conclusion about
physical supervenience and computationalism if you believe
Searle/Putnam/Schutz/Bringsjord/Maudlin/et al. were right.  But they
weren't and aren't.  (Unless of course this *is* the mathematical
plenitude, and thus there are infinitely many worlds in which they are
right, and infinitely many worlds in which they are wrong, and there
is no answer relative to the plenitude as a whole regarding the nature
of consciousness...)
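
For what it's worth, here's a toy way to see the schematic/implementation
distinction (my own sketch; the machine description is an arbitrary
invented example): the abstract machine is just a description, a piece of
data that generates nothing by itself; something is generated only once
an interpreter actually executes that description.

    # Hypothetical sketch: an "abstract machine" as inert description vs. an
    # implementation that runs it.  The spec (a counter that halts at 3) is
    # an arbitrary example invented for illustration.

    # The abstract machine: just data.  On its own it generates nothing.
    abstract_machine = {
        "initial": 0,
        "step": lambda n: n + 1,   # transition rule
        "halt": lambda n: n >= 3,  # halting condition
    }

    # The implementation: an interpreter that executes the description.
    # Only here is anything generated.
    def run(machine):
        state = machine["initial"]
        trace = [state]
        while not machine["halt"](state):
            state = machine["step"](state)
            trace.append(state)
        return trace

    print(run(abstract_machine))  # [0, 1, 2, 3]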

Ahem.

--
Jeff Medina
