On 3/8/07, Jeff Medina <[EMAIL PROTECTED]> wrote:


On 3/7/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
> This is so if there is a real physical world as distinct from the
> mathematical plenitude.

Do you have any particular reason(s) for believing in a mathematical
plenitude?  If so, I would much appreciate an explanation of these
reasons or citation of one or more papers that do so (other than the
historical/traditional arguments for Platonism/idealism, with which I
am familiar).


It is simpler, explains (with the anthropic principle) fine tuning, and is
not contingent on an act of God or a brute fact physical reality ("the real
world just exists, for no particular reason, so there"). Some relevant
papers in addition to the Russell Standish one:

http://www.idsia.ch/~juergen/everything/html.html

http://space.mit.edu/home/tegmark/multiverse.pdf

http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHAL.htm

The last paper goes through an argument purporting to show that if
computationalism is the true theory of mind, then the apparent physical
world emerges from mathematical reality. This crucially depends on the
demonstration in a paper by Tim Maudlin that consciousness cannot supervene
on physical activity, which I gather from below you don't accept.


Your claims are interesting, but I don't see the point in getting into
too much debate about the consequences of living in a mathematical
universe sans physical reality without some reasons to consider it a
"live option".

> If there is no such separate physical world, then it
> isn't possible for something to be blessed with this quality of
> existence, because everything that is logically consistent exists.

Everything that is logically consistent?  What about logically
paraconsistent universes?  What about relevant logics?  What about
fuzzy-logical consistent universes?  What about any other
non-classical logics?  They're all maths, yet they are for the most
part inconsistent with one another.  The plenitude might contain all
of these possibilities, but then we cannot claim the mathematical
plenitude *in toto* as consistent.


But it's only particular substructures in the plenitude which are
self-aware, and they seem to have a computational structure. The anthropic
principle makes them stand out from the noise.

Perhaps the plenitude is better defined otherwise.  All possible
worlds/universes that are internally consistent with at least one
mathematical formalism, but not necessarily with one another.  We can
sum up such a reality by... well... "Everything and Anything", then,
and don't really need to truss it up / attempt to legitimize it by
calling it mathematical, as opposed to linguistic or conceptual or
chaotic/purely-random.

> The difficult answer is to try to define
> some measure on the mathematical structures in the Plenitude and show
> that orderly universes like ours thereby emerge.

Why do you think this is difficult?  Orderly universes like ours are
very clearly contained in a world of all possible mathematical
structures.  Perhaps you meant something else, something more
anthropically flavored.  Clarification appreciated.


One of the main problems with ensemble theories is the so-called failure of
induction. If everything that can happen does happen, then why should I not
expect my keyboard to turn into a fire-breathing dragon in the next moment?
There must be a non-zero probability that I will experience this because it
must happen in some universe, but the challenge is to show why the
probability is very low.
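
A toy sketch of the kind of measure that is meant to do this work (my own illustration, assuming a Solomonoff-style 2^-length prior over the programs that generate histories; the specific bit-lengths are hypothetical, not taken from any of the papers):

```python
# Toy illustration of a 2^-length ("Occam") measure over histories,
# where each history is identified with a binary program that generates
# it. The programs and bit-lengths below are made up for illustration.

def prior_weight(program_bits: str) -> float:
    """Weight of one generating program under the 2^-length measure."""
    return 2.0 ** -len(program_bits)

# Suppose an orderly continuation of my experience compresses to a
# 10-bit rule, while a keyboard-turns-into-dragon history needs 60 bits.
orderly = prior_weight("0" * 10)
dragon = prior_weight("0" * 60)

print(dragon / orderly)  # 2**-50: nonzero, but astronomically small
```

On such a measure the dragon history still occurs in some universe, but almost all of the measure concentrates on the compressible, orderly continuations.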

> See this paper for an example of
> this sort of reasoning:
>
> http://parallel.hpc.unsw.edu.au/rks/docs/occam/

Thanks for the link.  I'll read this tonight.

> > >Egan). The usual counterargument is that in order to map a
> > >computation onto an arbitrary physical process, the mapping function
> > >must contain the computation already, but this is only significant
> > >for an external observer. The inhabitants of a virtual environment
> > >will not suddenly cease being conscious if all the manuals showing
> > >how an external observer might interpret what is going on in the
> > >computation are lost; it matters only that there is some such
> > >possible interpretation.

No, no, no.  It is the *act* of interpretation, coupled with the
arbitrary physical process, that gives rise to the relevantly
implemented computation.  You can't remove the interpreter and still
have the arb.phys.proc. be conscious (or computing algebra problems,
or whatever).


Of course, without the act of interpretation the computation is useless and
meaningless, like saying that a page covered in ink contains any given
English sentence. But what if the putative computation creates its own
observer? It would seem that this is sufficient to bootstrap itself into
meaningfulness, albeit cut off from interaction with the substrate of its
implementation.
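
The "mapping function must contain the computation already" point can be made concrete with a toy example (my own construction, not from the thread; the computation and the state labels are arbitrary):

```python
# Toy version of the counterargument: any physical state sequence can be
# mapped onto any computation, but only via an interpretation that was
# itself built by running the computation.

def intended_computation(n: int) -> int:
    """The computation we want to 'find' in the physical process."""
    return n * n

# Arbitrary, structureless "physical states" -- think of a rock or wall.
rock_states = ["s0", "s1", "s2", "s3"]

# The interpretation map is constructed by running the computation, so
# all the computational work lives in the interpreter, not the rock.
interpretation = {s: intended_computation(i) for i, s in enumerate(rock_states)}

results = [interpretation[s] for s in rock_states]
print(results)  # [0, 1, 4, 9]
```

The rock "computes" squares only relative to a map that already contains every answer, which is exactly why the mapping is trivial for an external observer yet arguably irrelevant to observers internal to the computation.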


> > >Moreover, it is possible to map many computations to the one
> > >physical process. In the limiting case, a single state, perhaps the
> > >null state, can be mapped onto all computations.

When, and only when, coupled with a sufficiently complex computational
agent interpreting the state as such.

Maybe folks overlook this because their default/implicit
assumption/perspective is that interpreters aren't (or aren't
implementing) computational processes?  Maybe.  Can't think of why
else this error is so common.


What about a sealed and self-contained virtual environment complete with
observers, but with no possibility of ever interacting with the outside
world?

> If you found
> such an alien computer, it would be impossible to determine what it was
> thinking about without extra information, like trying to determine what
> an alien string of symbols means.

Not impossible at all (we *can* determine what alien strings of
symbols mean -- cf. the field of cryptography).  But the situation is
somewhat analogous: puzzling out what the alien computer did would be
somewhat like deciphering an alien love letter, say.


If we could determine the syntax (impossible if it is encoded using a
one-time pad), we still would not be able to determine the semantics. What
is it about the word "cat" that would suggest a small furry animal with
pointy ears and whiskers? You would have to have some clue external to the
language itself to work this out. Imagine an alien computer with no I/O
implementing a virtual environment with conscious observers, designed and
programmed according to the radioactive decay patterns of a sacred stone.
Are its inhabitants any less conscious because the aliens have all died and
the sacred stone has been lost? Would we have any luck fathoming this
computer if we found it? If we did come up with several possible
interpretations, is there any basis for saying that one of them is correct
and the others not?


> There is, of course, the originally intended
> meaning, but once we remove the constraint of environmental interaction,

You didn't remove the constraint of environmental interaction, you
just changed the environment.

> what is there left for the computer itself, or for an external observer,
> to distinguish between the original meaning and every other possible
> meaning it may have had?

When you changed the environment from "someplace much like the places
humans in 2007 consider 'coal mines'" to "a virtual world that, when
interacted with via appropriate mediation, seems just like those
places deemed 'coal mines'", you added different interpretive /
interface mechanisms to the coal-mining robot.  That's why the
computer itself is distinguishable as having the original meaning
instead of all other meanings.

Consider a variation of your argument.  You could build a computer
that interacts with a coal mine, and which looks like it's mining
coal, but which in actuality is taking the inputs and outputs in that
environment and calculating answers to Diophantine equations with
them.
By looking at the inner workings of the computer and its interface
with its environment, we could possibly determine what it was really
doing when it appeared to be mining coal.  But until we do so,
appearances can be deceiving.  So the problem you seem to believe
exists for computational agents existing in non-traditional
environments (what are currently called virtual or digital
environments by some) also exists for what you call "environment"
(e.g., the coal mine, and the rest of the Earth, too).  Sooo...
yeah... your problem is both meaningless (in much the way that the
problem "TRUE OR FALSE: The king of France is bald." is meaningless
when there is no king of France, because some of its terms, like
environment, are based on false assumptions / ungrounded distinctions
/ nonexistent differences), and universally (non-)applicable, because
it arises even for the environments with which you contrast computers
(e.g., the coal mine).

> The problem is with the physical supervenience thesis that usually goes
> together with computationalism. If we consider that mind is generated by
> computation on an abstract machine,

Yikes, but it isn't.  Nothing's generated on an abstract machine.
"Abstract machine" is another way of saying "schematic" or
"formalism".  Until it is implemented/built, nothing is generated by
it.  But I can see how you would come to this conclusion about
physical supervenience and computationalism if you believe
Searle/Putnam/Schutz/Bringsjord/Maudlin/et al. were right.  But they
weren't and aren't.  (Unless of course this *is* the mathematical
plenitude, and thus there are infinitely many worlds in which they are
right, and infinitely many worlds in which they are wrong, and there
is no answer relative to the plenitude as a whole regarding the nature
of consciousness...)


I guess I'm taking the Searle et al. criticism of (standard) computationalism
seriously. Could you briefly summarise what's wrong with it?

Stathis Papaioannou

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983
