Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stan Nilsen [EMAIL PROTECTED] wrote:

 It seems that when philosophy is implemented it becomes like nuclear
 physics e.g. break down all the things we essentially understand until
 we come up with pieces, which we give names to, and then admit we don't
 know what the names identify - other than broken pieces of something we
 used to understand when it was whole.  My limited experience with those
 who practice philosophy is that they love to go to the absurd - I
 suspect this is meant as a means of proof, but often comes across as
 macho philosophoso.  Kind of "I can prove anything you say is absurd."
 I welcome the thoughts of Philosophers.

I think most or at least many philosophers, myself included, would
actually agree that most of what (usually other) philosophers produce
is garbage. Of course, they won't agree about *which* philosophical
views and methods are garbage. I would propose that the primary
explanation for this is simply that philosophy is really, really hard.
Philosophy is almost by definition the set of areas of intellectual
inquiry in which there is little established methodology. (I think that
is a little overstated, since at least in analytic philosophy there is
broad agreement on the logical structure of arguments and rather less
broad but growing agreement on the nature of conceptual analysis.)

Notice that it is not just philosophers who say stupid stuff in
philosophy. Evolutionary biologists, computer scientists, economists,
other scientists, and just people in general can all be found saying stupid
things when they try to venture into ethics, philosophy of mind,
philosophy of science, etc. In fact, I would say that professional
philosophers have a significantly better track record in philosophy
than people in general or the scientific community when they venture
into philosophy (which may not say very much about their track record
on an absolute scale).

By the way, I think this whole tangent was actually started by Richard
misinterpreting Lanier's argument (though quite understandably given
Lanier's vagueness and unclarity). Lanier was not imagining the
amazing coincidence of a genuine computer being implemented in a
rainstorm, i.e. one that is robustly implementing all the right causal
laws and the strong conditionals Chalmers talks about. Rather, he was
imagining the more ordinary and really not very amazing coincidence of
a rainstorm bearing a certain superficial isomorphism to just a trace
of the right kind of computation. He rightly notes that if
functionalism were committed to such a rainstorm being conscious, it
should be rejected. I think this is true whether or not such
rainstorms actually exist or are likely since a correct theory of our
concepts should deliver the right results as the concept is applied to
any genuine possibility. For instance, if someone's ethical theory
delivers the result that it is perfectly permissible to press a button
that would cause all conscious beings to suffer for all eternity, then
it is no legitimate defense to claim that's okay because it's really
unlikely. As I tried to explain, I think Lanier's argument fails
because he doesn't establish that functionalism is committed to the
absurd result that the rainstorms he discusses are conscious or
genuinely implementing computation. If, on the other hand, Lanier were
imagining a rainstorm miraculously implementing real computation (in
the way Chalmers discusses) and somehow thought that was a problem for
functionalism, then of course Richard's reply would roughly be the
correct one.
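
To make the distinction concrete in programming terms (a toy Python
sketch of my own, with invented names -- nothing like this appears in
Lanier or Chalmers): a genuine implementation answers correctly for
*any* input, i.e. it supports the counterfactuals, whereas a trace is
just a record that happens to match one particular run.

# A genuine implementation: a parity machine defined by a transition table.
TRANSITIONS = {("even", 0): "even", ("even", 1): "odd",
               ("odd", 0): "odd", ("odd", 1): "even"}

def run_parity_machine(bits):
    # Robust implementation: yields the right state sequence for ANY input,
    # thereby supporting the strong conditionals Chalmers requires.
    state = "even"
    trace = [state]
    for bit in bits:
        state = TRANSITIONS[(state, bit)]
        trace.append(state)
    return trace

# A mere trace: a fixed pattern that coincidentally matches one run.
raindrop_pattern = ["even", "odd", "odd", "even"]

print(run_parity_machine([1, 0, 1]) == raindrop_pattern)  # True: matches this run
print(run_parity_machine([0, 1, 1]))  # the machine handles other inputs too;
                                      # the pattern supports no counterfactuals

Lanier's rainstorm is like raindrop_pattern: superficially isomorphic to
one run of the machine, but implementing none of the conditional
structure that constitutes the computation.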



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stathis Papaioannou [EMAIL PROTECTED] wrote:
 On 21/02/2008, John Ku [EMAIL PROTECTED] wrote:

   By the way, I think this whole tangent was actually started by Richard
   misinterpreting Lanier's argument (though quite understandably given
   Lanier's vagueness and unclarity). Lanier was not imagining the
   amazing coincidence of a genuine computer being implemented in a
   rainstorm, i.e. one that is robustly implementing all the right causal
   laws and the strong conditionals Chalmers talks about. Rather, he was
   imagining the more ordinary and really not very amazing coincidence of
   a rainstorm bearing a certain superficial isomorphism to just a trace
   of the right kind of computation. He rightly notes that if
   functionalism were committed to such a rainstorm being conscious, it
   should be rejected.

 Only if it is incompatible with the world we observe.

I think that's the wrong way to think about philosophical issues. It
seems you are trying to import a scientific method into a philosophical
domain where it does not belong. Functionalism is a view about how our
concepts work. It is not tested by whether it is falsified by
observations about the world.

Or if you prefer, conceptual analysis does produce scientific
hypotheses about the world, but the part of the world in question is
within our own heads, something that we ourselves don't have
transparent access to. If we had transparent access to the way our
concepts work, the task of cognitive science and philosophy and along
with it much of AI would be considerably easier. Our best way of
testing these hypotheses at the moment is to see whether a proposed
analysis would best explain our uses of the concept and our conceptual
intuitions.

Sometimes, especially when in the grip of a theory, people can (often
only partially) switch which concept is linked to a lexical item without
realizing that they are using the word differently from others
(including their past selves). Then the debate gets much more
complicated and may, among other things, have to get into the normative
issue of which concept(s) we ought to use.
Chances are, though, unless the revision was carefully thought out and
defended rather than accidentally slipped into, it will not serve the
presumably important functions for which we had the original concept.



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread John Ku
On 2/18/08, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 By the way, Lanier's idea is not original. Hilary Putnam, John Searle,
 Tim Maudlin, Greg Egan, Hans Moravec, David Chalmers (see the paper
 cited by Kaj Sotola in the original thread -
 http://consc.net/papers/rock.html) have all considered variations on
 the theme. At the very least, this should indicate that the idea
 cannot be dismissed as just obviously ridiculous and unworthy of
 careful thought.

Yes, you've shown either that, or that even some occasionally
intelligent and competent philosophers sometimes take seriously ideas
that really can be dismissed as obviously ridiculous -- ideas which
would be unworthy of careful thought were it not for the fact that
pinpointing exactly why such ridiculous ideas are wrong is so often
fruitful (as in the Chalmers article).



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread John Ku
On 2/17/08, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 If computation is multiply realizable, it could be seen as being
 implemented by an endless variety of physical systems, with the right
 mapping or interpretation, since anything at all could be arbitrarily
 chosen to represent a tape, a one, a zero, or whatever.

Sure, pretty much anything could be used as a symbol to represent
anything else, but the representing would consist in the network of
causal interactions that constitute the symbol manipulation, not in
the symbols themselves. (And certainly not in anyone having to be
around to understand the machinery of symbol manipulation going on.)
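
A toy Python sketch of the point (names invented for illustration):
swap in any objects whatsoever as the symbols and the computation is
untouched, because it lives in the structure of transitions among the
symbols rather than in the symbols themselves.

def make_incrementer(zero, one):
    # Binary increment over an arbitrary choice of token objects.
    def increment(bits):
        result, carry = [], True
        for b in reversed(bits):
            if carry and b == one:
                result.append(zero)           # 1 plus carry -> 0, carry on
            elif carry:
                result.append(one)            # 0 plus carry -> 1, done carrying
                carry = False
            else:
                result.append(b)              # no carry: copy the token
        if carry:
            result.append(one)
        return list(reversed(result))
    return increment

# The same computation realized with utterly different 'symbols':
inc_ints = make_incrementer(0, 1)
inc_words = make_incrementer("rain", "storm")
print(inc_ints([1, 0, 1]))                    # [1, 1, 0]  (5 -> 6)
print(inc_words(["storm", "rain", "storm"]))  # ['storm', 'storm', 'rain']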



Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread John Ku
On 2/17/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Nevertheless we can make similar reductions to absurdity with respect to
 qualia, that which distinguishes you from a philosophical zombie.  There is no
 experiment to distinguish whether you actually experience redness when you see
 a red object, or simply behave as if you do.  Nor is there any aspect of this
 behavior that could not (at least in theory) be simulated by a machine.

You are relying on a partial conceptual analysis of qualia or
consciousness by Chalmers that maintains that there could be an exact
physical duplicate of you that is not conscious (a philosophical
zombie). While he is in general a great philosopher, I suspect his
arguments here ultimately rely too much on moving from "I can create
a mental image of a physical duplicate and subtract my image of
consciousness from it" to "therefore, such things are possible."

At any rate, a functionalist would not accept that analysis. On a
functionalist account, consciousness would reduce to something like
certain representational activities which could be understood in
information processing terms. A physical duplicate of you would have
the same information processing properties, hence the same
consciousness properties. Once we understand the relevant properties
it would be possible to test whether something is conscious or not by
seeing what information it is or is not capable of processing. It is
hard to test right now because we have only very incomplete conceptual
analyses.



Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread John Ku
On 2/16/08, Matt Mahoney [EMAIL PROTECTED] wrote:

  I would prefer to leave behind these counterfactuals altogether and
  try to use information theory and control theory to achieve a precise
  understanding of what it is for something to be the standard(s) in
  terms of which we are able to deliberate. Since our normative concepts
  (e.g. 'should', 'reason', 'ought', etc.) are fundamentally about guiding our
  attitudes through deliberation, I think they can then be analyzed in
  terms of what those deliberative standards prescribe.

 I agree.  I prefer the approach of predicting what we *will* do as opposed to
 what we *ought* to do.  It makes no sense to talk about a right or wrong
 approach when our concepts of right and wrong are programmable.

I don't quite follow. I was arguing for a particular way of analyzing
our talk of right and wrong, not abandoning such talk. Although our
concepts are programmable, what matters is what follows from our
current concepts as they are.

There are two main ways in which my analysis would differ from simply
predicting what we will do. First, we might make an error in applying
our deliberative standards or tracking what actually follows from
them. Second, even once we reach some conclusion about what is
prescribed by our deliberative standards, we may not act in accordance
with that conclusion out of weakness of will.

Allowing for the possibility of genuine error is one of the big tasks
to be accomplished by a theory of intentionality. Take an example from
our more ordinary concepts, though the same types of problems will
arise for our deliberative standards. If I see a cow in the night and
my concept of horse fires, what makes it the case that this particular
firing of 'horse' is an error? Why does my concept 'horse' really
refer correctly only to horses rather than to the disjunction
horses-or-cows-in-the-night? (Although I earlier mentioned that I
think Dretske's information theoretic semantics is probably the most
promising theory of intentionality, it is at the moment unable to
deliver the right semantics in the face of these types of errors.)
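
A toy Python illustration of this disjunction problem (hypothetical
names): the detector's actual dispositions fit both candidate contents
equally well, so those dispositions alone cannot mark the
cow-in-the-night firing as an error.

def detector_fires(animal, lighting):
    # Fires on horses in any light, and also on cows in the dark.
    return animal == "horse" or (animal == "cow" and lighting == "night")

# Two candidate contents for what the detector represents:
def content_horse(a, l):
    return a == "horse"

def content_disjunct(a, l):
    return a == "horse" or (a == "cow" and l == "night")

for a, l in [("horse", "day"), ("horse", "night"),
             ("cow", "day"), ("cow", "night")]:
    fired = detector_fires(a, l)
    # On the 'horse' content, firing at ("cow", "night") is an error;
    # on the disjunctive content, the very same firing is correct.
    print(a, l, fired,
          content_horse(a, l) == fired,      # matches 'horse'?
          content_disjunct(a, l) == fired)   # matches the disjunction?

The disjunctive content matches the dispositions in every case, which is
exactly why nothing in the dispositions by themselves settles that the
concept refers to horses and that the night-time firing was a mistake.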

I actually think the second difference poses a very similar type of
problem. What makes it the case that we sometimes really do act out of
weakness of will rather than it being the case that our will really
endorsed that apparent exception in this particular case while
presumably endorsing something different the rest of the time?



Re: [singularity] Friendly question...

2007-05-27 Thread John Ku

I would feel relieved if there was miscommunication between us.

I was mostly concerned with the issue of what we *should* care about and
what cares we *should* be acting upon. If you are simply talking about some
cognitive biases we have that you concede ought to be overcome (or that we
ought to try to overcome to whatever extent possible), then great, it sounds
like we don't really have much of a disagreement, except perhaps on some
details.

John Ku

On 5/27/07, Samantha Atkins [EMAIL PROTECTED] wrote:



It was not perhaps so simple as you are portraying it.  There is deep EP
programming behind caring about human beings that puts it partially
beyond conscious choice changeable by new information.  However, our EP
also includes quite a bit of xenophobia against those perceived as not us.

EP xenophobic tendencies that haven't been sufficiently overcome do a
good job of explaining racism.   I think you may be overly focusing on
the roles of conscious thought and individual history.   While conscious
thought and work are required to overcome suboptimal responses and
attitudes, it is important to acknowledge the less conscious and more
ingrained aspects of the problem.
[...]
That part of what I wrote would seem to require the least
clarification.  What are you asking?  I was referring to the occasional
intellectual dissociation as if we were already uploads or otherwise
disembodied or no longer human.  From this false Olympian perspective we
reason about what we should care about.   We think we are being
intellectually and ethically cleaner when we do so.  Yet from a more
humble perspective we are literally sawing away at the branch we are
sitting on.

Actually, at this point in our technological development their
well-being obviously does depend on the continuation of biological
humanity.  Even with not-yet-available uber-tech, their well-being will
depend on some means of preserving the matrix within which such beings
can exist, whatever that matrix may come to be.
[...]
I am not confused at all.  Perhaps you are as to what I was writing
about.  My apologies if I did not communicate with sufficient clarity.
[...]
Genes per se are just mechanism.  Continued existence of humanity is
important regardless of what the mechanism is or becomes.  By
"evolutionary dead end" I meant something that could perhaps be less
confusing to you if I had written "developmental dead end" of this
particular species of intelligent being.




[singularity] Why We are Almost Certainly not in a Simulation

2007-03-01 Thread John Ku

Hi everyone! I just joined this discussion list, which looks great by the
way. I'm a philosopher by trade (mostly working on what we mean by things
like 'reasons', 'ought's and 'values'), but I read a lot of science,
including singularity stuff, in my spare time.

I actually think there is reason to think we are not living in a computer
simulation. From what I've read, inflationary cosmology seems to be very
well supported. (Early exponential expansion of the universe explains things
like observed flatness, homogeneity across distances, rarity of magnetic
monopoles, and scale invariance of primordial density fluctuations.)
Mathematical models of inflation point towards the process being eternal.
There is some energy in space-time itself, which when dense enough causes
expansion of space-time. But since that space-time will also have that
vacuum energy, it fuels the expansion even more, making it an exponential
process. (Total energy is conserved because it is counterbalanced by
gravity.)

The upshot is that you have this multiverse expanding exponentially. Certain
regions of it will, through quantum fluctuations, decay into a lower vacuum
energy state that slows down the expansion and turns that energy into
ordinary matter and energy. Thus, we get a universe like ours. Any
spacetime region that undergoes decay, however, is more than made up for by
the exponential expansion. Every second, there are 10^37 *more* universes
being born than there were before.

Thus, at any given time, the vast majority of the universes that exist are
very young. So, I grant that it is *possible* that we are in a universe in
which some other civilization has evolved enough to run simulations and we
are just living in that simulation. But it will take a *lot* of seconds for
that civilization to evolve. And each second, its universe will be vastly
outnumbered by younger universes. The anthropic principle says to place
equal probability on being any of the observers with the same evidence
set as you. Since so many more of the observers with these observations
are living in the real world rather than in a simulation (given that
young universes predominate), we have most reason to believe we are not
in a simulation.
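
A back-of-the-envelope Python sketch of that arithmetic. It reads the
10^37 figure above as a per-second multiplicative factor and picks an
illustrative age threshold for simulation-running civilizations; both
numbers are assumptions for illustration only.

from math import log10

GROWTH_PER_SECOND = 1e37    # universe count multiplies by this each second
AGE_FOR_SIMULATORS = 1e17   # ~3 billion years, in seconds (illustrative)

# With N(t) ~ GROWTH_PER_SECOND**t, universes at least AGE_FOR_SIMULATORS
# old make up a fraction of roughly GROWTH_PER_SECOND**(-AGE_FOR_SIMULATORS).
log10_fraction = -AGE_FOR_SIMULATORS * log10(GROWTH_PER_SECOND)
print(f"fraction of universes old enough: ~10^{log10_fraction:.3g}")

Even granting each old universe an astronomical number of simulations,
odds on that order swamp it: almost all observers with our evidence set
live in young, unsimulated universes.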

I think this could also explain why we have not seen alien civilizations.
Among all the universes in which there are observers who share our evidence
set about our history, evolution, etc., there will be many more universes in
which we were the first civilization to evolve than in which we came
significantly after some other civilization.

John Ku

Philosophy Graduate Student
University of Michigan
http://www.umich.edu/~jsku

On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:



--- Jef Allbright [EMAIL PROTECTED] wrote:

 On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  --- Jef Allbright [EMAIL PROTECTED] wrote:
 
   On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
  
    What I argue is this: the fact that Occam's Razor holds suggests
    that the universe is a computation.
  
   Matt -
  
   Would you please clarify how/why you think B follows from A in your
   preceding statement?
 
  Hutter's proof requires that the environment have a computable distribution.
  http://www.hutter1.net/ai/aixigentle.htm
 
  So in any universe of this type, Occam's Razor should hold.  If Occam's
  Razor did not hold, then we could conclude that the universe is not
  computable.  The fact that Occam's Razor does hold means we cannot rule
  out the possibility that the universe is simulated.

 Matt -

 I think this answers my question to you, at least I think I see where
 you're coming from.

 I would say that you have justification for saying that interaction
 with the universe demonstrates mathematically modelable regularities
 (in keeping with the principle of parsimony), rather than saying that
 it's a simulation (which involves additional assumptions).

 Do you think you have information to warrant taking it further?

 - Jef

There is no way to know if the universe is real or simulated.  From our
point of view, there is no difference.  If the simulation is realistic
then there is no experiment we could do to make the distinction.  I am
just saying that our universe is consistent with a simulation in that it
appears to be computable.

One disturbing implication is that the simulation might be suddenly
turned off or changed in some radical way you can't anticipate.  You
really don't know anything about the world in which the simulation is
being run.  (The movie The Matrix is based on this idea.)  Maybe the
Singularity has already happened and what you observe as the universe is
part of the resulting computation.

My argument is that if the universe is simulated then these possibilities
are unlikely.  My reasoning is that if we know nothing about this
computation then we should assume a universal Solomonoff prior, i.e. a
universal Turing machine programmed by random coin flips.  This is what
Hutter did to solve the problem of rational agents
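
For concreteness, a minimal Python sketch of the universal prior
mentioned here, under simplifying assumptions of my own (programs are
finite bitstrings fed to some fixed prefix-free universal machine,
which is not itself modeled): generating a program by fair coin flips
gives a program of length L probability 2^-L, so shorter -- simpler --
programs dominate, which is one way to cash out Occam's Razor.

def prior_weight(program_bits):
    # 2^-L: the probability that L fair coin flips produce exactly this
    # program (the Solomonoff weight, on a prefix-free universal machine).
    return 2.0 ** -len(program_bits)

short_program = [0, 1]
long_program = [0, 1] * 10
print(prior_weight(short_program))  # 0.25: simple hypotheses weigh heavily
print(prior_weight(long_program))   # ~9.5e-07: complex ones are discounted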