John,


However, it appears that infinite computation is not feasible, at least
not in the short or medium term. So I think what we do is aim at genuine
intelligence instead. But now, *given* that our goal is genuine
intelligence, I think it is important for many purposes to distinguish
between genuine intelligence and infinite computation or Blockhead-style
"intelligence."


I don't like this kind of distinction between "intelligence" and "genuine
intelligence".  To me it's like saying that planes don't have "genuine
flight" because they lack some property that birds have.  All I care about
with regard to intelligence is how well it works.  If a machine can cure me
of some disease, speed up the development of technology 1000-fold, write
computer programs a billion times better than I can... and post a few
remarkably insightful emails to a few email lists on the side, then to me
it is intelligent.  I really don't care whether the machine is a fancy
quantum computer or has hamsters running around inside of it.

Of course, if you want to build a machine with a lot of intelligence (as I
define it), then approaching the problem from the assumption of infinite
computational power probably won't get you very far.  What you will need to
do is work out how to get as much intelligence as possible out of each unit
of computational resource you have.  Once you have done that, you will want
to apply as much resource as possible in order to get the maximal
intelligence.


I've only taken a very cursory look at the AIXI stuff, so I didn't want to
talk in any detail about it, but from what I can gather at the moment, that
*might* be an example of where this distinction can be relevant. If someone
is claiming to prove some abstract results about intelligence but they are
really just talking about infinite computation or Blockheadish stuff, then
it might be important to keep this distinction in mind and take any claims
made about the nature of genuine intelligence with a grain of salt.


For sure.  Indeed, my recent paper on whether there exists an elegant
theory of prediction tries to address that very problem.  In short, the
paper says that if you want to convert something like Solomonoff induction
or AIXI into a nice computable system... well, you can't.  My own work on
building an intelligent machine takes a neuroscience-inspired approach,
with just a few bits that are in some sense "inspired" by AIXI.
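To make the incomputability point concrete, here is a toy, resource-bounded caricature of Solomonoff-style prediction: a Bayes mixture over a *finite* set of hypothesis "programs", each weighted by 2^(-length).  The real Solomonoff mixture ranges over all programs on a universal machine and is incomputable; truncating it to a finite class like this is exactly the kind of lossy step that can't be avoided.  All names and hypotheses here are illustrative, not from any actual system.

```python
from fractions import Fraction

# Each toy "program" is a deterministic next-bit predictor plus a
# description length in bits: (length, predictor).  These three
# hypotheses (constant-0, constant-1, alternation) are made up for
# the example.
HYPOTHESES = {
    "zeros": (1, lambda history: 0),
    "ones": (1, lambda history: 1),
    "alternate": (2, lambda history: (1 - history[-1]) if history else 0),
}

def predict_next(history):
    """Posterior-weighted probability that the next bit is 1.

    Prior weight is 2^(-length); since the hypotheses are
    deterministic, the posterior simply keeps the prior weight of
    every hypothesis consistent with the observed history.  Assumes
    at least one hypothesis fits the data.
    """
    posterior = {}
    for name, (length, f) in HYPOTHESES.items():
        consistent = all(f(history[:i]) == b for i, b in enumerate(history))
        if consistent:
            posterior[name] = Fraction(1, 2 ** length)
    total = sum(posterior.values())
    p_one = sum(w for name, w in posterior.items()
                if HYPOTHESES[name][1](history) == 1)
    return p_one / total

print(predict_next([0, 1, 0]))  # -> 1 (only alternation fits the data)
```

With history 0,1,0 only the alternation hypothesis survives, so the mixture predicts 1 with certainty.  The universality of the real construction comes precisely from the infinite program class this sketch throws away.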

I think the value of AIXI is that it gives you a relatively simple set of
equations with which to mathematically study the properties of an
ultra-intelligent machine.  In contrast, something like Novamente can't be
expressed in a one-line equation.  This makes it a much more difficult
mathematical object to work with if you want to do theoretical analysis.
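For concreteness, the one-line equation I have in mind is the AIXI
expectimax expression, roughly as it appears in Hutter's work (a sketch
from memory; check the original for the exact notation):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m}
       \left( r_k + \cdots + r_m \right)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m}
       2^{-\ell(q)}
```

Informally: pick the next action a_k that maximizes expected total reward
up to horizon m, where the expectation over observation-reward sequences
is taken under the universal prior, i.e. each program q consistent with
the history contributes weight 2^(-length of q).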

Shane

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983
