On Thu, Oct 16, 2008 at 12:06 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> Among other reasons: Because, in the real world, the scientist with an IQ of
> 200 is **not** a brain in a vat with the inability to learn from the
> external world.
>
> Rather, he is able to run experiments in the external world (which has a far
> higher algorithmic information than him, by the way), which give him **new
> information** about how to go about making the scientist with an IQ of 220.
>
> Limitations on the rate of self-improvement of scientists who are brains in
> vats, are not really that interesting
>
> (And this is separate from the other critique I made, which is that using
> algorithmic information as a proxy for IQ is a very poor choice, given the
> critical importance of runtime complexity in intelligence.  As an aside,
> note there are correlations between human intelligence and speed of neural
> processing!)
>

Brain-in-a-vat self-improvement is also an interesting and worthwhile
endeavor. One problem to tackle, for example, is developing more
efficient optimization algorithms that can find better plans faster
according to the system's goals (and then naturally applying those
algorithms to decision-making during further self-improvement).
Advances in algorithms can bring great gains in efficiency, and
judging by what modern computer science has come up with, such
efficiency rarely requires an algorithm of any significant complexity.
There is plenty of ground to cover in the space of simple things, so
limitations on complexity are pragmatically void.
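As a minimal illustration of that last point (my example, not from the thread): a one-line change to a naive algorithm, memoization, turns exponential-time recursive Fibonacci into a linear-time computation. The gain is enormous, yet the algorithm's descriptive complexity barely increases.

```python
from functools import lru_cache

def fib_naive(n):
    # Exponential time: recomputes the same subproblems over and over.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same algorithm plus a one-line cache: each subproblem is
    # computed once, so the running time becomes linear in n.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

For n around 35, `fib_naive` already takes seconds while `fib_memo` is effectively instant, despite the two definitions being nearly identical in length.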

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com