Shane Legg wrote:
I still haven't given up on a negative proof, I've got a few more
ideas brewing.  I'd also like to encourage anybody else to try
to come up with a negative proof, or for that matter, a positive
one if you think that's more likely.

Merely an offering for the idea brew...

Another disappointing thing about FAI professed as a pragmatic tool
(though not disappointing, and in fact very interesting, as abstract
art) comes from the view that a system could become so rich and so
thoroughly optimized that no resources remain to allocate toward
increasing its motivational amplitude for seeking a possibly better
system, unless the current system is deliberately de-optimized in the
name of some trivial misrepresentation of a vacuous ideal referent.

Ultimately this could lead to stagnation. Not that stagnation is
intrinsically bad, especially if the stagnant state is that good, but
if it is *pragmatically* deemed bad now, then the better target may be
to stop just short of the maximum-likelihood horizon, saving resources
to keep the motivational mechanisms fed. A "maximum likelihood" means
nothing anyway at that edge, verging on infinite ignorance, which is
effectively a property of death.
