I was eager to debunk your supposed debunking of recursive self-improvement,
but when I tried to open that PDF file, it looked like a bunch of gibberish
(random control characters) in my PDF reader (Preview on OS X Leopard).

ben g

On Mon, Oct 13, 2008 at 12:19 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> I updated my AGI proposal from a few days ago.
> http://www.mattmahoney.net/agi2.html
>
> There are two major changes. First, I clarified the routing strategy and
> justified it on an information-theoretic basis. An organization is optimally
> efficient when its members specialize with no duplication of knowledge or
> skills. To achieve this, we use a market economy to trade messages where
> information has negative value. It is mutually beneficial for peers to trade
> messages when the receivers can compress them more tightly than the senders.
> This results in convergence to an optimal mapping of peers to clusters of
> data in semantic space.
>
> The routing strategy is for a peer to use cached messages from its
> neighbors as estimates of their databases. For a message X and each
> neighbor j, it computes the distance D(X,Y_j), where Y_j is a concatenation
> of cached messages from peer j. It then routes X to the neighbor j that
> minimizes D(X,Y_j), because it estimates that j can store X most
> efficiently. Routing stops when j is the peer itself.
>
> The distance function is non-mutual information: D(X,Y) = K(X|Y) + K(Y|X),
> where K(X|Y) is the conditional Kolmogorov complexity of X given Y, the
> length of the shortest program that outputs X given Y as input. When I
> wrote my thesis, I assumed a vector space language model, but I just now
> realized that D is a metric, compatible with Euclidean distance in the
> vector space model. K is not computable, but we can approximate it using
> the output size of a text compressor. The economic model rewards good
> compression algorithms.
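A minimal sketch of the compressor approximation, assuming zlib as the stand-in compressor (the proposal only says "a text compressor"; the estimator K(X|Y) ≈ C(Y+X) − C(Y) is my reading of the standard compression-distance trick, not a quote from the page):

```python
import zlib

def C(data: bytes) -> int:
    # Approximate Kolmogorov complexity K by compressed output size.
    return len(zlib.compress(data, 9))

def D(x: bytes, y: bytes) -> int:
    # Approximate D(X,Y) = K(X|Y) + K(Y|X). Each conditional term,
    # e.g. K(X|Y), is estimated as C(Y + X) - C(Y): the extra bytes
    # needed to encode X once the compressor has already seen Y.
    return (C(y + x) - C(y)) + (C(x + y) - C(x))
```

With this estimator, two near-duplicate messages yield a small D (the second compresses almost for free given the first), while unrelated messages yield a large D, which is what the routing rule needs.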
>
> The second change is a new section (5) addressing long term safety. I think
> I have debunked RSI, proving that the friendly seed AI approach could not
> work even in theory. This leaves an evolutionary improvement model in which
> peers compete for resources in a hostile environment. The other risks I have
> identified are competition from uploads with property rights, intelligent
> worms, and a singularity that redefines humanity, making the question of
> human extinction moot. I don't have good solutions to these risks. I did
> not mention all possible risks, e.g. gray goo.
>
> To answer Mike Tintner's remark: yes, $1 quadrillion is expensive, but I
> think that AGI will pay for itself many times over. It won't address the
> basic instability and unpredictability of speculative investment markets. It
> will probably make matters worse by enabling nonstop automated trading and
> waves of panic selling traveling at the speed of light.
>
> As before, comments are welcome.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome" - Dr. Samuel Johnson


