Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-19 Thread Daniel Yokomizo
On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Steve, what is the purpose of your political litmus test? If you are trying
> to assemble a team of seed-AI programmers with the "correct" ethics, forget
> it. Seed AI is a myth.
> http://www.mattmahoney.net/agi2.html (section 2).

(I'm assuming you meant the section "5.1. Recursive Self Improvement")

Why do you call it a myth? Assuming that an AI (not necessarily
general) capable of software programming is possible, and that such an
AI is itself built out of software, it's entirely plausible that it
would be able to find places for improvement in its own source code:
time or space usage, missed opportunities for concurrency and
parallelism, improved caching, more efficient data structures, etc. In
such a scenario the AI could create a better version of itself; how
many times this process can be repeated depends heavily on the
cognitive capabilities of the AI and on its performance.
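
Just to make the loop I have in mind concrete, here's a toy Python
sketch. Everything in it is a stand-in I made up: propose_rewrite just
picks from a fixed menu of hand-written variants, which is of course
exactly the part a real seed AI would have to do for itself.

    import random
    import time

    def slow_sum(n):
        total = 0
        for i in range(n):
            total += i
        return total

    def fast_sum(n):
        return n * (n - 1) // 2  # closed form, same result

    VARIANTS = [slow_sum, fast_sum]  # behaviourally equivalent "versions"

    def benchmark(program):
        # Toy cost measure: wall-clock time on a fixed workload,
        # standing in for real time/space profiling.
        start = time.perf_counter()
        program(100_000)
        return time.perf_counter() - start

    def propose_rewrite(current):
        # Hypothetical: a real agent would search its own source here.
        return random.choice([v for v in VARIANTS if v is not current])

    def self_improvement_loop(current, budget=5):
        for _ in range(budget):
            candidate = propose_rewrite(current)
            if benchmark(candidate) < benchmark(current):
                current = candidate  # adopt the better version
        return current

    print(self_improvement_loop(slow_sum).__name__)  # almost always fast_sum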

If we move to an AGI, it would be able to come up with better tools
(e.g. compilers, type systems, programming languages), improve its
substrate (e.g. write a better OS, reimplement its performance-critical
parts in an FPGA), design better chips, etc., without even needing to
come up with new theories (i.e. there's enough information already out
there that, if synthesized, can lead to better tools). This would
result in another version of the AGI with better software and hardware:
less space/time usage and more concurrency.
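
Picking a better data structure out of knowledge that's already out
there is the simplest example of this kind of synthesis; there's
nothing AGI-specific about it (toy Python again):

    import time

    def membership_cost(container_type, n=5_000):
        # Time n membership tests against a container of n items.
        data = container_type(range(n))
        start = time.perf_counter()
        for i in range(n):
            _ = (i in data)
        return time.perf_counter() - start

    # "Synthesizing existing information": pick the already-known
    # structure that benchmarks best for this access pattern.
    best = min((list, set), key=membership_cost)
    print(best.__name__)  # set: O(1) lookups beat O(n) scans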

One could argue that this only yields a faster/leaner AGI, and that it
will very quickly get stuck, coming up with only bad ideas. But if it's
truly general it would at least be able to come up with all the
science/tech that human beings are eventually capable of, so if the AGI
can't progress further it means humans can't progress further either.
Conversely, if humans are able to progress, then an AGI would be able
to progress at least as quickly as humans, and probably much faster
(due to its own performance enhancements).

I am really interested in seeing your comments on this line of reasoning.

> -- Matt Mahoney, [EMAIL PROTECTED]

Best regards,
Daniel Yokomizo


Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Daniel Yokomizo
On Wed, Nov 19, 2008 at 1:21 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Wed, 11/19/08, Daniel Yokomizo <[EMAIL PROTECTED]> wrote:
>
>> On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney
>> <[EMAIL PROTECTED]> wrote:
>> > Seed AI is a myth.
>> > http://www.mattmahoney.net/agi2.html (section 2).
>>
>> (I'm assuming you meant the section "5.1.
>> Recursive Self Improvement")
>
> That too, but mainly in the argument for the singularity:
>
> "If humans can produce smarter than human AI, then so can they, and faster"
>
> I am questioning the antecedent, not the consequent.
>
> RSI is not a matter of an agent with IQ of 180 creating an agent with an IQ 
> of 190.

I just want to be clear: you agree that an agent is able to create a
better version of itself, not just in terms of a badly defined measure
such as IQ but also in terms of resource utilization.
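
By resource utilization I mean something directly measurable. A toy
Python illustration of the criterion (not a claim about how a real
agent would profile itself):

    import time
    import tracemalloc

    def measure(fn, *args):
        # Return (seconds, peak bytes) for one call: the kind of
        # concrete "better version" criterion I have in mind.
        tracemalloc.start()
        start = time.perf_counter()
        fn(*args)
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        return elapsed, peak

    def v1(n):  # materializes the whole list in memory
        return sum([i * i for i in range(n)])

    def v2(n):  # generator: same answer, far less memory
        return sum(i * i for i in range(n))

    for fn in (v1, v2):
        secs, peak = measure(fn, 500_000)
        print(fn.__name__, round(secs, 3), "s,", peak // 1024, "KiB")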


> Individual humans can't produce much of anything beyond spears and clubs 
> without the global economy in which we live. To count as self improvement, 
> the global economy has to produce a smarter global economy. This is already 
> happening.


Do you agree with the statement "the global economy in which we live
is a result of the actions of human beings"? How would it be different
for AGIs? Do you disagree that better agents would be able to build an
equivalent global economy much faster than humans did (counting all
the centuries since the last big ice age)?


> My paper on RSI referenced in section 5.1 (and submitted to JAGI) only 
> applies to systems without external input. It would apply to the unlikely 
> scenario of a program that could understand its own source code and rewrite 
> itself until it achieved vast intelligence while being kept in isolation for 
> safety reasons. This scenario often came up on the SL4 list. It was referred 
> to as AI boxing. It was argued that a superhuman AI could easily trick its 
> relatively stupid human guards into releasing it, and there were some 
> experiments where people played the role of the AI and proved just that, even 
> without vastly superior intelligence.
>
> I think that the boxed AI approach has been discredited by now as being 
> impractical to develop for reasons independent of its inherent danger and my 
> proof that it is impossible. All of the serious projects in AI are taking 
> place in open environments, often with data collected from the internet, for 
> simple reasons of expediency. My argument against seed AI is in this type of 
> environment.


I'm asking for your comments on the technical issues regarding seed
AI and RSI, regardless of environment. Is there any technical
impossibility that prevents an AGI from improving its own code in all
possible environments? Also, it's not clear to me in which types of
environment you see problems with RSI (whether it's the boxing that
makes it impossible, an open environment with access to the internet,
both, or neither); could you elaborate further?


> It is extremely expensive to produce a better global economy. The current 
> economy is worth about US$ 1 quadrillion. No small group is going to control 
> any significant part of it.

I want to keep this discussion focused on the claimed technical
impossibility of RSI, so for now I'm going to set aside this side
discussion about the global economy; we can come back to it later.

> -- Matt Mahoney, [EMAIL PROTECTED]

Best regards,
Daniel Yokomizo

