2008/6/23 Vladimir Nesov <[EMAIL PROTECTED]>:
> On Mon, Jun 23, 2008 at 12:50 AM, William Pearson <[EMAIL PROTECTED]> wrote:
>> 2008/6/22 Vladimir Nesov <[EMAIL PROTECTED]>:
>>
>>>
>>> Two questions:
>>> 1) Do you know enough to estimate which scenario is more likely?
>>
>> Well, since intelligence explosions haven't happened previously in our
>> light cone, they can't be a simple physical pattern, so I think the
>> evidence favours non-exploding intelligences being the simpler design.
>
> This message that I'm currently writing hasn't happened previously in
> our light cone. By your argument, that is evidence for it being more
> difficult to write than to recreate life on Earth and human
> intellect, which is clearly false for all practical purposes. You
> should state that argument more carefully in order for it to make
> sense.

If your message were an intelligent entity, then you would have a
point. I'm looking at classes of technology and their natural or
currently existing human-created analogues.

Let me give you an example. Two people claim to be able to give you an
improved TSP solver. One claims to be able to solve all instances in
polynomial time; the other simply has a better algorithm that handles
certain classes of graphs in polynomial time but resorts to
exponential time on random graphs.

Which claim would you consider more likely, and why, given that
neither of them has a detailed proof?
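
To make the contrast concrete, here is a rough, purely illustrative
Python sketch of the second claimant's kind of solver. The choice of
"graphs that are a single cycle" as the easy class is mine, picked
only because it is trivial to detect; it assumes a symmetric cost
matrix. The solver answers exactly and quickly on that restricted
class and falls back to the exponential Held-Karp dynamic program
everywhere else. The first claimant, polynomial on all instances,
would in effect be claiming P = NP.

from math import inf

def tsp(dist):
    """Exact TSP; dist[i][j] is the edge cost, inf if the edge is absent.
    Assumes a symmetric cost matrix and n >= 2 (illustrative only)."""
    n = len(dist)

    # Polynomial special case: if every vertex has exactly two finite-cost
    # neighbours, the graph is a union of simple cycles; if it is a single
    # cycle, that cycle is the only Hamiltonian tour and hence the optimum.
    if all(sum(1 for j in range(n) if j != i and dist[i][j] < inf) == 2
           for i in range(n)):
        visited, prev, cur, cost = {0}, None, 0, 0.0
        for _ in range(n - 1):
            nxt = next(j for j in range(n)
                       if j != prev and j != cur and dist[cur][j] < inf)
            if nxt in visited:
                break                      # several disjoint cycles; fall through
            cost += dist[cur][nxt]
            visited.add(nxt)
            prev, cur = cur, nxt
        else:
            return cost + dist[cur][0]     # close the tour

    # General case: Held-Karp dynamic programming, O(n^2 * 2^n) time.
    best = {(1, 0): 0.0}   # (bitmask of visited vertices, last vertex) -> cost
    for mask in range(1, 1 << n):
        for last in range(n):
            if (mask, last) not in best:
                continue
            for nxt in range(n):
                if mask & (1 << nxt) or dist[last][nxt] == inf:
                    continue
                key = (mask | (1 << nxt), nxt)
                cand = best[(mask, last)] + dist[last][nxt]
                if cand < best.get(key, inf):
                    best[key] = cand
    full = (1 << n) - 1
    return min(best.get((full, last), inf) + dist[last][0]
               for last in range(1, n))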

>
>> So we might find them more easily. I also think I have solid
>> reasoning for why an intelligence explosion is unlikely, though it
>> requires paper length rather than post length. So I think I do, but
>> should I trust my own rationality?
>
> But not too much, especially when the argument is not technical (which
> is clearly the case for questions such as this one).

The question is one of theoretical computer science and should be
decidable as definitively as the halting problem was. I'm leaning
towards something like Russell Wallace's resolution, but there may be
some complications when you have a program that learns from the
environment. I would like to see it done formally at some point.

> If the argument is
> sound, you should be able to convince the seed AI crowd too

Since the concept is their idea, they have to be the ones to define
it; they won't accept any arguments against it otherwise. They haven't
yet formally defined it, or if they have, I haven't seen it.


> I agree, but it works only if you know that the answer is correct, and
> (which you didn't address and which is critical for these issues) you
> won't build a doomsday machine as a result of your efforts, even if
> this particular path turns out to be more feasible.

I don't think a doomsday machine is possible. But considering that I
would be doing my best to make the system incapable of modifying its
own source code *in the fashion that Eliezer wants/is afraid of*
anyway, I am not too worried. See
http://www.sl4.org/archive/0606/15131.html
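
To give a flavour of the sort of separation I mean (a toy Python
illustration only, not the actual architecture discussed in that post;
the class name and opcode set are made up for the example): the
interpreter holds the program in a store that no instruction can write
to, so the running program is free to change its data but has no
handle on its own code.

class SandboxedMachine:
    """Toy interpreter: the instruction store is frozen at load time and no
    opcode below can write to it, so a program can rewrite its data as much
    as it likes but never its own code."""

    def __init__(self, program, data_size=16):
        self._code = tuple(program)    # immutable copy of the program
        self.data = [0] * data_size    # the only state the program can change

    def run(self, max_steps=1000):
        pc = 0
        for _ in range(max_steps):
            if pc >= len(self._code):
                break
            op, a, b = self._code[pc]
            if op == "set":            # data[a] = b
                self.data[a] = b
            elif op == "add":          # data[a] += data[b]
                self.data[a] += self.data[b]
            elif op == "jnz":          # jump to instruction b if data[a] != 0
                if self.data[a] != 0:
                    pc = b
                    continue
            pc += 1                    # note: no opcode indexes into _code
        return self.data

# Example: sum 1..5 into data[0]; the loop can never alter its instructions.
prog = [("set", 1, 5), ("add", 0, 1), ("set", 2, -1), ("add", 1, 2), ("jnz", 1, 1)]
print(SandboxedMachine(prog).run())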

 Will Pearson

