Jim Bromer wrote:


----- Original Message ----
From: Richard Loosemore <[EMAIL PROTECTED]>
Richard Loosemore said:

If you look at his paper carefully, you will see that at every step of
the way he introduces assumptions as if they were obvious facts ... and
in all the cases I have bothered to think through, these all stem from
the fact that he has a particular kind of mechanism in mind (one which
has a goal stack and a utility function).  There are so many of these
assertions pulled out of thin air that I found it gave me a headache
just to read the paper. ...

But this is silly:  where was his examination of the system's various
motives?  Where did he consider the difference between different
implementations of the entire motivational mechanism (my distinction
between GS and MES systems)?  Nowhere.  He just asserts, without
argument, that the system would be obsessed, and that any attempt by us
to put locks on the system would result in "an arms race of measures and
countermeasures."

That is just one example of how he pulls conclusions out of thin air.
-------------------------------------------

Your argument about the difference between a GS and an MES system is a strawman argument. Omohundro never made the argument, nor did he touch on it as far as I can tell. I did not find his paper very interesting either, but you are the one who seems to be pulling conclusions out of thin air.

You can introduce the GS vs MES argument if you want, but you cannot then argue from the implication that everyone has to refer to it or else stand guilty of pulling arguments out of thin air.

His paper "Nature of Self Improving Artificial Intelligence" (September 5, 2007, revised January 21, 2008) provides a lot of reasoning. I don't find the reasoning compelling, but the idea that he is just pulling conclusions out of thin air is bluster.

Taking things in reverse order: that last paper you refer to is not one that I have, and I hope that you do not think I was referring to it in my criticism. Why do you make reference to it, imply that my comments apply to that paper .... and then call my non-existent comments on that paper "bluster"? I am confused: the paper I was critiquing was clearly the "AI drives" paper from the 2008 conference.

When I accused Omohundro of pulling conclusions out of thin air, I went through a careful process: I quoted a passage from his paper, then I analysed it by describing cases of two hypothetical AGI systems, then I showed how those cases falsified his conclusion. I went to a great deal of trouble to back up my claim.

Now you come along and make several claims (that my argument was a "strawman argument", that I am pulling conclusions out of thin air, that I am guilty of bluster, etc.), but you offer no justification for any of them. And you give no reply to the argument I made before: you ignore it as if it did not exist. I gave details. Address them!

HOWEVER, since you have raised these questions, let me try to address your concerns. The argument I gave was not a strawman, because when I said that "Omohundro's arguments assume a Goal Stack architecture", this was just shorthand for the longer, but equivalent, claim:

"Omohundro made many statements about how an AI 'would' behave, but in each of these cases it is very easy to imagine a type of AI that would not do that at all, so his claims are only about a very particular type of AI, not the general case. Now, as it happens, the best way to describe the 'very particular type of AI' that Omohundro had in the back of his mind when he made those statements, would be to say that he assumed a 'Goal Stack' type of AI, and he seemed to have no idea that other types of motivation mechanism would be just as feasible. In fact he did not *need* to know about the distinction between Goal-Stack mechanisms and Motivational-Emotional Systems (and I did NOT criticize him because he failed to refer to the GS-MES distinction) .... he could have realized that all of his 'An AI would do this..." statements were not really valid if he had simply taken the trouble to think through the implications properly. I believe that the reason he did such a sloppy job of thinking through the implications of his statements was because he had a narrow, GS-type of mechanism in mind - but in a sense that is neither here nor there."

It is the SLOPPY REASONING that I was targeting, not the fact that he considered only a GS architecture. I believe that if he had been aware of the distinction between GS and MES architectures, he would have been much less likely to have engaged in such sloppy reasoning, and that is what I said to Kaj Sotala when this thread started.... I simply said that Omohundro's examples of what an AI 'would' do did not work if you considered the case of general MES systems, rather than just GS systems.
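
(For anyone who has not followed the earlier thread, here is a deliberately crude sketch of the kind of architectural difference I mean. It is purely illustrative: the class names, weights and scoring functions are made up for this email, and neither fragment is a proposal for a real design.)

    # Illustrative only -- not a real AGI design.
    # "Goal Stack" (GS) style: behaviour is driven by whatever goal sits
    # on top of an explicit stack, scored by a single utility function.

    class GoalStackAgent:
        def __init__(self, utility_fn):
            self.goals = []               # explicit stack of goals
            self.utility_fn = utility_fn  # one scalar measure of success

        def push_goal(self, goal):
            self.goals.append(goal)

        def choose_action(self, candidate_actions, state):
            top_goal = self.goals[-1]     # only the top goal matters now
            # pick the action that maximizes utility for that one goal
            return max(candidate_actions,
                       key=lambda a: self.utility_fn(state, top_goal, a))


    # "Motivational-Emotional System" (MES) style: no single top goal;
    # behaviour emerges from many weighted motivations acting at once.

    class MESAgent:
        def __init__(self, motivations):
            # motivations: list of (weight, scoring_fn) pairs,
            # e.g. curiosity, empathy, caution about resources, ...
            self.motivations = motivations

        def choose_action(self, candidate_actions, state):
            def blended_score(action):
                return sum(w * score(state, action)
                           for w, score in self.motivations)
            return max(candidate_actions, key=blended_score)

The only point of the contrast is that in the second case there is no single top-of-stack goal for the system to become "obsessed" with, which is exactly the kind of case that Omohundro's statements about what an AI 'would' do never consider.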



Richard Loosemore



