Ben Goertzel wrote:
Hi Richard,

Let me go back to start of this dialogue...

Ben Goertzel wrote:
Loosemore wrote:
> The motivational system of some types of AI (the types you would
> classify as tainted by complexity) can be made so reliable that the
> likelihood of them becoming unfriendly would be similar to the
> likelihood of the molecules of an Ideal Gas suddenly deciding to split
> into two groups and head for opposite ends of their container.

Wow!  This is a verrrry strong hypothesis....  I really doubt this
kind of certainty is possible for any AI with radically increasing
intelligence ... let alone a complex-system-type AI with highly
indeterminate internals...

I don't expect you to have a proof for this assertion, but do you have
an argument at all?

Your subsequent responses have shown that you do have an argument, but
not anything close to a proof.

And, your argument has not convinced me, so far.  Parts of it seem
vague to me, but based on my limited understanding of your argument, I
am far from convinced that AI systems of the type you describe, under
conditions of radically improving intelligence, "can be made so
reliable that the likelihood of them becoming unfriendly would be
similar to the likelihood of the molecules of an Ideal Gas suddenly
deciding to split into two groups and head for opposite ends of their
container."

At this point, my judgment is that carrying on this dialogue further
is not the best expenditure of my time.  Your emails are long and
complex mixtures of vague and precise statements, and it takes a long
time for me to read them and respond to them with even a moderate
level of care.

I remain interested in your ideas and if you write a paper or book on
your ideas I will read it as my schedule permits.  But I will now opt
out of this email thread.

Thanks,
Ben

Ben,

That is fine, of course: I think it better to have a full debate on the issue in a proper forum (publications and responses to them).

I feel a little sad, however, that you simultaneously bow out of the debate AND fire some closing shots, in the form of a new point (the issue of whether or not this is "proof") and some more complaints about the "vague statements" in my emails. Obviously I cannot reply to these now, because you have just left the floor.

However, thanks for the time and effort that you have given it.

*********

Some closing remarks for anyone else who has been following the debate:

I think that I have fulfilled my obligation to give at least an outline of how friendliness could be guaranteed.

Please be clear about the scope of what I have tried to do, though: I cannot yet produce a complete system to show you that the approach works, but I have sketched something that has the potential to be as reliable as I stated in my original claim (quoted above).

In other words, I claim that the approach I described has the potential to be developed into something as stable as the ideal-gas example I cited, and I have argued for this by showing that the structure of the approach shares many features with the reasons why an ideal gas is so predictable. In effect I have said: "The solution lies in this direction; does anyone have specific reasons why it would not work, or why a clear proof (or something as close to a proof as anyone can get) cannot be achieved this way?"
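As an aside for anyone following along (this is my own back-of-envelope illustration, not part of the original exchange): the reason the ideal-gas example sets such a high bar is purely statistical. For N independent molecules, the probability that at a given instant every one of them happens to sit in the same half of the container is (1/2)^N, and for a mole of gas the exponent of that probability is itself of order 10^23. A minimal Python sketch of the arithmetic:

    # Back-of-envelope: the chance that all N molecules of an ideal gas are
    # found in one half of the container at a given instant is (1/2)^N.
    import math

    N = 6.022e23                            # molecules in one mole (Avogadro's number)
    log10_probability = -N * math.log10(2)  # log10 of (1/2)^N
    print(f"P(all in one half) ~ 10^{log10_probability:.3g}")
    # prints: P(all in one half) ~ 10^-1.81e+23  -- effectively zero on any timescale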

I have not seen any such counterarguments, so it seems to me that the approach stands as the only proposal still on the table for achieving the goal of creating a guaranteed friendly AI.

The argument is not vague, but it is technically a little dense (that is, I have had to pack some ideas into shorthand that experienced cognitive science / AI / complex-systems people would recognize), and for that I take full responsibility. However, it is sometimes possible to get ideas across by giving the dense form first and then fielding questions to clarify the points that people find unclear. That is what I hoped would happen.

The proposal is based on an entire approach to AI that is very different from the conventional one. I would actually go much further than my original assertion (that this is the way to build a Friendly AI) and say that this alternative approach is probably the only viable way to build *any* kind of AI, if what you want is a system that can think at a human level of intelligence. [I state this as my position/opinion, not as the opener for a new debate.] I will write these ideas up and show examples as soon as I find the time, but I did want people to know that this is not just a bunch of thoughts I cooked up on the spur of the moment (I have been doing this kind of research, off and on, since c. 1981).

And, for the record, I will not be doing anything reckless with these ideas: I have no intention of building an AI, or of releasing the ideas necessary to allow someone else to do so, unless the friendliness issue has been worked out in excruciating detail. Not only do I not *intend* to do this; I have also thought very carefully about the procedures and research plans needed to ensure that development happens in a safe way. As far as I can see (and this, again, is just personal judgment), I am the only one whose project is based on principles that, from their very roots, are designed to lead to a stable, predictable, controllable and friendly system. I see elements in other people's approaches that look like a recipe for disaster. I also see some people with such an appalling mixture of closed-mindedness, arrogance, aggression and naivete (obviously I am NOT referring to Ben Goertzel here) that I can only shake my head and wonder at their behavior.

The one thing I need now is the time to gradually build (in my own spare time) a set of proof-of-concept programs that will allow investors to find this kind of project believable. That should take, oh, about ten years. If I had a Nobel Prize in beekeeping or telephone-sanitizing, or if I had started a successful lemonade-stand business, it would of course take only ten minutes to convince an investor, given the way investors operate, but, hey ho: ten years it is..... :-(

Enough for now.


Richard Loosemore.





