Hi Richard,
Let me go back to the start of this dialogue...
Ben Goertzel wrote:
Loosemore wrote:
> The motivational system of some types of AI (the types you would
> classify as tainted by complexity) can be made so reliable that the
> likelihood of them becoming unfriendly would be similar to the ...
Ben,
I guess the issue I have with your critique is that you say that I have
given no details, no rigorous argument, just handwaving, etc.
But you are being contradictory: on the one hand you say that the
proposal is vague/underspecified/does not give any arguments, but
then, having said...
Hi,
There is something about the gist of your response that seemed strange
to me, but I think I have put my finger on it: I am proposing a general
*class* of architectures for an AI-with-motivational-system. I am not
saying that this is a specific instance (with all the details nailed
down) of ...