>
>
> So a VPOP is defined to be a safe AGI.  And its purpose is to solve the
> problem of building the first safe AGI...
>


No, the VPOP is supposed to be, in a way, a safe **narrow AI** with a goal
of carrying out a certain kind of extrapolation.

What you are doubting, perhaps, is that it is possible to create a suitably
powerful optimization process using a narrow-AI methodology, without
giving this optimization process a flexible, AGI-style motivational
system...

You may be right; I'm really not sure...

As I've expressed many times before, I consider CEV a fascinating
thought-experiment, but I'm not even sure it's a well-founded concept
(would the extrapolation sequence actually converge? how different would
the results be for different people?), nor that it will ever be
computationally feasible ... and I really doubt it's the way the
Singularity's gonna happen!!
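To make the convergence worry concrete, here is a minimal toy sketch in
Python, purely illustrative and in no way drawn from the actual CEV
proposal: model each person's volition as a vector, model one
"extrapolation" step as a damped blend with an influence-weighted average
of everyone's volitions, and ask whether iterating the step settles to a
fixed point. Every name here (extrapolate, influence, alpha) and the
linear form of the update are hypothetical assumptions, not anything
specified by CEV.

    import numpy as np

    def extrapolate(v, influence, alpha=0.5):
        # One hypothetical "extrapolation" step: blend each person's
        # volition vector with an influence-weighted average of
        # everyone's volitions (a stand-in for "knew more, thought
        # faster, grew up farther together"). alpha is a made-up
        # damping parameter.
        return (1 - alpha) * v + alpha * (influence @ v)

    rng = np.random.default_rng(0)
    n_people, n_dims = 5, 3
    v = rng.normal(size=(n_people, n_dims))            # initial volitions
    influence = rng.random((n_people, n_people))
    influence /= influence.sum(axis=1, keepdims=True)  # row-stochastic weights

    for step in range(1, 201):
        v_next = extrapolate(v, influence)
        delta = float(np.abs(v_next - v).max())
        v = v_next
        if delta < 1e-9:
            print(f"converged after {step} steps")
            break
    else:
        print(f"no convergence in 200 steps (last delta = {delta:.3g})")

In this toy linear case the iteration does converge, since it amounts to
repeatedly applying a row-stochastic matrix; but swap in any nonlinear
update and convergence is no longer guaranteed, which is exactly the
open question.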

-- Ben G
