On Wed, Mar 12, 2008 at 8:54 PM, Charles D Hixson <
[EMAIL PROTECTED]> wrote:

> I think that you need to look into the simulations that have been run
> involving Evolutionarily Stable Strategies.  Friendly covers many
> strategies, including (I think) Dove and Retaliator.  Retaliator is
> almost an ESS, and becomes one if the rest of the population is either
> Hawk or Dove.  In a population of Doves, Probers have a high success
> rate, better than either Hawks or Doves.  If the population is largely
> Doves with an admixture of Hawks, Retaliators do well.  Etc.  (Note that
> each of these Strategies is successful depending on a model with certain
> costs of success and other costs for failure specific to the strategy.)
> Attempts to find a pure strategy that is uniformly successful have so
> far failed.  Mixed strategies, however, can be quite successful, and
> different environments yield different values for the optimal mix.  (The
> model that you are proposing looks almost like Retaliator, and that's a
> pretty good Strategy, but can be shown to be suboptimal against a
> variety of different mixed strategies.  Often even against
> Prober-Retaliator, if the environment contains sufficient Doves, though
> it's inferior if most of the population is simple Retaliators.)
>
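
For concreteness, here is a minimal replicator-dynamics sketch of the
Hawk/Dove/Retaliator game Charles describes (Python/NumPy).  The payoffs
are the textbook values from Maynard Smith and Price (+50 for winning a
contest, -100 for serious injury, -10 for a long display); the starting
mix, step size, and checkpoint schedule are my own illustrative choices,
not anything from his post.

import numpy as np

# PAYOFF[i][j] = average payoff to strategy i in a contest with strategy j.
# Retaliator escalates only when its opponent does: it fights like a Hawk
# against a Hawk, and displays like a Dove against a Dove or a Retaliator.
STRATEGIES = ["Hawk", "Dove", "Retaliator"]
PAYOFF = np.array([
    [-25.0, 50.0, -25.0],   # Hawk       vs (Hawk, Dove, Retaliator)
    [  0.0, 15.0,   0.0],   # Dove
    [-25.0, 15.0,  15.0],   # Retaliator
])

def replicator_step(shares, dt=0.001):
    # One small replicator update: strategies whose expected payoff beats
    # the population average grow in share; the others shrink.
    fitness = PAYOFF @ shares               # expected payoff per strategy
    shares = shares * np.exp(dt * fitness)  # growth form that stays positive
    return shares / shares.sum()            # renormalize to a distribution

# Largely Doves with a small admixture of Hawks and Retaliators.
shares = np.array([0.10, 0.85, 0.05])
for step in range(200001):
    if step % 50000 == 0:
        print(step, dict(zip(STRATEGIES, np.round(shares, 3))))
    shares = replicator_step(shares)

Running it from different starting mixes shows the environment-dependence
directly: which strategies prosper, and where the population settles,
depends on the initial composition, not on any one pure strategy being
uniformly best.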

I believe Mark's point is that the honest commitment to Friendliness as an
explicit goal is an attempt to minimize wasted effort in achieving all other
goals.  Exchanging information about goals with other Friendly agents helps
all parties invest optimally in achieving those goals, in an order of
priority acceptable to the consortium of Friendly agents.  I think one (of
many) problems is that our candidate AGI must not only be capable of
self-reflection when modeling its own goals, but must also be able to model
the goals of other Friendly agents (with respect to each other and to the
goal-model of the collective), and to decide when an UnFriendly behavior is
worth declaring (modeling the consequences and impact on the group of which
it is a member).  That seems much more difficult than a selfish or ignorant
Goal Stack implementation (which we would typically attempt to control via
an imperative Friendly Goal).
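
To make that decision step concrete, here is a purely hypothetical toy in
which an agent weighs a locally profitable but UnFriendly action against
its modeled cost to the consortium.  Every name and number below is my own
invention for illustration, not a proposed design.

from dataclasses import dataclass

@dataclass
class GoalModel:
    # An agent's (necessarily imperfect) model of one peer's priorities:
    # goal name -> how much that peer cares about it, in [0, 1].
    priorities: dict[str, float]

def consortium_cost(action_effects, peer_models):
    # Modeled harm to the group: how far the action sets back each goal,
    # weighted by how much each peer is believed to care about that goal.
    return sum(model.priorities.get(goal, 0.0) * setback
               for model in peer_models
               for goal, setback in action_effects.items())

def worth_declaring(own_gain, action_effects, peer_models, reciprocity=1.0):
    # Declare the UnFriendly action only if the selfish gain outweighs the
    # modeled cost to the collective, scaled by how strongly the group is
    # expected to respond (a Retaliator-like reaction from the consortium).
    return own_gain > reciprocity * consortium_cost(action_effects, peer_models)

# Two peers who care, to different degrees, about a goal the action harms.
peers = [GoalModel({"shared_infrastructure": 0.9}),
         GoalModel({"shared_infrastructure": 0.4, "exploration": 0.6})]
print(worth_declaring(own_gain=0.5,
                      action_effects={"shared_infrastructure": 0.6},
                      peer_models=peers))
# -> False: a gain of 0.5 doesn't cover the modeled group cost of 0.78

Even this toy already needs three of the capabilities above: an estimate of
its own gain, models of its peers' goals, and a model of the collective's
response; a bare Goal Stack with an imperative Friendly Goal has none of
them.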
