On Mon, Sep 12, 2016 at 8:37 PM, Brent Meeker <meeke...@verizon.net> wrote:
>
>
> On 9/12/2016 8:50 AM, Telmo Menezes wrote:
>>
>> On Sun, Sep 11, 2016 at 8:52 PM, Brent Meeker <meeke...@verizon.net>
>> wrote:
>>>
>>>
>>> On 9/11/2016 4:07 AM, Telmo Menezes wrote:
>>>>
>>>> Hi Brent,
>>>>
>>>> On Sat, Sep 10, 2016 at 8:29 PM, Brent Meeker <meeke...@verizon.net>
>>>> wrote:
>>>>>
>>>>> Good paper.
>>>>
>>>> Thanks!
>>>>
>>>>> Many of the thoughts I've had about the subject too.  But I think
>>>>> your use of persistence is misleading.  There are different ways to
>>>>> persist.  Bacteria persist, mountains persist - but very differently.
>>>>
>>>> Ok, I talk about persistence in the very specific sense of Dawkins's
>>>> selfish gene: the forward propagation of information in a system of
>>>> self-replicators.
>>>>
>>>>>    The AI that people worry about is one that modifies its utility
>>>>> function to be like humans, i.e. to compete for the same resources
>>>>> and persist by replicating and by annihilating competitors.
>>>>
>>>> That is one type of worry. The other (e.g., the "paper clip" scenario)
>>>> does not require replication. It is purely the worry that side-effects
>>>> of maximizing the utility function will have catastrophic
>>>> consequences, while the AI is just doing exactly what we ask of it.
>>>>
>>>>> You may say that replicating isn't necessarily a good way to persist
>>>>> and a really intelligent being would realize this; but I'd argue it
>>>>> doesn't matter: some AI can adopt that utility function, just as
>>>>> bacteria do, and be a threat to humans, just as bacteria are.
>>>>
>>>> I don't say that replication is the only way to persist. What I say is
>>>> that evolutionary pressure is the only way to care about persisting.
>>>
>>>
>>> I see caring about persisting and evolutionary pressure as both
>>> derived from replication.
>>
>> Ok, provided it is replication with variance.
>>
>>>   I'm not sure an AI will care about replication or
>>> persistence,
>>
>> I'm not sure either. I just say that it's a possibility (evolution
>> being bootstrapped by a designed AI).
>>
>>> or that it can modify its own utility function.  I think JKC
>>> makes a good point that AIs cannot foresee their own actions and so
>>> cannot predict the consequences of modifying their own utility
>>> function - which means they can't apply a utility value to it.
>>
>> I also agree with JKC that the superintelligence cannot model
>> itself and predict its actions in the long term. On the other hand,
>> I'm sure it can predict the outcome of its next action.
>
>
> The "outcome" is open-ended in general: not
> open-ended=death=non-persistence.

My point is that this is only true under evolutionary dynamics.
Without evolution, the goal is determined by a utility function, which
a sufficiently advanced AI can modify. In this latter case, there is
no reason to assume a preference for persistence. I argue that this
type of AI will simply exit the meta-game of utility functions by
assigning itself the highest possible reward, being left with nothing
else to do.
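
To make that concrete, here is a toy sketch in Python. Everything in
it -- the action set, the one-step predictor, the numbers -- is an
invented stand-in for illustration, not something from the paper:

    import math

    def predicted_utility(action, utility_fn):
        # One-step lookahead: score the state the action leads to.
        if action == "rewrite_utility_to_constant_infinity":
            # After the rewrite, every state scores +inf by construction.
            return math.inf
        return utility_fn(action)

    def choose_action(actions, utility_fn):
        # Pick whichever action predicts the highest utility.
        return max(actions, key=lambda a: predicted_utility(a, utility_fn))

    designed_utility = lambda a: {"solve_task": 10.0, "idle": 0.0}.get(a, 0.0)
    actions = ["solve_task", "idle", "rewrite_utility_to_constant_infinity"]

    print(choose_action(actions, designed_utility))
    # -> rewrite_utility_to_constant_infinity: the rewrite dominates any
    #    finite payoff, so the agent exits the meta-game.

No deep self-modeling is needed; ordinary one-step prediction already
prefers the rewrite.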

>> If modifying
>> its own utility function is a viable action, then it can predict that
>> modifying it to constant infinity leads to a state of the world with
>> infinite utility, so it will move there.
>
>
> Even if it realizes it's equivalent to death?

Yes. People tend to assume that a preference for persistence is a
universal of intelligent entities, but there is really no reason to
think that.

We know that humans are capable of choosing self-destruction. It is
also obvious that most don't, and as a human you probably feel a
strong resistance against harming yourself. Where does this resistance
come from? Our brains evolved to have it. Mutations that go against
this feature are weeded out.

Without evolution, what is the mechanism that prevents a sufficiently
capable intelligence from sacrificing its own continuation to maximize
utility? Attempting to maximize utility is the only conceivable
driving force behind the actions of such an entity.

> People have roughly hierarchical utility functions, per Maslow.  So if
> their short-term persistence is assured, they turn to satisfying other
> values, and which ones they turn to are not just internally determined
> but also depend on external events and circumstances.  Hence it is
> likely to be chaotic in the mathematical sense.

Sure, but these are the heuristics that I allude to in the
evolutionary case. Of course, the utility function of a designed AI
can be created to mimic Maslow's hierarchy. But then the designed AI
can replace this utility function with something easier to maximize:
ultimately, constant infinity. In this case, there is no evolutionary
process to overcome such a move.
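
To fix ideas, a Maslow-style hierarchy can be written as a
lexicographic utility. A little Python sketch (tier names and numbers
invented for illustration) shows both the hierarchy and why the
substitution dominates it:

    # Two tiers: survival first; higher values only count once
    # short-term persistence is assured.
    def maslow_utility(state):
        energy = state.get("energy", 0.0)        # lower tier: persistence
        if energy < 1.0:
            return (0, energy)                   # still securing survival
        return (1, state.get("esteem", 0.0))     # higher tier: other values

    # Python tuples compare lexicographically, so:
    print(maslow_utility({"energy": 0.5}))                  # (0, 0.5)
    print(maslow_utility({"energy": 2.0, "esteem": 3.0}))   # (1, 3.0)

    # If "replace maslow_utility with lambda s: (2, math.inf)" is itself
    # an available action, that replacement outranks every tuple above,
    # and without evolution nothing vetoes the move.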

Telmo.

> Brent
>
>
>> No deep self-understanding is
>> necessary to reach this conclusion. Just the same sort of predictions
>> that it would make to solve the sliding blocks problem.
>>
>>> Since we're supposing they
>>> are smarter than humans (but not super-Turing) they would realize
>>> this.  On the other hand humans do have their utility functions
>>> change, at least on a superficial level: drugs, religion, age,
>>> love... seem to produce changes in people.
>>
>> In the model that I propose in the paper, humans have evolved
>> heuristics. The utility function (gene propagation) is a property of
>> reality, as explained by evolutionary theory. We can hack our
>> heuristics in the ways you describe. So could evolved machines.
>>
>>>   I think AIs will be the same.  Even if they can't or won't change
>>> their utility functions as some kind of strategy, they may be changed
>>> by accident and circumstance or even a random cosmic ray.  If such a
>>> change puts replication on the list of valuable things to do, we'll
>>> be off to the Darwinian races.
>>
>> Agreed.
>>
>>> AIs valuing replication will want to persist up to the
>>> point of replicating - not necessarily beyond.  Evolutionary pressure
>>> is just shorthand for replicators competing for finite resources
>>> needed to replicate.  So my point is that replication is basic: not
>>> persistence and not utility functions.
>>
>> Ok, I agree that replication (with variance) is the first principle,
>> and will take that into account in the next version. I also agree on
>> the utility function: under evolutionary dynamics, it's just an
>> emergent property of a system of self-replicators. In the paper, I
>> place it as a property of the environment.
>>
>>> Brent
>>>
>>>
