I think the different views may arise like this.

1) The agent runs all the time and has an active set of triggers, with actions 
associated with those triggers.   When triggers prove too sensitive or actions 
too consequential, they are changed.  Triggers and actions are being added and 
changed all the time.  I'll call this the interpreter model.  (Both models are 
sketched in code after item 2.)

2) The agent has an offline planning mode in which it compiles the set of 
triggers and actions and periodically puts them into production.   The planning 
mode is reflective and may use an abstracted/specialized language to do the 
planning.   Ethical thinking occurs in planning mode.   I'll call this the 
metaprogramming model.   Its advantage is that the triggers and actions can 
operate at high speed.   Don't think about dancing, dance.  The metaprogrammer 
observes the consequences of reprogramming its reptile brain, but only after a 
period of letting the reptile brain operate in the wild.
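
To make the contrast concrete, here is a minimal sketch of the two models in 
Python.  Everything in it (Rule, InterpreterAgent, MetaprogrammingAgent, the 
planner signature) is a hypothetical illustration of the idea, not a reference 
to any existing system.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Rule:
    trigger: Callable[[object], bool]   # fires on an observed event
    action: Callable[[object], None]    # the consequence of firing

@dataclass
class InterpreterAgent:
    # Model 1: the rule set is live and edited while the agent runs.
    rules: List[Rule] = field(default_factory=list)

    def step(self, event: object) -> None:
        for rule in self.rules:
            if rule.trigger(event):
                rule.action(event)

    def revise(self, old: Rule, new: Rule) -> None:
        # Online editing: a too-sensitive trigger or too-consequential
        # action is swapped out between steps, with no offline phase.
        self.rules[self.rules.index(old)] = new

@dataclass
class MetaprogrammingAgent:
    # Model 2: a frozen rule table runs at full speed ("don't think
    # about dancing, dance"); revision happens only in replan().
    deployed: Tuple[Rule, ...] = ()
    log: List[object] = field(default_factory=list)

    def step(self, event: object) -> None:
        # Fast path: no reflection, no editing, just fire and record.
        for rule in self.deployed:
            if rule.trigger(event):
                rule.action(event)
        self.log.append(event)

    def replan(self, planner: Callable[[List[object]], List[Rule]]) -> None:
        # Slow, reflective path: the planner (where the ethical thinking
        # would live) studies a period of operation "in the wild" and
        # compiles the next rule table before redeploying it.
        self.deployed = tuple(planner(self.log))
        self.log = []

# Hypothetical usage:
#   startle = Rule(trigger=lambda e: e == "loud noise",
#                  action=lambda e: print("flinch"))
#   agent = InterpreterAgent(rules=[startle])
#   agent.step("loud noise")   # prints "flinch"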

Marcus

On 2/22/20, 1:41 PM, "Friam on behalf of uǝlƃ ☣" <friam-boun...@redfish.com on 
behalf of geprope...@gmail.com> wrote:

    
    
    On 2/22/20 7:45 AM, Marcus Daniels wrote:
    > Glen writes:
    > 
    > < By asking for more examples, it seems the original one (Ellison's Trump 
support) isn't meaningful for you? Another example might be learning that your 
organization accepted money from a convicted sex offender like Epstein. These 
are triggers for some people. They'd trigger me, too. >
    > 
    > A reason I can see for avoiding a term like EI is that others might 
not have a binding for it, or there are too many different bindings observed 
for it.   And, specifically, that it is "pompous" to use the term if it is 
expected there is no binding -- a way to bully the conversation in some 
direction, putting the other party at a disadvantage.   But it is hypocritical 
if one turns around and assumes there are shared values and that we should or 
do all have them.   This is arguing in bad faith, because some values are 
assumed to be mandatory and others optional, rather than all things being 
optional. 
    
    Well, a) I didn't assume any shared values. I explicitly stated that such 
things are triggers for *some* people. I didn't say *all* people should be 
triggered by getting money from Epstein. And, given the popular culture at the 
moment, I said I would *advise* Pinker to install a trigger, not that he must 
or even *should*. So, b) if you're accusing me of arguing in bad faith for 
rejecting the need for a sophisticated concept like EI, I think it's a false 
accusation.
    
    Even in my first post, I think I made the explicit comment that it doesn't 
matter whether the Oracle employee likes or dislikes that Ellison supports 
Trump. What matters is that the employee knows that Ellison = Oracle, hence 
Oracle supports Trump. And the question was whether that's a good trigger to 
have, regardless of how you react to the trigger.
    
    So, there are no shared values here, only a rejection of the claim that we 
need sophisticated rhetoric like EI.
    
    -- 
    ☣ uǝlƃ
    

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
