Yes, I think so; except the goals need not be underspecified or contradictory.  
The condition (or action or assertion) made by one of the anticipatory agents 
within the system can be an unambiguous member of the set defined by the 
policy.  The loopiness comes in because that condition is defined in reference 
to the policy that defines it.  Perhaps an example would be "A staff member is 
an employee (not a contractor) iff that staff member has all the properties 
generally thought to belong to employees (as opposed to contractors)."  This is 
a common ploy used by state tax agencies to extract extra taxes from 
corporations.  A bookkeeper in such a company might assert that some staffer 
should not be paid as an employee because they're the only employee who, say, 
telecommutes.
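
To make the loopiness concrete, here's a minimal sketch in Python (all names, 
properties, and data are hypothetical, just for illustration): membership is 
judged against properties extracted from whatever employee set you already 
assume, so the same staffer can come out either way depending on the grounding.

# Circular membership test: "a staffer is an employee iff they have all the
# properties generally shared by employees."  The employee set defines the
# properties, and the properties define the set.  Hypothetical data.

staff = {
    "alice": {"salaried", "on_site", "benefits"},
    "bob":   {"salaried", "on_site", "benefits"},
    "carol": {"salaried", "telecommutes", "benefits"},
}

def employee_properties(employees):
    # Properties "generally thought to belong to employees": here, the
    # intersection of properties over the assumed employee set.
    props = None
    for name in employees:
        props = staff[name] if props is None else props & staff[name]
    return props or set()

def is_employee(name, employees):
    # Loopy: membership is judged against the set that membership defines.
    return employee_properties(employees) <= staff[name]

print(is_employee("carol", set(staff)))        # True: required props are
                                               # {salaried, benefits}
print(is_employee("carol", {"alice", "bob"}))  # False: now "on_site" is
                                               # required, and carol lacks it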

The binding/grounding/meaning in such a circular system can be volatile.  But 
it's not (necessarily) due to being underspecified or vague.  I suppose it's 
technically ambiguity: the grounding is multi-valued.  The underlings, 
including _any_ member of 
the corporation like owners, board members, *EOs, etc., can use their influence 
to knead the corporation-level grounding of the policy in whatever way they 
can.  And it stays this way until the corporation officially files a request 
with the state agency to get a ruling on that particular condition 
(telecommuting).  Once the state makes the ruling, then that property 
(telecommuting) is "hard" bound/grounded one way or the other.
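
A tiny extension of the sketch above shows that hard binding (again purely 
hypothetical): once a ruling pins a property, that property no longer floats 
with whoever happens to be in the comparison set.

# A ruling "hard" binds one property, one way or the other.  Hypothetical.

rulings = {}  # property -> True if having it disqualifies employee status

def is_employee_ruled(name, employees):
    for prop, disqualifies in rulings.items():
        if prop in staff[name]:
            return not disqualifies        # the ruling settles it outright
    return is_employee(name, employees)    # otherwise, still the loopy test

print(is_employee_ruled("carol", {"alice", "bob"}))  # False: still volatile
rulings["telecommutes"] = False  # the state rules that telecommuting is fine
print(is_employee_ruled("carol", {"alice", "bob"}))  # True, regardless of set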


On 07/13/2016 06:58 AM, Marcus Daniels wrote:
> I'm not sure if this is what you are getting at, but would the following 
> scenario also be an instance of this:   An organization has a set of goals 
> and let's say they are underspecified or the goals compete with one another.  
> Now, whenever someone is upset about whatever they might be upset about (or 
> sees some opportunity to schmooze to their superiors), they make an appeal, 
> and, in some circumstances, their superiors make reference to the policies to 
> bring order.    In doing so, they produce something of the form of an inverse 
> problem.   "You all should see this is an instance of a class of policy 
> X."   Let's just say, for the sake of argument, that is not at all clear or 
> is debatable.   Where does that leave the reader?   One probably does not 
> question the superior court, as it were, because it is stated as fact.   
> Instead I think what happens is that the underlings search for generating 
> functions to fit the set of constraints and then socialize the solutions to 
> reduce future risk.   That is, they seek (effective) unification that is 
> deterministic.   1 
> answer, not 0, not more than 1.    But it isn't unification in the logic 
> programming sense, it just looks that way.
> 
> This sort of system leads to each agent making a sort of master equation of 
> the environment and using it to predict risk (and reward) from above.   
> Meaning does not exist until the underlings scurry around filling in the free 
> variables with a self-consistent (and consensus) set of values.   I would 
> claim that in the real world there is usually no shared typing system except 
> in exceptional cases like the U.S. Judicial system.   Mostly it is just the 
> evolution of anticipatory behaviors from high or low fitness.  Mommy and the 
> kids just try to keep Daddy the tyrant from losing his cool, and in doing so 
> evolve an effective control system.
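
Marcus's "exactly one answer" criterion reads like a uniqueness check on a 
constraint search.  A minimal sketch (hypothetical variables and constraints): 
accept a grounding only if it is the sole assignment that satisfies everything.

from itertools import product

# "Effective unification": search for an assignment of the free variables
# that fits the constraints, and socialize it only if it is the unique
# solution: 1 answer, not 0, not more than 1.  Everything hypothetical.

variables = {"x": [0, 1, 2], "y": [0, 1, 2]}
constraints = [
    lambda a: a["x"] + a["y"] == 2,    # what the superiors asserted
    lambda a: a["x"] > a["y"],         # what everyone already believes
]

def unique_grounding(variables, constraints):
    names = list(variables)
    solutions = []
    for values in product(*(variables[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            solutions.append(assignment)
    # Deterministic only if exactly one assignment survives.
    return solutions[0] if len(solutions) == 1 else None

print(unique_grounding(variables, constraints))  # {'x': 2, 'y': 0}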


-- 
☢ glen