Hussein,
I love the example!

I don't think the idea here is to look at anything novel, but to look at a well
known problem from the point of view of complexity / agent based modeling. This
combining the accusation that we do not talk about complexity enough, and the
relatively mundane observation that rule-based social systems get corrupted
fairly reliably. Presumably a well made model would begin to answer your
question: Which parameters at which values would resist such corruption, or
perhaps even reverse it. 

P.S. Being on a mid-sized college campus, the solution to the Dr. Clever
problem is to get people to talk to each other... a lot... beer helps. As I
tell my friends, the soul of the campus is won or lost by the social committee.

Eric

Chair, Faculty Senate Social Committee ;- )


On Sun, May  8, 2011 09:36 PM, "Hussein Abbass" <h.abb...@adfa.edu.au> wrote:
>
>
> 
>
> 
>  
> 
>
>
>>
>
>
>
>Before I propose a model, let me share my feeling of the
>whole exercise.


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>For some reason, I am not sure why do we see this as a
>new problem. I would argue that any political system by definition must have a
>mechanism for hiding information. An autocratic society will have the hidden
information
>centralized in a very small group. A democratic society, by definition, is a
distributed
>hidden information system.


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>Any definition of democracy in a constitution is not a
>functioning definition of democracy, simply it is too idealistic to function.
>Any functioning definition of democracy can’t be constitutionallized, no one
>would have the guts to propose it. Even if someone does, politics understand
>that decisions in an idealistic democratic society rely on frequencies, and
high
>frequency is always controlled by the simple minded people – we have many of
>them, they are the most successful political tool after all! So no one of them
>would agree on a functioning definition of democracy; it is too complicated and
>non-idealistic, Utopia is the ultimate aim for dreamers and the majority are. 


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>If I understood the problem correctly, it boils down in
>my mind to two conditions defining sufficient conditions for obfuscation to
>emerge in a democratic society. Although we really need to define what type of
>democracy we are talking about here, let us - for simplicity - assume that
democracy
>as we all think of it, is one – a big assumption indeed!


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>The two sufficient conditions are localization and
>isolation. I can get lots of inspiration easily from areas such as control of
>virus spread and the communication literature. But let me give you an example
>where we can create perfect obfuscation. Imagine a social system as a social
>network


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>(1)   Localization: Here, localization will simply be
>achieved through a social value. Imagine in one society that confrontation is
>not perceived as a good attitude. Imagine we are modelling Obfuscation in a
>Faculty. We can imagine the consequence of this social value of avoiding
>confrontation; discussions will tend to be done within small groups – maybe
of
>size 2 - most of the time. Issues are solved that way, so large group
>discussions are not needed and confrontation is nicely avoided. Is not this how
>universities are structured anyhow! This simple behaviour will localize
>information. 


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>(2)   Isolation: the trick now is how to get these groups
>to stop communicating to one another. We know from virus spread, idea spread,
>etc, that we can spread the virus very quickly in a network from a single
initially
>infected node. To isolate ideas in a social system, we need a powerful social
>value! Here, let us call it trust or confidentiality or anything similar. The
>objective is to promote everything as important and we all need to trust each
>other in keeping a secret. Confidentiality would stop members of a group to
>discuss the topic with other group members. It is a shield to protect the
>spread even when groups overlap!


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>The previous two conditions will create obfuscation.
>Localization will cause information to be discussed locally, while isolation
>will reduce the probability that information will travel across the network.


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>Ok, so far, we defined two sufficient conditions for
>obfuscation to emerge. Can we take this one step further to create a dictator
>who appears to be a democratic decision maker in a democratic society? 


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>My answer is yes, this is an old piece of news or political
>trick! The previous setup is perfect for that. We just need this dictator to be
>the head of the Faculty. If we wish to have an invisible dictator, the head of
>the Faculty can simply be a useless figure! Obviously, it can be a small group
>of size 2-5 as well, but they need to be fully and directly connected to each
>other and almost connected to everyone else.


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>Let me put this in a simple story. Prof. Clever is the
>dean of Faculty of Idiots. Prof Clever would like to be a dictator in a
>democratic society. He appoints 3 other Professors to form a strategy
>committee. He believes in separating strategy from execution, thanks to all the
>wonderful literature in management on that topic. Prof. Clever cancelled most
>Faculty public meetings and created many committees. These committees seek
>people opinion to have a truly democratic environment. He told the people we
>are a civilized society. We should not confront each other in public. Issues
>can be solved smoothly in a better environment and within a small group. Public
>meetings are now to simply give presentations that no controversial issue is
>discussed; their information content is 0 to anyone attending them. But they
>demonstrate democracy and support the members of the Faculty of Idiots’ right
>for dissemination of information. Prof. Clever promotes good values. Important
>values that Prof. Clever is promoting are trust and confidentiality. In
>meetings, people need to trust each other to facilitate exchange of
>information. But this requires confidentiality; otherwise problems will emerge.
>Obviously, meetings are called by management, members of the meetings are
engineered
>by management, the whole social network is well-engineered such that different
>type of information do not get crossed from one sub-graph to another. The
>faculty of Idiots is the happiest faculty on earth. No public confrontation
>means no fights, a well-engineered civilized society. Small group meetings are
>dominated with Prof. Clever or simply take place to tick a box in a report.
>There is only one person in the Faculty of Idiots who knows everything, Prof.
>Clever. No one else knows more than anyone else to the extent that everyone
>simply knows nothing. But everyone is happy, everyone feels important because
>he/she is trusted and everyone feels they are well-informed of the task they
>are performing! Prof. Clever eliminated competition, no leader can emerge in
>this social system that he does not approve. Prof. Clever is the nice guy that
>everyone loves and respect. He listens, he is socially friendly, and after all
>is indeed Clever!


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>So!!!!! we can get obfuscation to emerge. There are so
>many old political tools to do so; take political propaganda as a powerful one
>among many others! There are many different variations to do it, not just the
>above model.


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>The harder question for me is, how can we undo it if it
>is engineered as above?


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>Why is it hard to break it? Because the two principles
>representing the sufficient conditions for its emergence rely on social values!
>Any attempt to break it, will be met with resistance in a part of the
>population and will be called unethical, if not illegal! It is a robust
>self-regulating strategy.


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>Another hard question is how can we get the social
>network to recognize obfuscation in the previous setup? If they can’t
recognise
>it, they can’t do anything about it!


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>Finally, notice what I defined above as obfuscation and
>framed it as a bad thing is indeed a form of democracy!!!!!!!!!!


>
>


>
>


>
>
>
>


>
> 


>
>


>
>
>
>Cheers


>
>


>
>


>
>
>
>Hussein


>
>


>
>


>
>
>
>
>
> 




>
>
>
>
>
> 




>
>>
>
>>
>
>
>
>From: friam-boun...@redfish.com
>[mailto:friam-boun...@redfish.com] On Behalf Of Vladimyr Burachynsky
>Sent: Monday, 9 May 2011 9:17 AM
>To: 'The Friday Morning Applied Complexity Coffee Group'
>Subject: Re: [FRIAM] Modeling obfuscation (was - Terrorosity and it's Fruits)
>
>




>
>
>
>
>
>
>
>


>
> 


>
>


>
>
>
>Eric and Mohammed,
>
>




>
>
>
>
>
> 




>
>
>
>I don’t think anyone can be Off base at this point in sketching
>out a scenario.  But you might be trying to tackle Goliath in the first
>round!
>
>




>
>
>
>
>
> 




>
>
>
>Firstly I assume human beings are not very bright, They seem to
>use extremely simple rules of self satisfaction, though the emotions might be
>more complicated.
>
>




>
>
>
>It is not widely accepted but dogs can figure things out as
>quickly as humans on occasion and there is no wearisome Narrative. 
>
>




>
>
>
>I look at it from the point of view that agents are simple
> but Stupid . This gave me a headache until I realized that many human
>beings actually do not know why they did something in particular, then and only
>then do they invent the Narrative. They are not actually attempting to deceive
>anyone  but simply wish to convince me that they did something for a Good
>reason. They avoid acknowledging the fact that they did not think.They then
>drop into the socially acceptable lexicon to explain everything. Often I have
>remarked that the act of speaking out loud convinces others as well as most
>importantly  the speaker himself.. So the speaker is lying to himself
>first and then accepts this as his story and probably could pass a lie detector
>test afterwards.
>
>




>
>
>
>
>
> 




>
>
>
>The fact that narratives are spun is a red herring. They did not
>know how they made the decision. That frightened the hell out of me in complex
>engineering projects. I had no way to anticipate human error  of this
>sort. People actually can construct insane scenarios to motivate themselves and
>then totally forget them. This form of misperception is internal to the brain.
>I have watched audiences fall for magicians tricks so completely that I have
>been stunned into disbelief. Yet it is so repeatable. I have seen some
>references to hidden Blind spots in reason explored by neurologists. Generally
>I think Biology was too cheap and lazy to give us a completely functional
>brain. I will be the first to admit to having difficulty with my brain at
>times.
>
>




>
>
>
>
>
> 




>
>
>
>To cope we have a pervasive belief that we are intelligent in
>spite of many serious flaws. As a scientist I consider determining the extent
>of thinking important. I am forced by language to say what I Think for lack of
>an alternative. I repeat the phrase for more than half a century but still do
>not understand what it actually means, nor do the philosophers directly address
>the act. Seems they were more preoccupied by passion in contradiction.
>
>




>
>
>
>
>
> 




>
>
>
>We say Man  is a learning animal which implies it progresses
>somewhat. But I suspect culturally we have found many insidious means to
>prevent learning. Why ? Is it unconscious. Somewhat like the vexed mother fed
>up answering questions about the color of the sky and butterflies and moths.
>Ignorant people are easier to control, suggests history but why?
>
>




>
>
>
>
>
> 




>
>
>
>Let’s build something Stupid (Whimsical and arrogant)rather than
>Intelligent. If we have no idea what one is how can we answer what the opposite
>actually entails. An agent should have more than one choice of action and some
>of those should be utterly insane.
>
>




>
>
>
>
>
> 




>
>
>
>Your institutional Review boards you describe sound  as
>nasty as a Byzantine Palace Intrigue. So let’s start much simpler. For the
>present the agent should not know what is in his best interest , that is only
>to be determined by which emotion dominates at any moment. He can make up
>stories afterwards. I often consider the role of Historians that of making
>reasonable explanations out of stupid events. The conspiracy theorist will hate
>this if it bears out.
>
>




>
>
>
>
>
> 




>
>
>
>As for the gains  first we waste time looking for reasons
>where there are none. Next we can find some way of warning individuals not to
>encourage group think. With near to 7 Billion on this planet maybe it is time
>to alert ourselves to the flaws in our own brains.; Fear,  Gullibility,
>Conformity, short sighted self interest emotional reasoning. In the early
>stages I would limit the agents to simply responding and not have them try to
>become operators of other agents, but that seems to be the goal. Jochen
forwarded
>an interesting article to the group on the ecology of the mind, I have yet to
>study the material but it looks intriguing .
>
>




>
>
>
>
>
> 




>
>
>
>It is an old joke , but the more people in the room the dumber
>it gets.
>
>




>
>
>
>
>
> 




>
>
>
>
>
> 




>
>
>
>Vladimyr Ivan Burachynsky PhD
>
>




>
>
>
>
>
> 




>
>
>
>
>
> 




>
>
>
><#>
>
>




>
>
>
>
>
> 




>
>
>
>
>
> 




>
>
>
>
>
> 




>
>
>
>120-1053 Beaverhill Blvd.
>
>




>
>
>
>Winnipeg,Manitoba, R2J3R2
>
>




>
>
>
>Canada 
>
>




>
>
>
> (204) 2548321 Land
>
>




>
>
>
>(204) 8016064  Cell
>
>




>
>
>
>
>
> 




>
>
>
>
>
> 




>
>
>
>
>
> 




>
>
>
>From: friam-boun...@redfish.com
>[mailto:friam-boun...@redfish.com] On Behalf Of ERIC P. CHARLES
>Sent: May-08-11 4:00 PM
>To: Mohammed El-Beltagy
>Cc: The Friday Morning Applied Complexity Coffee Group
>Subject: Re: [FRIAM] Modeling obfuscation (was - Terrorosity and it's
>Fruits)
>
>




>
>
>
>
>
> 




>
>>
>
>
>
>I think I know
>what you are talking about, but I'm not sure what the best way to model it
>would be, or what we would gain from the modeling exercise. Are you talking
>about something like this?
>
>
>Institutional review boards (IRBs) oversee research that involves human
>participants. This body was formed due to laxness/nastiness on the part of
>biomedical researchers. It was later extended due to (perceived)
>laxness/nastiness on the part of social science researchers. At first, all they
>did was to declare studies ethically alright, or not. Later, they were taken
>over by a number of outside forces, including university's
>"risk-management" departments. Their main function is now to try to
>avoid lawsuits, with secondary functions of promoting arbitrary bureaucratic
>rules and arbitrary whims of committee members. Giving a "pass or
>fail" on ethics is, at best, a tertiary goal.  To make things worse,
>the lawyers and bureaucracy have actually done a lot to undermine the semblance
>of ethical stricture they produce.
>
>
>If this is the type of thing you are talking about, it seems an oddly complex
>thing to try to model, mostly because it is extremely open-ended. You need 1)
>agents with different agendas, 2) the ability to assess and usurp rules created
>by other agents, 3) the ability to force other agents to adopt your rules.
>Note, also, that in this particular case, the corruption is accomplished by
>stacking contradictory rules on top of each other. Thus you need 4) an ability
>to implement contradictory rules, or at least choose between so-called rules.
>The bigger challenge seems to be figuring out a way to accomplish such a model
>without in some essential way, pre-programing the outcome (for example, in the
>way you set agent agendas and allow agents to form new rules).
>
>
>What variables would be manipulated in the modeling space? What is to be
>discovered beyond "agents programmed to be self-interested act in their
>own best interest"? I'm also not sure what this has to do with agents that
>"actively obfuscate the participatory nature of the democratic
>decision." So... maybe I'm completely off base. Can you give a concrete
>example?
>
>
>Eric 
>
>
>On Sun, May 8, 2011 06:56 AM, Mohammed El-Beltagy <<#>> wrote:
>
>




>
>>
>
>
>
>Eric, 
>
>




>
>>
>
>
>
>
>
> 




>
>
>
>>
>
>
>
>Thats an interesting way of looking at
>it. As complex game of information hiding. 
>
>




>
>
>
>>
>
>
>
>
>
> 




>
>
>
>>
>
>
>
>I was thinking along the line of of having
>a schema for rule creation.  The schema here is like a constitution, and
>players can generate new rules based on that schema to promote their self
>interest. For rules to become "laws" they have to be the choice on
>the majority (or subject to some other social choice mechanism), this system
> allows  for group formation and coalition building to get the
>new rules passed into laws. The interesting bit is how the drive for self
>interest amongst some of those groups and their coalitions can give
>rise to rules renders the original schema and/or the social choice
>mechanism ineffective. By "ineffective", I mean that they yield
>results and behavior that run counter to the purpose for which they
>were  originally designed. 
>
>




>
>
>
>>
>
>
>
>
>
> 




>
>
>
>>
>
>
>
>What do you think?
>
>




>
>
>
>>
>
>
>
>
>
> 




>
>
>
>>
>
>
>
>Cheers, 
>
>




>
>
>
>>
>
>
>
>
>
> 




>
>
>
>>
>
>
>
>Mohammed 
>
>




>
>
>
>>
>
>
>
>
>
> 




>
>>
>
>
>
>On Sun, May 8, 2011 at 2:44 AM, ERIC P.
>CHARLES <e...@psu.edu> wrote:
>
>




>
>>
>
>
>
>I can't see that this posted, sorry if it
>is a duplicate --------
>
>




>
>>
>
>
>
>
>
> 




>
>>
>
>
>
>Mohammed,
>
>Being totally unqualified to help you with this problem... it seems interesting
>to me because most models I know of this sort (social systems models) are about
>information acquisition and deployment. That is, the modeled critters try to
>find out stuff, and then they do actions dependent upon what they find. If we
>are modeling active obfuscation, then we would be doing the opposite - we would
>be modeling an information-hiding game. Of course, there is lots of game theory
>work on information hiding in two critter encounters (I'm thinking
>evolutionary-game-theory-looking-at-deception). I haven't seen anything,
>though, looking at distributed information hiding. 
>
>
>The idea that you could create a system full of autonomous agents in which
>information ends up hidden, but no particular individuals have done the hiding,
>is kind of cool. Seems like the type of thing encryption guys could get into
>(or already are into, or have already moved past).
>
>
>Eric
>
>
>On Fri, May 6, 2011 10:05 PM, Mohammed El-Beltagy <moham...@computer.org>
wrote:
>
>




>
>

>
> 


>
I have a question I would like to pose to the group in that regard:
>
>


>

>
> 


>
Can we model/simulate how in a democracy that is inherently open (as
>
>


>
stated in the constitution: for the people, by the people etc..) there
>
>


>
emerges "decision masking  structures" emerge that actively obfuscate
>
>


>
the participatory nature of the democratic decision making for their
>
>


>
ends?
>
>


>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
============================================================
>FRIAM Applied Complexity Group listserv
>Meets Fridays 9a-11:30 at cafe at St. John's College
>lectures, archives, unsubscribe, maps at http://www.friam.org
>

Eric Charles

Professional Student and
Assistant Professor of Psychology
Penn State University
Altoona, PA 16601


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Reply via email to