Eric and Mohammed,

 

I don’t think anyone can be off base at this point in sketching out a scenario. But you might be trying to tackle Goliath in the first round!

 

Firstly, I assume human beings are not very bright. They seem to use extremely simple rules of self-satisfaction, though the emotions might be more complicated.

It is not widely accepted, but dogs can on occasion figure things out as quickly as humans, and there is no wearisome narrative.

I look at it from the point of view that agents are simple but stupid. This gave me a headache until I realized that many human beings actually do not know why they did something in particular; then and only then do they invent the narrative. They are not actually attempting to deceive anyone, but simply wish to convince me that they did something for a good reason. They avoid acknowledging the fact that they did not think. They then drop into the socially acceptable lexicon to explain everything. Often I have remarked that the act of speaking out loud convinces others as well as, most importantly, the speaker himself. So the speaker is lying to himself first, then accepts this as his story, and probably could pass a lie detector test afterwards.

 

The fact that narratives are spun is a red herring. They did not know how they made the decision. That frightened the hell out of me on complex engineering projects: I had no way to anticipate human error of this sort. People can actually construct insane scenarios to motivate themselves and then totally forget them. This form of misperception is internal to the brain. I have watched audiences fall for magicians' tricks so completely that I have been stunned into disbelief. Yet it is so repeatable. I have seen some references to hidden blind spots in reasoning explored by neurologists. Generally, I think biology was too cheap and lazy to give us a completely functional brain. I will be the first to admit to having difficulty with my brain at times.

 

To cope, we have a pervasive belief that we are intelligent in spite of many serious flaws. As a scientist, I consider it important to determine the extent of thinking. I am forced by language to say what I "think" for lack of an alternative. I have repeated the phrase for more than half a century but still do not understand what it actually means, nor do the philosophers directly address the act. It seems they were more preoccupied with passion and contradiction.

 

We say Man is a learning animal, which implies he progresses somewhat. But I suspect culturally we have found many insidious means to prevent learning. Why? Is it unconscious? Somewhat like the vexed mother, fed up with answering questions about the color of the sky and butterflies and moths. Ignorant people are easier to control, history suggests, but why?

 

Let’s build something Stupid (whimsical and arrogant) rather than Intelligent. If we have no idea what one is, how can we say what the opposite actually entails? An agent should have more than one choice of action, and some of those should be utterly insane.

 

The institutional review boards you describe sound as nasty as a Byzantine palace intrigue. So let’s start much simpler. For the present, the agent should not know what is in his best interest; that is determined only by which emotion dominates at any moment. He can make up stories afterwards. I often consider the role of historians to be that of making reasonable explanations out of stupid events. The conspiracy theorists will hate this if it bears out.
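One loose sketch of such an agent, with everything invented for illustration (the emotions, the actions, and the canned stories are all hypothetical): the dominant emotion of the moment picks the action, and the narrative is attached only after the fact.

```python
import random

# Hypothetical emotions and the actions each one favors.
EMOTIONS = {
    "fear": ["flee", "freeze"],
    "greed": ["grab", "hoard"],
    "boredom": ["wander", "do_something_insane"],
}

# Canned after-the-fact stories, one per action.
NARRATIVES = {
    "flee": "I left because I had an appointment.",
    "freeze": "I was carefully weighing my options.",
    "grab": "It was clearly in everyone's interest.",
    "hoard": "I was planning for the future.",
    "wander": "I was conducting a survey.",
    "do_something_insane": "It seemed like a good idea at the time.",
}

class StupidAgent:
    def act(self):
        # No notion of best interest: intensities fluctuate at random,
        # and whichever emotion dominates this moment chooses the action.
        intensity = {e: random.random() for e in EMOTIONS}
        dominant = max(intensity, key=intensity.get)
        action = random.choice(EMOTIONS[dominant])
        # The story is invented only after the act, never before it.
        return action, NARRATIVES[action]

action, story = StupidAgent().act()
print(f"did: {action!r}; explains: {story!r}")
```

The point of the toy is that the narrative table is completely decoupled from the choice mechanism, so an observer (or the agent itself) inspecting only the stories would never recover how the decision was actually made.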

 

As for the gains: first, we stop wasting time looking for reasons where there are none. Next, we can find some way of warning individuals not to encourage groupthink. With nearly 7 billion on this planet, maybe it is time to alert ourselves to the flaws in our own brains: fear, gullibility, conformity, short-sighted self-interest, emotional reasoning. In the early stages I would limit the agents to simply responding and not have them try to become operators of other agents, but that seems to be the goal. Jochen forwarded an interesting article to the group on the ecology of the mind; I have yet to study the material, but it looks intriguing.

 

It is an old joke, but the more people in the room, the dumber it gets.

 

 

Vladimyr Ivan Burachynsky PhD

 

 

vbur...@shaw.ca

 

 

 

120-1053 Beaverhill Blvd.

Winnipeg, Manitoba R2J 3R2

Canada 

(204) 254-8321 Land

(204) 801-6064 Cell

 

 

 

From: friam-boun...@redfish.com [mailto:friam-boun...@redfish.com] On Behalf Of 
ERIC P. CHARLES
Sent: May-08-11 4:00 PM
To: Mohammed El-Beltagy
Cc: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Modeling obfuscation (was - Terrorosity and it's Fruits)

 

I think I know what you are talking about, but I'm not sure what the best way 
to model it would be, or what we would gain from the modeling exercise. Are you 
talking about something like this?

Institutional review boards (IRBs) oversee research that involves human 
participants. This body was formed due to laxness/nastiness on the part of 
biomedical researchers. It was later extended due to (perceived) 
laxness/nastiness on the part of social science researchers. At first, all they did was declare studies ethically alright, or not. Later, they were taken over by a number of outside forces, including universities' "risk-management" departments. Their main function is now to try to avoid lawsuits, with secondary functions of promoting arbitrary bureaucratic rules and the arbitrary whims of committee members. Giving a "pass or fail" on ethics is, at best, a tertiary goal. To make things worse, the lawyers and bureaucracy have actually done a lot to undermine the semblance of ethical stricture they produce.

If this is the type of thing you are talking about, it seems an oddly complex 
thing to try to model, mostly because it is extremely open-ended. You need 1) 
agents with different agendas, 2) the ability to assess and usurp rules created 
by other agents, 3) the ability to force other agents to adopt your rules. 
Note, also, that in this particular case, the corruption is accomplished by 
stacking contradictory rules on top of each other. Thus you need 4) an ability 
to implement contradictory rules, or at least choose between so-called rules. 
The bigger challenge seems to be figuring out a way to accomplish such a model without, in some essential way, pre-programming the outcome (for example, in the way you set agent agendas and allow agents to form new rules).
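If it helps to make the four requirements concrete, here is one very loose sketch; every class, method, and mechanism below is invented for illustration, not a claim about how the model should look.

```python
class Rule:
    def __init__(self, author, effect):
        self.author = author    # which agent created the rule
        self.effect = effect    # toy stand-in for what the rule does (+1/-1)

class Agent:
    def __init__(self, name, agenda):
        self.name = name
        self.agenda = agenda    # 1) a private scoring function over rules
        self.rules = []         # the stack of rules this agent is under

    def assess(self, rule):
        # 2a) assess another agent's rule through one's own agenda
        return self.agenda(rule)

    def usurp(self, rule):
        # 2b) take over an existing rule, rewriting its effect in one's favor
        return Rule(self.name, -rule.effect)

    def impose(self, rule, others):
        # 3) force other agents to stack this rule on top of their own
        for other in others:
            other.rules.append(rule)

    def act(self):
        # 4) the stack may hold contradictory rules; the agent simply
        # follows whichever one its agenda scores highest
        if not self.rules:
            return 0
        return max(self.rules, key=self.assess).effect
```

Point 1 shows up as differing `agenda` functions, and the IRB-style corruption shows up as contradictory rules stacked on the same agent, with each agent's agenda deciding which layer of the stack it actually obeys.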

What variables would be manipulated in the modeling space? What is to be 
discovered beyond "agents programmed to be self-interested act in their own 
best interest"? I'm also not sure what this has to do with agents that 
"actively obfuscate the participatory nature of the democratic decision." So... 
maybe I'm completely off base. Can you give a concrete example?

Eric 

On Sun, May 8, 2011 06:56 AM, Mohammed El-Beltagy <moham...@computer.org> wrote:



Eric, 

 

That's an interesting way of looking at it: as a complex game of information hiding.

 

I was thinking along the lines of having a schema for rule creation. The schema here is like a constitution, and players can generate new rules based on that schema to promote their self-interest. For rules to become "laws" they have to be the choice of the majority (or subject to some other social choice mechanism); this system allows for group formation and coalition building to get the new rules passed into laws. The interesting bit is how the drive for self-interest amongst some of those groups and their coalitions can give rise to rules that render the original schema and/or the social choice mechanism ineffective. By "ineffective", I mean that they yield results and behavior that run counter to the purpose for which they were originally designed.
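A toy version of that schema idea, with all specifics invented for illustration: the "constitution" only says which proposals are admissible, rules become law by majority vote, and a self-interested coalition can then lawfully pass a rule that hollows out the vote itself, which is "ineffective" in exactly the sense above.

```python
# A hypothetical "constitution": an admissible rule is any quota in [0, 1]
# setting how large a majority is needed to pass the NEXT rule.
def constitutional(quota):
    return 0.0 <= quota <= 1.0

class Player:
    def __init__(self, ideal):
        self.ideal = ideal                     # self-interested preference

    def vote(self, quota):
        return abs(quota - self.ideal) < 0.3   # approve rules near one's ideal

def legislate(players, proposal, threshold):
    """A proposal becomes law if it is constitutional and wins the vote."""
    if not constitutional(proposal):
        return False
    ayes = sum(p.vote(proposal) for p in players)
    return ayes / len(players) >= threshold

# A 60% coalition of low-quota players lawfully passes threshold = 0.1,
# after which a 10% faction suffices to legislate: the social choice
# mechanism still runs, but no longer does what it was designed to do.
players = [Player(0.1) for _ in range(6)] + [Player(0.9) for _ in range(4)]
threshold = 0.5
if legislate(players, 0.1, threshold):
    threshold = 0.1
print("new threshold:", threshold)
```

Nothing unconstitutional happens at any step, which is the interesting part: the schema is subverted entirely through moves the schema itself permits.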

 

What do you think?

 

Cheers, 

 

Mohammed 

 

On Sun, May 8, 2011 at 2:44 AM, ERIC P. CHARLES <e...@psu.edu> wrote:

I can't see that this posted, sorry if it is a duplicate --------

 

Mohammed,
Being totally unqualified to help you with this problem... it seems interesting 
to me because most models I know of this sort (social systems models) are about 
information acquisition and deployment. That is, the modeled critters try to 
find out stuff, and then they do actions dependent upon what they find. If we 
are modeling active obfuscation, then we would be doing the opposite - we would 
be modeling an information-hiding game. Of course, there is lots of game theory 
work on information hiding in two critter encounters (I'm thinking 
evolutionary-game-theory-looking-at-deception). I haven't seen anything, 
though, looking at distributed information hiding. 

The idea that you could create a system full of autonomous agents in which 
information ends up hidden, but no particular individuals have done the hiding, 
is kind of cool. Seems like the type of thing encryption guys could get into 
(or already are into, or have already moved past).
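The encryption-flavored version of "information ends up hidden, but no single party holds it" is secret sharing; here is a minimal XOR-based sketch (one standard construction, not a claim about what those groups actually use), where every individual share is uniformly random and only the whole collective can reconstruct the secret.

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list:
    """Split `secret` into n shares; all n are needed to recover it.
    Any n-1 shares together reveal nothing about the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, secret))  # final share closes the XOR sum
    return shares

def recover(shares: list) -> bytes:
    """XOR of all shares cancels the randomness, leaving the secret."""
    return reduce(xor, shares)

shares = split(b"the decision", 5)
assert recover(shares) == b"the decision"
```

The distributed-agents twist in the paragraph above is stronger than this, though: here someone still runs `split`, whereas the interesting case is hiding that emerges with no designated splitter at all.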

Eric

On Fri, May 6, 2011 10:05 PM, Mohammed El-Beltagy <moham...@computer.org> wrote:

 
I have a question I would like to pose to the group in that regard:
 
Can we model/simulate how, in a democracy that is inherently open (as stated in the constitution: for the people, by the people, etc.), there emerge "decision-masking structures" that actively obfuscate the participatory nature of democratic decision making for their own ends?
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
