Glen,
... 
> You're right that agility helps one avoid an avoidable change ... e.g.
> like a big fish snapping at a small fish.  And you're right that such
> avoidable changes are only avoidable if one can sense the change
> coming.
> 
> But, what if the change is totally unavoidable?  I.e. it's going to get
> you regardless of whether or not you sense it?  In such cases, the
> canalizing ability is agility.  Its sibling sensory ability _helps_,
> for sure.  But when the unavoidable change is in full swing and you
> cannot predict the final context that will obtain, then agility is
> the key.

[ph] Trying to understand: if you're surprised and unable to respond to a
change in time (or in kind), so that the circumstances exceeded the range
of your agility, how did agility become the key?

> > The clear evidence, [...], is that we are missing the signals of
> > approaching danger.  We read 'disturbances in the force' (i.e. alien
> > derivatives like diminishing returns) very skillfully in one
> > circumstance and miss them entirely in others.  We constantly walk
> > smack into trouble because we do something that selectively blocks
> > that kind of information.
> 
> I disagree.  We don't continually walk smack into trouble _because_ we
> selectively block a kind of information.  Our trouble is two-fold: 1) we
> are _abstracted_ from the environment and 2) we don't adopt a manifold,
> agnostic, multi-modeling strategy.

[ph] How is that different from saying, in more general terms, that we're
often clueless and get caught flat-footed?

> 
> If we were not abstracted, then we'd be something like hunter-gatherers,
> destroying our local environments willy-nilly, but never deeply enough
> in a single exploitation such that the environment (including other
> humans) can't compensate for our exploitation.

[ph] I was sort of thinking you used _abstracted_ to refer to our use of an
artificial environment in our minds to guide us in navigating the real one.
All our troubles with the environment seem to me to come from the ignorant
design of our abstractions.  I can identify a number of those troubles in
particular, having to do with the inherent design of modeling, but the point
is almost tautological: if our abstractions worked well, we wouldn't be an
endangered species.

> 
> But we _are_ abstracted.
> 
> If we were to adopt a manifold, agnostic, multi-modeling strategy to
> integrating with the environment, then we'd be OK because most of our
> models would fail but our overall suite would find some that work.
> 
> But we do NOT use such a strategy.

[ph] well, and we also don't look where we're going.  That is actually the
first step in any strategy, isn't it?  If we have functional fixations that
redefine what's in front of us as something that should never be looked at,
like calling the switch of ever more land from making food to making fuel
'renewable energy', then we run smack into things without any chance to
engage any strategy, no matter how good that strategy might have been had
we developed one.

> 
> Instead, primarily because of cars, airplanes, the printing press, and
> the internet, we succumb to rhetoric and justification of some
> persuasive people, and we all jump on the same damned bandwagon time
> and time again.  That commitment to a single (or small set of) model(s)
> condemns us to failure, regardless of the particular model(s) to which
> we commit.

[ph] and to correct a lack of models, don't you first need to look around
to see what you might need a model for before making them?

> 
> > [ph] again, agility only helps avoid the catastrophe *before* the
> > catastrophe.  Here you're saying it mainly helps after, and that
> > seems to be incorrect.
> 
> Wrong.  Agility helps keep you in tune with your environment, which
> percolates back up to how embedded you _can_ be, which flows back down
> to how _aware_ you can be.  The more agile you are, the finer your
> sensory abilities will be and vice versa, the more sensitive you are,
> the more agile you will be.

[ph] agility is technically the versatility of your response to a signal,
not the listening for or recognition of the signal.  The listening part is
what's missing in our not knowing what models we'll need, and so having no
response.  A dog sleeping with one ear open is alert and listening, but not
displaying his agility.  He's sleeping.  The chain of events from alertness,
to recognizing a signal, to developing a response, and then acting on it,
is complex.  Maybe you mean that whole chain of different things when you
say 'agility'?
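
A minimal sketch of that chain, just to show where I'd draw the line
between listening and agility (everything here is invented for
illustration, in Python, not anyone's actual model):

    # Illustrative only: a toy decomposition of the alertness ->
    # recognition -> response -> action chain.  "Agility" here is only
    # the breadth of the response repertoire, not the listening.

    def listen(environment):
        """Alertness: the dog's open ear.  No agility involved yet."""
        return environment.get("signal")    # may be None: nothing heard

    def recognize(signal, known_patterns):
        """Recognition fails for signals we have no model for."""
        return signal if signal in known_patterns else None

    def respond(pattern, repertoire):
        """Agility: versatility of the repertoire, once a pattern is seen."""
        return repertoire.get(pattern, "caught flat-footed")

    known_patterns = {"diminishing returns"}
    repertoire = {"diminishing returns": "begin course correction"}

    signal = listen({"signal": "diminishing returns"})
    print(respond(recognize(signal, known_patterns), repertoire))

However versatile respond() is, it never runs usefully unless listen() and
recognize() have already done their work.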

The limits-to-growth signal is thermodynamic diminishing returns on
investment, which started long ago... and then proceeds up an ever steeper
learning curve on the way to system failure, which has now begun.  If people
saw that as something a model was needed for, I could contribute a few of
my own solutions toward the full course correction.  It seems the
intellectual community is not listening for the signal yet, though...
having some functional fixation that says none will ever be needed.
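
A toy illustration of the kind of signal I mean, with invented numbers
rather than real data: total output still grows while the marginal return
per unit of investment keeps shrinking, which is exactly the early signal
that's easy to ignore.

    # Diminishing returns on investment, toy numbers only: a concave
    # output curve (sqrt is just an assumed stand-in) means each extra
    # unit of investment buys less than the one before it.

    def output(investment):
        return investment ** 0.5

    for i in range(1, 6):
        marginal = output(i) - output(i - 1)
        print(f"investment {i}: total {output(i):.2f}, "
              f"marginal {marginal:.2f}")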

> 
> You seem to be trying to linearize the problem and solution and say
> that maximizing awareness, knowledge, and information is always the
> canalizing method for avoiding unanticipated potentially catastrophic
> change.  I'm saying that embeddedness is the general key and when the
> coming change is totally unavoidable, agility is the specific key.

[ph] You leave 'embeddedness' undescribed. How do you achieve it without
paying attention to the things in the world for which you have no model?
How would you know if there are things for which you have no model?

> Further, the less avoidable the change, the more agility matters.  The
> more avoidable the change, the more sensitivity matters.  But they are
> not orthogonal by any stretch of the imagination.  So, I'm not "making
> it complicated", I'm saying it is complex, intertwined.  You can't
> _both_ separate/isolate your abstract self and be agile enough to handle
> unanticipated potentially catastrophic change.
> 
> You _can_ separate/isolate your abstract self and handle unanticipated
> potentially catastrophic change if you use a multi-modeling strategy so
> that any change only kills off a subset of your models.  The problems
> with that are: a) as technology advances, our minds are becoming more
> homogenous, meaning it's increasingly difficult for _us_ to maintain a
> multi-modeling strategy, and b) we really don't have the resources to
> create and maintain lots of huge models.
> 
> Hence, agility is the key.

[ph] Maybe I'm being too practical.  You're not being at all clear about
how you'd get models for things without a way of knowing you need to make
them.  What in your system would signal you that the systems of the world
your models describe were developing new behavior?
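
One toy way to make that question concrete: if every model in the suite
starts mispredicting at once, that shared residual is itself the 'new
behavior' signal.  A minimal sketch in Python, with models and observations
invented for illustration:

    # A toy multi-model suite.  A change "kills off" the models it
    # falsifies; if the *whole* suite fails at once, that is the signal
    # that the world has developed behavior we have no model for.

    models = {
        "linear":    lambda t: 2.0 * t,
        "quadratic": lambda t: 0.5 * t * t,
        "constant":  lambda t: 10.0,
    }

    def assess(observed, t, tolerance=5.0):
        survivors = [name for name, m in models.items()
                     if abs(m(t) - observed) < tolerance]
        if not survivors:
            return "no model fits: new behavior, time to look around"
        return f"still covered by: {survivors}"

    print(assess(observed=8.0, t=4))    # familiar behavior, suite holds
    print(assess(observed=80.0, t=4))   # whole suite fails: novelty signal

Of course, that only works if someone is watching the residuals at all,
which is the looking-where-we're-going part again.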

Phil

> 
> --
> glen e. p. ropella, 971-219-3846, http://tempusdictum.com
> Philosophy is a battle against the bewitchment of our intelligence by
> means of language. -- Ludwig Wittgenstein




============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
