Glen,
It's interesting how you're approaching this whole thing, coming to many of
the same questions along different branches.  That's what independent
learning processes do.  If I mostly divide things into more pieces, like
having 'sensing' before 'acting' as two steps in sequence, it's just
because they do start that way, not because they don't sometimes become
integrated, as with becoming one with your world (embedded in it, as you'd say).
I'm also quite concerned about the very major functional fixations that are
not being acknowledged.  Did you look a bit at either of my new short
papers on how to use our more visible fixations (blinders) to help us see
where the others are, and help reveal the amazing world that's been hidden
from us by them?
Less formal http://www.synapse9.com/drafts/Hidden-Life.pdf 
More theory http://www.synapse9.com/drafts/SciMan-2Draft.pdf 

> 
> phil henshaw wrote:
> > [ph] trying to understand, if you're surprised and unable to respond to
> > change because you were not able to respond in time (or in kind), so the
> > circumstances exceeded the range of your agility, how did the agility
> > become the key?
> 
> Sorry if I'm not being clear.  I'd like to leave out terms like
> "surprised" and "unable to respond".  Think of senses and actions as a
> continuum.  We _always_ sense a coming change.  Sometimes we sense it
> early and sometimes late.  We _always_ have time to act.  Sometimes we
> have enough time for complicated action and sometimes we can only
> instinctively twitch (as we're being eaten).
> 
> _If_ we sense a coming event too late to perform a complicated action,
> _then_ the more agile we are, the more likely we are to survive.

[ph] in sensing we start with a kind of radically impossible learning task:
matching unrecognized data to an infinite variety of possible models with
which to read it.  The hazard in taking unknown information as hints is
that, because people are *highly* suggestible, we almost always 'round up
the usual suspects', as Captain Renault in Casablanca famously demonstrates
for the universal police investigation method.

One of those 'usual suspects' is the functional fixation that we should push
harder when we meet a little resistance.  When a direction of progress runs
into increasing difficulty there are *two* choices: to increase effort, or to
look for a new path.  With paths of limitlessly increasing difficulty, like
the attempt to get ever-increasing returns from steadily diminishing
resources, the evidence looks deceptively like the 'hump' of difficulty that
problem solvers like to define every problem as being.  The error is fatal,
though.  Now that the little fish is essentially wiggling around in the
closing jaws of the big fish, how would you suggest we disgust the big fish
enough for it to spit us out and go off looking elsewhere?  Could we tickle
his gills or something??

We've had the terminal diminishing-returns warning for natural resources for
50 years, and the price explosions are the clear message that we missed the
opportunity to hit the physical limits at less than the highest accelerating
rate we could muster.  The confirming evidence is that the signals to speed
up our learning rate as the curve gets steeper and steeper are all being
ignored.  We're in nearly a complete learning stall and the physical
systems are coming apart.  So, as a problem-solving exercise: how do we
learn fast when the symptom is that the problem we chose is slowing
everyone's learning to a crawl?

> 
> To be concrete, as the big fish snaps at the little fish, if the little
> fish can wiggle fast enough (agility), he may only lose a small section
> of his tail fin.  If the little fish cannot wiggle fast enough, he'll
> end up halfway inside the big fish's mouth.
> 
> Or, more precisely, let's say an event will occur at time T and the
> event is sensed delta_T before the event.  Then as delta_T decreases
> (to zero), agility becomes more important than sensitivity.
> 
> _Yes_ sensitivity clued us in to the event in the first place; but we
> continue to sense our environment all through delta_T.  Likewise, we
> continue to _act_ all through delta_T.  These two abilities are not
> disjoint or decoupled.  They are intertwined and (effectively)
> continuous.
> 
> My point is that _after_ we know the event is coming, as delta_T
> shrinks, agility becomes most important.
> 
> >>> The clear evidence, [...], is that we are missing the signals of
> >>> approaching danger. We read 'disturbances in the force' (i.e.
> >>> alien derivatives like diminishing returns) very skillfully in
> >>> one circumstance and miss them entirely in others. We constantly
> >>> walk smack into trouble because we do something that selectively
> >>> blocks that kind of information.
> >>>
> >> I disagree. We don't continually walk smack into trouble _because_
> >> we selectively block a kind of information. Our trouble is
> >> two-fold: 1) we are _abstracted_ from the environment and 2) we
> >> don't adopt a manifold, agnostic, multi-modeling strategy.
> >
> > [ph] how is that not stated in more general terms in saying we're
> > often clueless and get caught flat footed?
> 
> My statement is more precise.  Specifically, I _disagree_ with the idea
> that this happens because, as you said, we "selectively block that kind
> of information".  We do NOT selectively block that kind of information.
>  Rather we are abstracted (removed from the concrete detail) from the
> environment, which means we cannot be agile.

[ph] I call that not knowing what's happening and so not being aware of the
choices.  I don't see that as having much to do with how vigorously we might
investigate new solutions if we had any idea what to solve.

> 
> That's why we're often clueless and get caught flat footed.  It's not
> because information is _blocked_.  It's because we're not even
> involved.

[ph] What sort of 'dis-involvement' is not 'dis-information'?  I would have
thought you'd consider the physical world to be made of information, or at
least to consider what it could mean to us that way.  I think the world is
also full of physical things that, for many reasons, are better observed for
their own new behavior rather than treated as abstractions.

>  That information is literally _below_ the level of sensitivity of our
> sensors.  It's like not being able to see microscopic objects with our
> naked eye or when we can't see people on the ground from an airplane
> window at 50,000 feet.  We're flying way up here and the info we need
> in order to be agile is way down there.  In order to be agile, we need
> to be embedded, on the ground, where the rubber meets the road, as it
> were.
> 
> I'm really confused as to why this concept isn't clear. [grin]

[ph] That seems out of time sequence as if you could recognize the model
before interpreting the signal.  When two things are 'in synch' perhaps, but
not in general.   Early recognition of new behavior is the thing that is
easiest for functional fixations to block.  It happens because they block
you from asking exploratory questions.

> 
> > [ph] I was sort of thinking you used _abstracted_ to refer to our use
> > of an artificial environment in our minds to guide us in navigating
> > the real one.  All our troubles with the environment come from the
> > ignorant design of our abstractions it seems to me.  I can identify a
> > number in particular having to do with the inherent design of
> > modeling, but I mean, it's tautological.  If our abstraction worked
> > well we wouldn't be an endangered species.
> 
> Sorry.  By "abstracted", I mean: "taken away, removed, remote, ignorant
> of particular or concrete detail".  This is the standard definition of
> the word, I think.  Its antonym is "concrete" or "particular".

[ph] To me "abstracted" means thinking in terms of "abstractions" which my
dictionary has as "a general concept formed by extracting common features
from specific examples".  How it disconnects your thinking is by replacing
references to complex and possibly changing things with simple fixed ones.
That does disconnect, but as a consequence of the cognitive process, not the
environment.  Fixation is a 'do it yourself' thing...

> "Sustainability" is a _great_ example.  The word is often used in a very
> abstract way.  Sustain what?  Sustain forests?  Sustain grasslands?
> Sustain the current panoply of species?  Sustain low human population so
> that we don't swamp the earth with humans?  Sustain our standard of
> living?  Of course, in some sense "sustainability" means all of these
> things and many more.  And that is what makes it abstract.

[ph] right, precisely.  And because people use it as a simple culture-laden
image instead of a learning task about complex relationships of change, the
great majority of users take the term to mean 'sustaining prosperity', as
the easy way to combine 'goodness' with 'goodness', not because it makes
the least bit of sense.

> 
> When you add the concrete detail, it shifts from being "sustainability"
> into something like logistics, epidemiology, ecology, etc.  The term is
> used not to mean a particular effort or method.  The term is used to
> describe a meta-method (or even a strategy) that helps organize
> particular efforts so that the whole outcome has some certain character
> to it.

[ph] that the popular meaning is useless does not mean the term has no
useful meanings, if you actually use it to refer to reducing our impacts
on the earth and making it a good home (or, as you'd say, 'embedding' in the
earth).  Ever since the success of the word 'sustainability', the
acceleration of our increasing impacts has itself been increasing, and
coincidentally the people leading the organizations involved have *all*
been *fiercely* resistant to discussing whether their measures showed the
totals...

> 
> That's an example of what I mean by "abstracted".  I'm not saying it's
> bad.  In fact, abstraction is good and necessary.  But one can not be
> both embedded and abstracted at the same time.

[ph] I'm slowly getting your word usage, and I'd be amazed if anyone else
didn't have the same difficulty.  You seem to use 'embedded' to mean
'aware' and 'abstracted' to mean 'unaware'.  In any case, there are a great
many kinds and levels of 'awareness' not covered by the two ends of a
single polarity.

> 
> > [ph] well, and we also don't look where we're going.  That is
> > actually the first step in any strategy isn't it?
> 
> Not necessarily.  Often a strategy requires a reference point.  In such
> cases, we often take some blind action _first_ and only _then_ can we
> look at the effect of the blind action and refine things so that our
> second action is more on target.  "Reconnaissance" might be a good term
> for that first blind action, except there is an expertise to good
> recon... it's largely an introspective expertise, though.  "What types
> of patterns am I prepared to recognize?"

[ph] Oh sure, if you've gotten the signal that a complex process is
underway, and starting out 'blind' as to how to respond, and needing to
discover what to do, people can be highly creative in inventing unexpected
good solutions as you suggest.   If you're living an abstraction ('embedded'
in the joy of fanning flames like the economists) and denying the signal of
your house being on fire...  then you don't get the advantage of our natural
agility in discovering and inventing solutions.   

> 
> > [ph] and to correct a lack of models do you not first need to look
> > around to see what you might need a model for before making them?
> 
> "To look" is an action, not a passive perception.  The two are
> inextricably coupled.  You can't observe without _taking_ an
> observation.  Chicken or egg?  All data is preceded by a model by which
> the data was taken and all models are preceded by data from which the
> model was inferred.

[ph] well, observation invariably starts with not knowing what you're
looking for.  It's just dropping your pretenses, nothing more.  The object
of the observation may well get abstracted fairly quickly, or take a long
time and keep you in a quandary nearly forever.  People often take a long
time to recognize what they're seeing or hearing, and the reaction may be
to listen ever more intently, or just check back now and then, or, unable
to place it right off, to dismiss it with no concern.  One of the
interesting things about elephant behavior is how they periodically all
'freeze' and stand stock-still for minutes at a time.  Ethologists puzzling
over this apparent group dysfunction discovered they were 'listening' to
low-frequency messages, apparently from other elephants over the horizon.

> 
> That's why I say that agility cannot be decoupled from sensitivity.
> They are both abilities intertwined in what I'm calling embeddedness.

[ph] To de-abstract that would seem to make it more useful.  You mean
agility in problem solving.  Some would call becoming 'embedded' in a
problem being 'engaged' or 'immersed' in it.  It would imply having
previously avoided being blocked by your abstractions.  I see that as a
state in which you have access to both the situation's complexities and
simplicities at the same time.  That kind of high-level involvement with a
real problem and all its variables (the period of peak creative intensity)
is the culmination of meeting the problem, not the beginning.  The lead-up
to that, when defining the problem and assigning resources to it, is where
the problem-solving research suggests all the big mistakes are made...  The
intense period of creative 'flow' is where the full realization of the
solution comes about.  I expect creative programming tasks are somewhat
similar to building design tasks in having that intense creative peak moment
just before the deadline, with the quality of early decisions being the real
determinant of success or failure.

> 
> >>> [ph] again, agility only helps avoid the catastrophe *before* the
> >>> catastrophe.  Here you're saying it mainly helps after, and that
> >>> seems to be incorrect.
> >>>
> >> Wrong.  Agility helps keep you in tune with your environment, which
> >> percolates back up to how embedded you _can_ be, which flows back
> >> down to how _aware_ you can be.  The more agile you are, the finer
> >> your sensory abilities will be and vice versa, the more sensitive
> >> you are, the more agile you will be.
> >
> > [ph] agility is technically the versatility of your response to a
> > signal, not the listening for or recognition of the signal.
> 
> You cannot decouple, isolate, linearize, simplify them like this.  Or,
> I suppose you _can_... [grin] ... but you'd be _abstracting_ out the
> concrete reality.
> 
> > Maybe you mean to have that whole chain of different things as
> > 'agility'?
> 
> No.  You and I agree on the definition of "agility".  What we disagree
> on is whether or not agility can be separated from sensitivity.  I
> claim it cannot.  They are part and parcel of each other.

[ph] listening requires no plan or problem or anything.  It is the beginning
of raising the question of whether one might look for an explanation of
some unclear potential signal, then sorting through some 'usual suspects'
of models to compare against the possible signal, and deciding whether to
drop it there if you don't see anything suspicious.  I think the term
'agility' applied to problem solving identifies the tail end of the process,
and leaves out the major error-creation period of problem sensing,
identification and resource allocation.

> 
> _However_, as delta_T shrinks, agility becomes canalizing.  Acting
> without sensing or thinking is the key to surviving when delta_T is
> small.  This is why we practice, practice, practice in things like
> sports and music.  The idea is to push these actions down into our
> lizard brain so that we can do them immediately without thinking (but
> not without sensing, of course, _never_ without sensing because ...
> wait for it ... sensing and acting are tightly coupled).

[ph] well, learning the high art of making things 'second nature', to become
'adept' and learn to *act without thinking* in a highly successful way also
has a flip side.   That's learning to *think without acting*, becoming adept
in true unbiased observation, to open the full door of awareness.

> 
> > The limits to growth signal is thermodynamic diminishing returns on
> > investment which started long ago... and then it proceeds to an ever
> > steeper learning curve on the way to system failure, which has now
> > begun.  If people saw that as something a model was needed for I
> > could contribute a few of my solutions to begin the full course
> > correction version.  It seems the intellectual community is not
> > listening for the signal yet though... having some functional
> > fixation that says none will ever be needed.
> 
> You rightly identify functional fixation as a problem.  But I maintain
> that it's a _symptom_ of abstraction.  To break the fixation, go dig in
> the dirt, put your feet on the ground, embed yourself in the system,
> and your fixations will dissipate and new ones will form and dissipate
> in tight correlation with the changing context.

[ph] that seems to acknowledge the problem, but is said as if there's a way
to see your blind spots.  The evidence is that the problem is completely
rampant with everyone.   Every self-consistent representation of anything
necessarily contains the fault, as I hope I effectively describe in those
two short essays.

> 
> > [ph] You leave 'embeddedness' undescribed. How do you achieve it
> > without paying attention to the things in the world for which you
> > have no model?  How would you know if there are things for which you
> > have no model?
> 
> [sigh]  "To embed" means "To cause to be an integral part of a
> surrounding whole".  "Embedded" means "the state of being an integral
> part of a surrounding whole."  "Embeddedness" means "the property or
> characteristic of being, or the degree to which something is,
> embedded".

[ph] I use the term 'engage' or 'engaged' for that as the systems you're
connecting with have their own learning processes and so engaging with them
is importantly coordinating your learning with theirs.  Words come to mean
how they're used, of course.

> 
> If you are not embedded in some system and you want to embed yourself,
> then you simply begin poking and peeking at that system.  And you
> _continue_ to (and continually) poke and peek at the system.  You poke
> and peek wherever and whenever you can for as long as you can.

[ph] there are observation methods designed to identify the multiple
independent systems in an environment.  Yes, poking and peeking, what I
call search and explore, finding things that raise questions, is an
important part of the learning process.
> 
> One consequence to being embedded is that you can no longer "see the
> forest" because you're too busy poking and peeking at the trees.  I.e.
> you are no longer abstracted.  You become part of the forest.... just
> another one of the many animals running around poking and peeking at
> the other stuff in the forest.

[ph] losing the blocks of your own abstractions and the functional deficits
they give you seems quite endless.  Our whole culture developed in a
most negligent way with respect to the many independent kinds of learning
systems in which we are physically embedded but profoundly mentally
detached.  It seriously looks like we are truly blowing our chance to have
a stable home on earth, you know.  If we crash this civilization there are
no cheap resources with which to start up another one...

> 
> And that means that you don't build a model of the _forest_ (or if you
> do, you shelve it for later modification after you're finished poking
> and peeking at the trees).  If you want to build an accurate model of
> the forest, then you slowly (regimented) abstract yourself out.  Go
> from poking and peeking at the trees to poking and peeking at copses
> or canopies, then perhaps to species of tree, then perhaps to the
> whole forest.

[ph] preventing models from becoming blinders can be done two ways.  You
can put them aside, but then people become insecure and keep asking if they
can have their model back.  I prefer to make them mentally transparent, and
look *through* them, so it's like having night vision that picks out all the
living things in view that have original behavior, the things that are alien
to the model...  That way models become headlights instead of blind spots.

> 
> When you're finally fully abstracted away from the concrete details of
> the forest, you can assemble your model of the forest.
> 
> > [ph] Maybe I'm being too practical.  You're not being at all clear how
> > you'd get models for things without a way of knowing you need to make
> > them.  What in your system would signal you that the systems of the
> > world your models describe were developing new behavior?
> 
> Sorry if I'm not being clear.  I just assumed this point was common
> sense and fairly clear already.  I think I first learned it when
> learning to ride a bicycle.  You act and sense _simultaneously_, not
> separately.  Control is real-time.

[ph] yes, *once* something becomes second nature.  That's a process, one
that some do well for some things and some for other things.  One of the
things hardly anyone has identified as a skill that could be made second
nature is finding the life around them.  That's importantly because our
culture has a functional fixity of looking for fixity, i.e. representing
things with models to control them instead of using models to shine a
light on the life...
> 
> The only way you're going to get a signal that you need a new model is
> if you're embedded in some system that is evolving in a way that
> discomforts (or stimulates) you.  And embedding means both sensitivity
> and agility.  If delta_T is large, sensitivity is key.  If delta_T is
> small, agility is key.

[ph] one of the things science should look into is the curious phenomenon
that every experiment misbehaves.  It's a wide open field as far as I can
tell.  

Thanks for pushing my thinking and yours, but we should shorten a bit.

Phil
> 
> --
> glen e. p. ropella, 971-219-3846, http://tempusdictum.com
> Communism doesn't work because people like to own stuff. -- Frank Zappa
> 
> 
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org
