glen e. p. ropella wrote:
> This abstraction away from the fully embedded _human_ to idealistic
> "skill sets" is the problem.  It's what leads us to hire "experts" and
> then remove them from their proper context and place them in positions
> where they do unimaginable and unforeseen harm (or good).
If there is no meaningful way to talk about skill sets, then there isn't any meaningful way to talk about proper context. Proper context is just a refinement of a skill set, perhaps down to even 1 or 0 individuals. (The cookie and the cookie cutter.) If that number is 0, we might as well start with the next closest apparent person (the one with the ill-defined `skill set').

A "fully embedded human" sounds like it might be important. But is it? I'd have the most optimism for a person with a track record of solving problems similar to the one that needs to be solved. That would suggest to me they've been *able* to become embedded. I'm not denying there are situations where having detailed practical, historical, and psychological context is important for making a productive contribution (e.g. some kinds of diplomacy, or human-resources problems), but when I hear that requirement I'm immediately suspicious of organizational dysfunction.

> That's true.  However, you seem to be implying that DirectTV is a good
> thing.  I agree that unforeseen consequences _can_ be good things.  But
> I don't think they are always good.  There are just as many bad
> unforeseen things that come from big government programs like the
> space shuttle as there are good ones.
Yes, I would rather live in a world of unforeseen consequences driven by (universal) scientific curiosity than one driven only by local needs. Out on the farms, the lowest common denominator can get mighty low.
>> The most real stuff there is comes from sustained development of
>> theory and technology, and that often takes real money, beyond what
>> local communities can fund.

> No.  The most real stuff comes from real action... embedded action in
> a context.  Theory (and all inference, thought, etc.) _can_ guide
> action to create more good than bad, in my opinion.  But ultimately,
> unless and until we have some relatively objective way to measure good
> and bad (ultimately a religious or moral judgement), that's all a wash
> and there's no evidence that theory guides action to good or bad
> outcomes.
I can certainly see that conservative governmental aggregation policies could lead to a more *stable* world, but I can't say that I am particularly interested in optimizing for that. Also, I said `real', as in a sufficiently good model of the world that, say, an iPod plays music, DirectTV puts pictures on the screen, or the JDAM kills the terrorist. Models of control systems that interest me less are those concerned with advancing stable social configurations, especially the ones that make claims about `good' and `bad' -- those seem to usually have the opposite outcome and destabilize.

Marcus

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org