I really like your framing of the environment (part built, part natural) as a 
kind of complicated manifold, maybe like an irregular honeycomb or the close-up
surface of a sponge, where the surface is the constraint boundary -- the 
problem statement(s) -- and the pocks and holes are the areas we specific 
intelligences wander into, "solving" problems framed by those holes.  
Presumably, some of the solutions then partly fill those holes, making the 
surface slightly more complex.  Of course "foam" is a more accurate word since 
the problem and solution spaces are high dimensional and perhaps disconnected 
(which speaks a bit to Marcus' point about us all being alien, separated at 
least by being in different holes of the foam).

However, I completely reject the (perhaps mis-inferred) idea that welfare and 
social justice are somehow force-fitting people into the pockets of the foam.  
Things like welfare and social justice are merely part of the foam, just as 
much a "natural" part of it as, say, artificial island nation states in the 
Pacific and/or the system that sets up urban coyotes as a top predator.  Those 
programs may seem like artificial bandaids.  But if they are, then *every* 
part of the built environment, including roads and termite mounds, is *also* 
an artificial bandaid.

On 3/5/19 5:42 PM, Steven A Smith wrote:
> Glen -
> 
> What a great (continued) riff on the (general) topic, in spite of the
> thread wandering more than a little (no kinks but far from straight and
> smooth).
> 
> I would like to contrast "learning" with "problem solving" as I think
> the latter is the key point of what might allow "general intelligence"
> (if such exists, as you say) to distinguish itself.  Many may disagree,
> but I find the essence of "problem solving" at its best to be the art
> and science of "asking the right question".  Once that has been
> achieved, the "answer" becomes self-evident and, as Feynman liked to say,
> "QED!" ("quite easily done").  Contemporary machine learning seems to
> confront the definitions of both "self-evident" and "quite easily done".
> 
> The 1970s computer proof of the 4-color problem is a good (on that
> boundary) example.  Perhaps we could say that the program written to do
> the search of the axiom/logic space is a prime (if obscure) example of
> "asking the right question" and (though it is a bit of a stretch) the
> halting/solution of the program represents the "self-evident" answer
> (nQED?).  At the very least, this is how I understand the idea of an
> "elegant" solution (though the complexity of the 4-color problem-solution
> would seem to be a far stretch from what one would call "elegant").
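>
> Just to give the flavor of that halting-as-answer idea, here is a toy
> backtracking 4-colorer in Python.  To be clear, this is nothing like the
> actual Appel-Haken reducibility check; the little wheel graph below is a
> made-up placeholder:
>
>     # Toy backtracking search: assign one of 4 colors to each node so
>     # that no two adjacent nodes match; halting with a coloring is the
>     # "self-evident" answer.
>     def four_color(adj, colors, node=0):
>         if node == len(adj):
>             return colors
>         for c in range(4):
>             if all(colors.get(nbr) != c for nbr in adj[node]):
>                 result = four_color(adj, {**colors, node: c}, node + 1)
>                 if result is not None:
>                     return result
>         return None  # no 4-coloring found
>
>     # A small wheel graph: node 0 adjacent to all, nodes 1-4 form a cycle.
>     adj = {0: [1, 2, 3, 4], 1: [0, 2, 4], 2: [0, 1, 3],
>            3: [0, 2, 4], 4: [0, 1, 3]}
>     print(four_color(adj, {}))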
> 
> Contemporary machine (deep?) learning techniques (even those already
> emerging by the 1970s and 80s, such as evolutionary algorithms) seem to
> demonstrate that a
> suitably "framed" question is as good as a well "stated" question with
> the right amount/type of computation.  EA, GA, Neural Nets, etc. are
> all "meta-heuristics".
> 
> I am not sure I can call applications of these techniques, even in their
> best form, "general intelligence" but I think I would be tempted to call
> them "more general" intelligence.   I would *also* characterize a LOT of
> human problem-solving as NO MORE general, and the problem of "the
> expert" seems to frame that even more strongly... it often appears that
> an "expert" is distinguished from others with familiarity with a topic
> by *at least* the very same kind of "supervised learning" that advanced
> algorithms are capable of.   Some experts seem to be very narrow, and
> ultimately not more than a very well populated/trained associative
> memory, while others seem very general and are *also* capable of
> reframing complex and/or poorly formed questions into ones appropriate
> and well-formed enough for the answer to emerge (with or without
> significant computation in between) as "self-evident".
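>
> To caricature that "well populated associative memory" brand of
> expertise: a nearest-neighbor lookup is nearly all it takes.  A minimal
> Python sketch (the stored cases are made-up placeholders):
>
>     # An "expert" as pure associative memory: memorized (situation,
>     # answer) pairs, recalled by nearest neighbor -- no reframing.
>     CASES = {
>         (0.9, 0.1): "diagnosis A",  # hypothetical prior "experience"
>         (0.2, 0.8): "diagnosis B",
>         (0.5, 0.5): "inconclusive",
>     }
>
>     def dist2(p, q):
>         # Squared Euclidean distance between two feature vectors.
>         return sum((a - b) ** 2 for a, b in zip(p, q))
>
>     def expert(situation):
>         # Recall the most similar stored case; no insight involved.
>         nearest = min(CASES, key=lambda case: dist2(case, situation))
>         return CASES[nearest]
>
>     print(expert((0.8, 0.2)))  # -> "diagnosis A"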
> 
> There are plenty of folks with more practical and theoretical knowledge
> of these techniques than I have here, but I felt it was worth trying to
> characterize the question this way.  
> 
> Another important way of looking at the question of what can be
> automated might be an extension/parallel to the point of "if you have a
> hammer, then everything looks like a nail".   It seems that our
> socio-political-economic milieu is evolving to "meet the problem of
> being human" halfway, by providing a sufficiently complex set
> (spectrum?) of choices of "how to live" to satisfy (most) everyone. 
> This does not mean that our system entirely meets the needs of humanity,
> but rather that it does so at a granularity/structure such that many (if
> not most) people can fit themselves into one of its many
> compartments/slots in a matrix of solutions.
> 
> Social Justice and Welfare systems exist to try to help people fit into
> these slots as well as presumably influencing the cultural and legal
> norms that establish and maintain those slots.   The emergence of ideas
> such as Neurodiversity and this-n-that-spectrum diagnoses seem to help
> deal with the outliers and those falling between the cracks, but this is,
> once again, an example (I think) of force-fitting the real phenomenon
> (individuals in their arbitrary complexity) to the model
> (socio-political-economic-??? models).

