Re: [FRIAM] The epiphenomenality relation

2021-12-01 Thread Eric Charles
Ah, I've been looking for something in this to latch onto!
Glen -> "The word "epiphenomenon" is loaded with expectation/intention. It
works quite well in artificial systems where we can simply assume it was
designed for a purpose. But in "natural" systems (like the hyena case), if
we use that concept, we've imputed a *model* onto the system."

Me -> We've imputed in all cases. Certainly we can assume artificial
systems were designed for a purpose, but we still don't know what that
purpose is without imputing a model onto that system. And, in both cases,
we could proceed to experiment with the system, in order to test the
predictions of the imputed model and increase our confidence that we have
imputed correctly. The ability to do these things does not distinguish
between the two types of system. There are long and respected scientific
traditions using experimental methods to gain confidence in our
understanding of why certain systems were favored by natural selection,
i.e., to determine the manner in which they help the organism better fit
its environment.

Glen continues -> I can't take that further step without a preliminary
understanding that "wild type" systems don't exhibit epiphenomena at all.
They can't, by definition. If some effect *looks* like an epiphenomenon to
you, it's because *you* imputed your model onto it. It's a clear cut case
of reification.

Me -> Well, it might be reification in some sense, but that term usually
implies inaccuracy, which we cannot know in this case without
experimentation. Even with a system we designed ourselves, where we might
have a lot of insight into why we designed it the way we did, we certainly
don't have perfect knowledge. All we have there is a model of our own
behavior to impute from. Once again, this doesn't clearly differentiate the
two situations. In all of these situations it is a mistake to uncritically
reify our initial intuitions about the system's purpose.




On Mon, Nov 29, 2021 at 12:05 PM uǝlƃ ☤>$  wrote:

> Yes, that's the point. Thanks for stating it in yet another way.
>
> The word "epiphenomenon" is loaded with expectation/intention. It works
> quite well in artificial systems where we can simply assume it was designed
> for a purpose. But in "natural" systems (like the hyena case), if we use
> that concept, we've imputed a *model* onto the system.
>
> I would go even further (encroaching on Marcus' example) and argue that
> even if someone *else* designed a system, you cannot reverse engineer that
> designer's intention from the system they built. The agnostic approach is
> to treat every system you did not build yourself, with your own hands, as a
> naturally occurring system. (This is the essence of hacking, including
> benign forms like circuit bending.) I would ... I want to ... but I can't
> take that further step without a preliminary understanding that "wild type"
> systems don't exhibit epiphenomena at all. They can't, by definition. If
> some effect *looks* like an epiphenomenon to you, it's because *you*
> imputed your model onto it. It's a clear cut case of reification.
>
>
> On 11/29/21 8:49 AM, Steve Smith wrote:
> > glen wrote:
> >> ... Purposefully designed systems have bugs (i.e. epiphenomena,
> unintended, side-, additional, secondary, effects). Biological evolution
> does not. There is no bug-feature distinction there.
> >
> > In trying to normalize your terms/conceptions to my own, am I right that
> you are implying that intentionality is required for epiphenomena (reduces
> to tautology if "unintended" is key to "epi")?
> >
> > This leads us back to the teleological debate I suppose.   The common
> (vulgar?) "evolution" talk is laced with teleological implications...  but
> I think what Glen is saying here is that outside the domain of
> human/sentient will/intentionality (which he might also call an illusion),
> everything simply *is what it is*, so anything *we* might identify as
> epiphenomena is simply a natural consequence *we* failed to predict and/or
> which does not fit *our* intention/expectation.
> >
> > We watch a rock balanced at the edge of a cliff begin to shift after a
> rain and before our very eyes, we see it tumble off the cliff edge and
> roll/slide/skid toward the bottom of the gradient but being humans, with
> intentions and preferences and ideas, *we* notice there is a human made
> structure (say a cabin) at the bottom of the cliff and we begin to take
> odds on how likely that rock is to slip/slide/roll into the cabin.   *we*
> give that event meaning that it does not have outside of our
> mind/system-of-values.   The rock doesn't care that it came to final rest
(or not) because the cabin structure in its (final) path was robust enough
> to absorb/reflect the remaining kinetic energy in the rock-system and the
> cabin doesn't care either!   We (because we are in the cabin, because we
> built the cabin, because we are paying a mortgage to the bank on the cabin,
> because we intend to

Re: [FRIAM] The epiphenomenality relation

2021-12-01 Thread uǝlƃ ☤ $
Variables are ... well, "things that vary". So in the language surrounding 
iteration, I'm not saying "variable X occurs before Y". I'm saying X and Y take 
on values *before* an iterate. And they take on values *after* an iterate. Then 
ΔX and ΔY may be non-zero. I.e. x1, x2 ∈ X and y1, y2 ∈ Y and the iteration 
looks like Iter(x1, y1) → (x2, y2).

In this context, X is not a cause of Y. Iter() is a cause of the variation in X 
and Y.
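A minimal sketch of this picture (the update rule is an arbitrary choice for illustration, not anything from the thread): the iterate alone produces the variation in X and Y; neither variable causes the other.

```python
# Hypothetical iterate: Iter(x1, y1) -> (x2, y2).
# The iterate is the cause of the variation in X and Y;
# X is not a cause of Y, and Y is not a cause of X.
def iterate(x, y):
    # arbitrary coupled-free update rule, purely illustrative
    return x + 1.0, y + 2.0

x1, y1 = 0.0, 0.0                # before-values
x2, y2 = iterate(x1, y1)         # after-values

dx, dy = x2 - x1, y2 - y1        # ΔX and ΔY may be non-zero
print(dx, dy)                    # → 1.0 2.0
```

Here ΔX and ΔY are both non-zero, yet the only "cause" in sight is the iterate itself.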

What Frank said was that the variation in X might be *predictive* of the 
variation in Y. So, even if you don't know the values y1 and y2, you can "get a 
feel for" y1 and y2 by looking at x1 and x2.

Re: "latent variables" - Iter() might be defined in terms of 3 variables, X, Y, 
and Z. And we might have access to X and Y, but not Z. (I.e. we know the values 
x1, x2, y1, and y2. But we don't know the values z1 or z2.) It's possible that 
X is predictive of Y whether or NOT X and Y depend on Z. But if they do depend 
on Z, then we might be able to go beyond merely "predictive of" and say 
something about causality ... e.g. we might be able to say something like Z 
causes both X and Y, which would then explain why X and Y correlate.
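A quick simulation of that last case (all names hypothetical): a latent Z drives both X and Y, so the variation in X is strongly predictive of the variation in Y even though X does not cause Y.

```python
import random

random.seed(0)

# Hypothetical latent common cause: Z drives both X and Y.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.1) for zi in z]  # X = Z + small noise
y = [zi + random.gauss(0, 0.1) for zi in z]  # Y = Z + small noise

def corr(a, b):
    # Pearson correlation, computed by hand
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = corr(x, y)  # near 1: X predicts Y via their shared cause Z
```

Knowing x1 and x2 "gets a feel for" y1 and y2 here precisely because both ride on Z; the correlation is explained by the common cause, not by X causing Y.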

I hope that helps.

On 11/30/21 4:10 PM, thompnicks...@gmail.com wrote:
> My problem is, of course, that if variable X occurs before Y and is 
> predictive of it, then it is a cause, by definition.  I am groping for an 
> understanding of a “latent” variable.  I promise I am not arguing, here. 

-- 
"Better to be slapped with the truth than kissed with a lie."
☤>$ uǝlƃ


.-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn UTC-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:
 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/