I assume what you mean is that one variable's *variation* is predictable from another variable's *variation*. That's subtly different from more substantial relations like direct or indirect proportionality, scaling, etc. To say that, e.g., variable V1 is always large when variable V2 is large is different from saying ν(V1) ≈ ν(V2). And ν() is just one of many arbitrary derivations we might choose.
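To pin that down, here's a toy sketch (my own illustration; the numbers are made up, z, v1, v2 are placeholders, and I'm reading ν() as plain variance). A hidden driver generates both variables, with a lag, so v1's variation predicts v2's quite well even though ν(v1) and ν(v2) are nowhere near equal:

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# z is a latent generator that neither observable measures directly
z = np.cumsum(rng.normal(size=n + 5))

v1 = z[5:] + rng.normal(size=n)        # reflects z "now"
v2 = 3.0 * z[:n] + rng.normal(size=n)  # reflects z five steps earlier, rescaled

# fraction of v2's variation predictable from v1's (one-variable R^2)
r = np.corrcoef(v1, v2)[0, 1]
print("R^2:", r ** 2)                  # high
print("var:", np.var(v1), np.var(v2))  # roughly 9x apart

Swap plain variance out for some other derivation ν() and you can get a different verdict, which is the point.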
But, given our discussion of iteration (and the requirement of a delay between them), there's no need to assume that one variable is the *cause* of another variable, only that it is [cor]related to (perhaps predictive of) that other variable. This allows for a common (latent) generator (like z in the sketch above) that isn't adequately represented in *any* of the variables ... which risks triggering your anti-inside rhetoric, I know.

On 11/30/21 12:04 PM, thompnicks...@gmail.com wrote:
> Ok. So one way we could say that a variable was a primary cause of another is to say that it accounts for a substantial proportion of that variable's variance, eh? We could agree that this is, for our purposes, the meaning of the word "primary."
>
> There are, of course, an infinite number of ways in which a causal variable can become salient, right?
>
> n
>
> Nick Thompson
> thompnicks...@gmail.com <mailto:thompnicks...@gmail.com>
> https://wordpress.clarku.edu/nthompson/ <https://wordpress.clarku.edu/nthompson/>
>
> *From:* Friam <friam-boun...@redfish.com> *On Behalf Of* Frank Wimberly
> *Sent:* Monday, November 29, 2021 2:33 PM
> *To:* The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
> *Subject:* Re: [FRIAM] The epiphenomenality relation
>
> R squared tells you the percentage of the variance of a variable predictable by another.
>
> ---
> Frank C. Wimberly
> 140 Calle Ojo Feliz,
> Santa Fe, NM 87505
>
> 505 670-9918
> Santa Fe, NM
>
> On Mon, Nov 29, 2021, 12:23 PM <thompnicks...@gmail.com <mailto:thompnicks...@gmail.com>> wrote:
>
> I agree. I use the distinction (artificial vs natural) as a rhetorical crutch. What we *should* do, what I've asked Nick to do, is talk about how we *measure* outcomes, how they *scale*. If we run something like a principal component analysis on all the outcomes and let the data tell us which parts are primary and which parts secondary, then we don't need the artificial vs natural distinction (or the epi- vs phenomena distinction) at all. This outcome's salience is 0.00001, that outcome's salience is 10000.0.
>
> This is the kind of work that Frank has done. We will hear from him momentarily, I assume. As I understand it, such work can rank the efficacy of a cause for each of its effects. But it does not tell you to care only about the most effected effects. That is something you are doing. That's your frame. My frame, as a development/evolutionist blah blah, tells me to privilege effects that feed back on causes, because these are the only kinds of effects that in time can shape the development of a biological or technological artifact. So loopy effects are "primary" to me. Perhaps I should use your word "salient", in this case. Yes, I think that would be better.
>
> -----Original Message-----
> From: Friam <friam-boun...@redfish.com <mailto:friam-boun...@redfish.com>> On Behalf Of uǝlƃ ☤>$
> Sent: Monday, November 29, 2021 11:19 AM
> To: friam@redfish.com <mailto:friam@redfish.com>
> Subject: Re: [FRIAM] The epiphenomenality relation
>
> I agree. I use the distinction (artificial vs natural) as a rhetorical crutch. What we *should* do, what I've asked Nick to do, is talk about how we *measure* outcomes, how they *scale*. If we run something like a principal component analysis on all the outcomes and let the data tell us which parts are primary and which parts secondary, then we don't need the artificial vs natural distinction (or the epi- vs phenomena distinction) at all.
> This outcome's salience is 0.00001, that outcome's salience is 10000.0.
>
> Of course, if you change the measure, you get a different distribution. But if we don't talk, at all, about the measure(s) being used for the classification, then we're just talking nonsense.
>
> I don't like the following words. But the distinction between [un]supervised learning is similar. Except there, I tend to argue that there is no such thing as unsupervised learning. The very choice of any family of models biases the eventual model you select.
>
> On 11/29/21 9:10 AM, Marcus Daniels wrote:
>
> > I'm not clear on where/why one draws the line between artificial and natural. Artificial things have resulted from natural processes. These higher-order and relatively sharp fitness landscapes have mesas we call features. They usually don't involve people dying or failing to reproduce, but they do involve organized behavior by humans stopping, e.g. companies that go bankrupt. A continuous integration system running regression tests seems to have some properties of selection.
> >
> > > -----Original Message-----
> > > From: Friam <friam-boun...@redfish.com <mailto:friam-boun...@redfish.com>> On Behalf Of ⛧ glen
> > > Sent: Monday, November 29, 2021 6:14 AM
> > > To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com <mailto:friam@redfish.com>>
> > > Subject: Re: [FRIAM] The epiphenomenality relation
> > >
> > > Right. Agnostic discovery of the artifacts resulting from an artificial machine comes much closer to what happens in natural systems, yes. Those artifacts would only be considered secondary or side-effects IF the exploration were NOT agnostic, motivated. You can only separate the artifacts into primary vs secondary IF you had a purpose in the assembly. No purpose, no distinction of primary vs secondary.
> > >
> > > But what you can do is measure the impact of all the resulting artifacts, on some scale, and order them that way, a distribution of primacy. Outcome O1 might be Y times more impactful, downstream than outcome O2. If THAT were what we meant by "secondary" effect, then it would be less laden with intention.
> > >
> > > But that's not what Nick seems to be doing. By insisting that some effects are, by definition, secondary and others primary, he's asserting an intention/purpose to the assembly.
> > >
> > > On November 28, 2021 9:40:42 PM PST, Marcus Daniels <mar...@snoutfarm.com <mailto:mar...@snoutfarm.com>> wrote:
> > >> An ab initio simulation of a biochemical system would have a foundation of some human-engineered code and the atomic model simulated might have some simplifying assumptions. The low energy configurations and dynamics are discovered, not engineered. Yet it is all reproducible on a digital computer with precise causality and in some cases has shown fidelity with physical experiments.
> > >>
> > >>> On Nov 28, 2021, at 9:14 PM, ⛧ glen <geprope...@gmail.com <mailto:geprope...@gmail.com>> wrote:
> > >>>
> > >>> This sounds like impredicativity, which can be a problem in parallel computation (resulting in deadlock or race). Unimplemented math has no problem with it, though. And I'm guessing that some of the higher order proof assistants find ways around it. A definitional loop seems distinct from iteration. So, no; I don't see a problem with iteration in digital computation.
> > >>> I simply don't think the intelligent design we do when programming is analogous to biological evolution. The former clearly has side effects (epiphenomena). I argue the latter does not.
> > >>>
> > >>>> On November 28, 2021 5:40:31 PM PST, Marcus Daniels <mar...@snoutfarm.com <mailto:mar...@snoutfarm.com>> wrote:
> > >>>> Glen had said something a while ago implying that (that trivial meaning for) loops were somehow more challenging for digital computers. I didn't get it.

--
"Better to be slapped with the truth than kissed with a lie." ☤>$ uǝlƃ
.-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .