I have had a version of this problem for several years, because I want to start 
with small-molecule chemistry on early planets, and eventually talk about 
biospheres full of evolving actors.  I have wanted a rough category system for 
how many qualitatively different kinds of transitions I should need to account 
for, and to explain within ordinary materials by the action of random 
processes.  Not being a(n analytical) philosopher, I have no ambition to 
shoehorn the universe into a system, or to suppose that my categories subsume 
all the questions even I might someday care about, or that they are sure to 
have unambiguous boundaries.  I just want a kind of sketch that seems like it 
will carry some weight.  For now.

Autonomy: One early division, to me, would be between matter that responds 
“passively” to its environment moment by moment, so that its internal state is 
effectively a given function of the surroundings at the time, and matter that 
protects some internal variables from the constant outside harassment and 
supplies a source of autonomous dynamics for those
internal variables.  One could bring in words like “energy”, but I would rather 
not for a variety of reasons.  Often, though, when others do, I will understand 
why and be willing to go along with the choice.
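
If it helps to make that division concrete, here is a toy sketch in Python.  
Everything in it (the class names, the linear update rules, the numbers) is my 
own invention for illustration, not a model of any real material: a “passive” 
lump whose state just tracks the surroundings, next to one that shields an 
internal variable and gives it dynamics of its own.

# Toy contrast between "passive" matter and matter with a protected,
# autonomous internal variable.  Purely illustrative; the update rules
# and numbers are arbitrary inventions.

class PassiveMatter:
    """Internal state is an effectively given function of the surroundings."""
    def __init__(self):
        self.state = 0.0

    def step(self, environment):
        # Whatever the environment is at this moment, the state just tracks it.
        self.state = environment


class AutonomousMatter:
    """Shields an internal variable and gives it dynamics of its own."""
    def __init__(self, coupling=0.05, drift=0.3):
        self.state = 0.0
        self.coupling = coupling   # weak leakage in from the environment
        self.drift = drift         # source of autonomous internal dynamics

    def step(self, environment):
        # Mostly driven by its own previous state; only weakly harassed
        # by the outside.
        internal = self.state + self.drift
        self.state = (1 - self.coupling) * internal + self.coupling * environment


passive, autonomous = PassiveMatter(), AutonomousMatter()
for env in (1.0, -2.0, 0.5, 0.5):
    passive.step(env)
    autonomous.step(env)
    print(env, passive.state, round(autonomous.state, 3))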

Control: The category of things with autonomous internal degrees of freedom 
that have some immunity from the slings and arrows of the immediate 
surroundings is extremely broad.  Within it there could be very many different 
kinds of organizations that, for lack of a better word, we might call 
“architectures”.  One family of architectures that I recognize is that of 
control systems.  Major components include whatever is controlled (what in 
chemical engineering used to be called “the plant”), a “model” in the sense of 
Conant and Ashby, “sensors” to respond to the plant and signal the model, and 
“effectors” to take an output from the model and somehow influence the plant.  
One could ask when 
the organization of some material system is well described by this control-loop 
architecture.  I think the control-loop architecture entails some degree of 
autonomy, else the whole system is adequately described by passive response to 
the environment.  But probably a sophist could find counterexamples.

One could ask whether having the control-loop architecture counts as having 
agency.  By discriminating among states of the world according to their 
relation to states indexed in the model, and then acting on the world (even by 
so little as acting on one’s own position in the world), one could be said to 
express some sort of “goal”, and in that sense to have “had” such a goal.  

Is that enough for agency?  Maybe.  Or maybe not.

Reflection: At the previous level, the controller’s model could be anything, so 
the category is again very broad.  Presumably a subset of control systems have 
models that incorporate some notion of a “self”, so they can model not only the 
conditions of the world, but also the condition of the self and of the self 
relative to the world; all of these variables then become eligible targets for 
control actions.  
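
A toy continuation of the sketch above, where the model indexes a condition of 
the self as well as a condition of the world.  The “battery” variable is an 
invented stand-in, not anyone’s formalism.

# Toy controller whose model indexes a condition of the self (an invented
# "battery" level) alongside a condition of the world, so that both become
# eligible targets for control actions.

class ReflectiveController:
    def __init__(self):
        self.model = {
            "world_temp": None,       # condition of the world
            "battery": 1.0,           # condition of the self
            "temp_setpoint": 20.0,    # target for the world-variable
            "battery_floor": 0.2,     # floor for the self-variable
        }

    def sense(self, world_temp):
        self.model["world_temp"] = world_temp

    def command(self):
        m = self.model
        # The self-variable can pre-empt action on the world-variable.
        if m["battery"] < m["battery_floor"]:
            return "recharge"         # control action aimed at the self
        if m["world_temp"] < m["temp_setpoint"]:
            m["battery"] -= 0.1       # acting on the world costs the self
            return "heat"             # control action aimed at the world
        return "idle"

controller = ReflectiveController()
controller.sense(17.0)
for _ in range(12):
    print(controller.command())       # "heat" until the battery runs low,
                                      # then "recharge"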

Counterfactuals and simulation: Autonomy need not be limited to receiving 
signals and responding to them with control commands.  It could include 
assigning values to counterfactual states within the controller’s model, 
playing out representations of the consequences of candidate control signals 
(another level of reflection, this time on the dynamics of the command loop 
itself), and then choosing according to a meta-criterion.  Here I have in mind 
something like the simulation that goes on in the tactical look-ahead of 
combinatorial games.  We now have a couple of levels of representation between 
wherever the criteria are hard-coded and wherever the control signal (the 
“choice”) acts.  They are all still control loops, but control loops probably 
differ enough in major categories of design that there is a place for names 
for such intermediate layers of abstraction, to distinguish the kinds that 
have them from those that don’t.
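
Reduced to a toy, the kind of look-ahead I mean might read as follows.  The 
“game” (a trivial counting game), the scoring, and the depth are invented for 
illustration; the point is only that candidate commands are played out against 
the controller’s own model before one is chosen.

# Toy look-ahead: candidate commands are played out as counterfactuals on
# the controller's own model before one is chosen by a meta-criterion
# (here, the best worst-case score after an adversary's reply).

def simulate(state, move):
    """Counterfactual: what the model says the state would become."""
    return state + move

def score(state, target=10):
    """Hard-coded criterion: closeness to a target value."""
    return -abs(target - state)

def choose(state, moves=(1, 2, 3), depth=2):
    """Look ahead `depth` plies, assuming the adversary minimizes our score."""
    def value(s, remaining, maximizing):
        if remaining == 0:
            return score(s)
        results = [value(simulate(s, m), remaining - 1, not maximizing)
                   for m in moves]
        return max(results) if maximizing else min(results)

    # Meta-criterion: pick the command whose simulated future looks best.
    return max(moves, key=lambda m: value(simulate(state, m), depth - 1, False))

print(choose(5))   # the move whose played-out consequences score best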

How much internal reflective representation does one want to require to satisfy 
one or another concept of agency?  None of these layers in particular?  Some 
particular subset of them?

For different purposes I can see arguing for different answers, and I am not 
sure how many categories it will be broadly useful to recognize.

Eric


> On Jul 15, 2023, at 8:28 AM, Russ Abbott <russ.abb...@gmail.com> wrote:
> 
> I'm not sure what "closure to efficient cause" means. I considered using as 
> an example an outdoor light that charges itself (and stays off) during the 
> day and goes on at night. In what important way is that different from a 
> flashlight? They both have energy storage systems (batteries). Does it really 
> matter that the garden light "recharges itself" rather than relying on a more 
> direct outside force to change its batteries? And they both have on-off 
> switches. The flashlight's is more conventional whereas the garden light's is 
> a light sensor. Does that really matter? They are both tripped by outside 
> forces.
> 
> BTW, congratulations on your phrase epistemological trespassing! 
> 
> -- Russ
> 
> On Fri, Jul 14, 2023 at 1:47 PM glen <geprope...@gmail.com> wrote:
>> I'm still attracted to Rosen's closure to efficient cause. Your flashlight 
>> example is classified as non-agent (or non-living ... tomayto tomahto) 
>> because the efficient cause is open. Now, attach sensor and effector to the 
>> flashlight so that it can flick it*self* on when it gets dark and off when 
>> it gets bright, then that (partially) closes it. Maybe we merely kicked the 
>> can down the road a bit. But then we can talk about decoupling and 
>> hierarchies of scale. From the armchair, there is no such thing as a (pure) 
>> agent just like there is no such thing as free will. But for practical 
>> purposes, you can draw the boundary somewhere and call it a day.
>> 
>> On 7/14/23 12:01, Russ Abbott wrote:
>> > I was recently wondering about the informal distinction we make between 
>> > things that are agents and things that aren't.
>> > 
>> > For example, I would consider most living things to be agents. I would 
>> > also consider many computer programs when in operation as agents. The most 
>> > obvious examples (for me) are programs that play games like chess.
>> > 
>> > I would not consider a rock an agent -- mainly because it doesn't do 
>> > anything, especially on its own. But a boulder crashing down a hill and 
>> > destroying something at the bottom is reasonably called "an agent of 
>> > destruction." Perhaps this is just playing with words: "agent" can have 
>> > multiple meanings.  A writer's agent represents the writer in negotiations 
>> > with publishers. Perhaps that's just another meaning.
>> > 
>> > My tentative definition is that an agent must have access to energy, and 
>> > it must use that energy to interact with the world. It must also have some 
>> > internal logic that determines how it interacts with the world. This final 
>> > condition rules out boulders rolling down a hill.
>> > 
>> > But I doubt that I would call a flashlight (with an on-off switch) an 
>> > agent even though it satisfies my definition.  Does this suggest that an 
>> > agent must manifest a certain minimal level of complexity in its 
>> > interactions? If so, I don't have a suggestion about what that minimal 
>> > level of complexity might be.
>> > 
>> > I'm writing all this because in my search for a characterization of agents 
>> > I looked at the article on Agency 
>> > <https://plato.stanford.edu/archives/win2019/entries/agency/> in the 
>> > /Stanford Encyclopedia of Philosophy./ I found that article almost a 
>> > parody of the "armchair philosopher." Here are the first few sentences 
>> > from the article overview.
>> > 
>> >     In very general terms, an agent is a being with the capacity to act, 
>> > and ‘agency’ denotes the exercise or manifestation of this capacity. The 
>> > philosophy of action provides us with a standard conception and a standard 
>> > theory of action. The former construes action in terms of intentionality, 
>> > the latter explains the intentionality of action in terms of causation by 
>> > the agent’s mental states and events.
>> > 
>> > That seems to me to raise more questions than it answers. At the same 
>> > time, it seems to limit the notion of /agent/ to things that can have 
>> > intentions and mental models.  (To be fair, the article does consider the 
>> > possibility that there can be agents without these properties. But those 
>> > discussions seem relatively tangential.)
>> > 
>> > Apologies for going on so long. Thanks, Frank, for opening this can of 
>> > worms. And thanks to the others who replied so far.
>> > 
>> > -- Russ Abbott
>> > Professor Emeritus, Computer Science
>> > California State University, Los Angeles
>> > 
>> > 
>> > 
>> > On Fri, Jul 14, 2023 at 8:33 AM Frank Wimberly <wimber...@gmail.com> wrote:
>> > 
>> >     Joe Ramsey, who took over my job in the Philosophy Department at 
>> > Carnegie Mellon, posted the following on Facebook:
>> > 
>> >     I like Neil DeGrasse Tyson a lot, but I saw him give a spirited 
>> > defense of science in which he oddly gave no credit to philosophers at 
>> > all. His straw man philosopher is a dedicated *armchair* philosopher who 
>> > spins theories without paying attention to scientific practice and 
>> > contributes nothing to scientific understanding. He misses that scientists 
>> > themselves are constantly raising obviously philosophical questions and 
>> > are often ill-equipped to think about them clearly. What is the correct 
>> > interpretation of quantum mechanics? What is the right way to think about 
>> > reductionism? Is reductionism the right way to think about science? What 
>> > is the nature of consciousness? Can you explain consciousness in terms of 
>> > neuroscience? Are biological kinds real? What does it even mean to be 
>> > real? Or is realism a red herring; should we be pragmatists instead? 
>> > Scientists raise all kinds of philosophical questions and have 
>> > ill-informed opinions about them. But *philosophers* try to answer
>> >     them, and scientists do pay attention to the controversies. At least 
>> > the smart ones do.
>> > 
>> 
>> -- 
>> ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
>> 

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
