Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Marcus Daniels
Definitions are all well and good, but realizable behavior is what matters. 
Analog computers will have imperfect behavior, and there will be leakage 
between components. A large network of transistors or neurons is 
sufficiently similar for my purposes. The unrolling would be inside a skull, 
so somewhat isolated from interference.

-Original Message-
From: Friam  On Behalf Of glen
Sent: Tuesday, January 17, 2023 2:11 PM
To: friam@redfish.com
Subject: Re: [FRIAM] NickC channels DaveW

I don't quite grok that. A crisp definition of recursion implies no interaction 
with the outside world, right? If you can tolerate the ambiguity in that 
statement, the artifacts lying about from an unrolled recursion might be seen 
and used by outsiders. That's not to say a trespasser can't have some 
sophisticated intrusion technique. But unrolled seems more "open" to family, 
friends, and the occasional acquaintance.

On 1/17/23 13:37, Marcus Daniels wrote:
> I probably didn't pay enough attention to the thread some time ago on 
> serialization, but to me recursion is hard to distinguish from an unrolling 
> of recursion.

-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread glen

I don't quite grok that. A crisp definition of recursion implies no interaction with the 
outside world, right? If you can tolerate the ambiguity in that statement, the artifacts 
lying about from an unrolled recursion might be seen and used by outsiders. That's not 
to say a trespasser can't have some sophisticated intrusion technique. But unrolled seems 
more "open" to family, friends, and the occasional acquaintance.

On 1/17/23 13:37, Marcus Daniels wrote:

I probably didn't pay enough attention to the thread some time ago on 
serialization, but to me recursion is hard to distinguish from an unrolling of 
recursion.




Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Steve Smith
I suppose pouring all of the FriAM traffic (even my own bloviations) into 
a chatbot might be a bit usurious (the fool's errand of a 
fool errant)?


On 1/17/23 2:37 PM, glen wrote:
You might try using the OpenAI API directly. It takes some work, but 
not much.


https://openai.com/api/

Or you could sign up for this:

https://azure.microsoft.com/en-us/blog/general-availability-of-azure-openai-service-expands-access-to-large-advanced-ai-models-with-added-enterprise-benefits/ 



I would hook you up to my Slack bot that queries GPT3 for every 
channel message. But that might get expensive with a verbose person 
like you! 8^D I can imagine some veerrryyy long prompts.



On 1/17/23 12:57, Steve Smith wrote:


On 1/17/23 1:08 PM, Marcus Daniels wrote:
Dogs have about 500 million neurons in their cortex.  Neurons have 
about 7,000 synaptic connections, so I think my dog is a lot smarter 
than a billion parameter LLM.  :-)
And I bet (s)he channels *at least* one FriAM member's affect pretty 
well also!


My 9 month old golden-doodle does as good a job at that (I won't 
name names) as my (now deceased) 11 year old Akita and my 9 year old 
chocolate dobie mix both did, but nobody here really demonstrates the 
basic nature of either my 9 month old tabby or her 20 year old 
black-mouser predecessor. There is very little overlap.


The jays and the woodpeckers and the finches and towhees and sparrows 
and nuthatches and robins and the mating pair of doves and the 
several ravens and the (courting?) pair of owls (that I only hear 
hooting to one another in the night) and the lone (that I see) hawk 
and the lone blue heron (very more occasionally) and the flock(lets) 
of geese migrating down the rio-grande flyway... their aggregate 
neural complexity is only multiplicative (order 100-1000x) that of 
any given beast... but somehow their interactions (this is without 
the half-dozen species of rodentia and leporidae and raccoons and 
insects and worms and ) would seem to have a more combinatorial 
network of relations?


I tried signing up to try chatGPT for myself (thanks to Glen's Nick 
Cave blog-link) and was denied because "too busy, try back later" and 
realized that it had become a locus for (first world) humans to 
express and combine their greatest hopes and worst fears in a single 
place.


This seems like a higher-order training set?  Not just the 
intersection of all things "worth saying" but somehow 
filtered/diffracted through "the things (some) people are interested 
in in particular"...







Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Marcus Daniels
I probably didn't pay enough attention to the thread some time ago on 
serialization, but to me recursion is hard to distinguish from an unrolling of 
recursion.
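Marcus's point can be sketched with a toy example (a hypothetical illustration, not from the thread): a recursive function and its unrolled iterative form produce identical outputs, so nothing observable from the outside distinguishes them.

```python
# A recursive definition and its "unrolled" iterative form: from the
# outside, the two computations are behaviorally indistinguishable.

def fact_recursive(n: int) -> int:
    """Factorial via self-reference."""
    return 1 if n <= 1 else n * fact_recursive(n - 1)

def fact_unrolled(n: int) -> int:
    """The same computation with the recursion unrolled into a loop."""
    acc = 1
    for k in range(2, n + 1):
        acc *= k
    return acc

# Every observable output agrees, so no external test separates them.
assert all(fact_recursive(n) == fact_unrolled(n) for n in range(12))
```

The difference glen raises is about the intermediate artifacts: the loop leaves its accumulator lying in mutable state where an outsider could, in principle, observe or perturb it.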

From: Friam  on behalf of glen 
Sent: Tuesday, January 17, 2023 2:21 PM
To: friam@redfish.com 
Subject: Re: [FRIAM] NickC channels DaveW

Being a too-literal person, who never gets the joke, I have to say that these 
simple scalings, combinatorial or not, don't capture the interconnectionist 
point being made in the pain article. The absolute numbers of elements 
(neurons, synapses, signaling molecules, etc.) flatten it all out. But 
_ganglion_, that's a different thing. What we're looking for are loops and 
"integratory" structures. I think that's where we can start to find a scaling 
for smartness.

In that context, my guess is the heart is closer to ChatGPT in its smartness 
than either of those are to the human gut. But structure-based assessments like 
these merely complement behavior-based assessments. We could quantify the 
number of *jobs* done by the thing. The heart has fewer jobs to do than the 
gut. And the gut has fewer jobs to do than the dog. Etc. Of course, the lines 
between jobs aren't all that crisp, especially as the complexity of the thing 
grows. Behaviors in complex things are composable and polymorphic. In spite of 
our imagining what ChatGPT is doing, it's really only doing 1 thing: choosing 
the most likely next token given the previous tokens. You *might* be able to 
serialize your dog and suggest she's really just choosing the most likely next 
behavior given the previous behaviors. But my guess is dog owners perceive (or 
impute) that dogs resolve contradictions that arise in parallel. (chase the 
ball? chew the bone? continue chewing the bone until you get to the ball?) 
Contradiction resolution is evidence of more than 1 task. You could gussy up 
the model by providing a single interface to an ensemble of models. Then it 
might look more like a dog, depending on the algorithm(s) used to resolve 
contradictions between models. But to get closer to dog-complexity, you'd have 
to wire the models together so that they could contradict each other but still 
feed off each other in some way. A model that changes its mind midway through 
its response would be good. I haven't had a dog in a long time. But I seem to 
remember they were easy to redirect, despite the old saying "like a dog with a 
bone".

On 1/17/23 12:51, Prof David West wrote:
> Apropos of nothing:
>
> The human heart has roughly 40,000 neurons and the human gut around 0.1 
> billion neurons (sensory neurons, neurotransmitters, ganglia, and motor 
> neurons).
>
> So the human gut is about 1/5 as smart as Marcus's dog??
>
> davew
>
>
> On Tue, Jan 17, 2023, at 1:08 PM, Marcus Daniels wrote:
>> Dogs have about 500 million neurons in their cortex.  Neurons have
>> about 7,000 synaptic connections, so I think my dog is a lot smarter
>> than a billion parameter LLM.  :-)
>>
>> Sent from my iPhone
>>
>>> On Jan 17, 2023, at 11:35 AM, glen  wrote:
>>>
>>> 
>>> 1) "I asked Chat GPT to write a song in the style of Nick Cave and this is 
>>> what it produced. What do you think?"
>>> https://www.theredhandfiles.com/chat-gpt-what-do-you-think/
>>>
>>> 2) "Is it pain if it does not hurt? On the unlikelihood of insect pain"
>>> https://www.cambridge.org/core/journals/canadian-entomologist/article/is-it-pain-if-it-does-not-hurt-on-the-unlikelihood-of-insect-pain/9A60617352A45B15E25307F85FF2E8F2#
>>>
>>> Taken separately, (1) and (2) are each interesting, if seemingly 
>>> orthogonal. But what twines them, I think, is the concept of "mutual 
>>> information". I read (2) before I read (1) because, for some bizarre 
>>> reason, my day job involves trying to understand pain mechanisms. And (2) 
>>> speaks directly (if only implicitly) to things like IIT. If you read (1) 
>>> first, it's difficult to avoid snapping quickly into NickC's canal. Despite 
>>> NickT's objection to an inner life, it seems clear that the nuance we see 
>>> on the surface, at least longitudinally, *needs* an inner life. You simply 
>>> can't get good stuff out of an entirely flat/transparent/reactive/Markovian 
>>> object.
>>>
>>> However, what NickC misses is that LLMs *have* some intertwined mutual 
>>> information within them. Similar to asking whether an insect experiences 
>>> pain, we can ask whether a X billion parameter LLM experiences something 
>>> like "suffering". My guess is the answer is "yes". It may not be a good 
>>> analog to what we call "suffering", though ... maybe "friction"? ... maybe 
>>> "release"? My sense is that when you engage a LLM (embedded in a larger 
>>> construct that handles the prompts and live learning, of course) in such a 
>>> way that it assembles a response that nobody else has evoked, it might get 
>>> something akin to a tingle ... or like the relief you feel when scratching 
>>> an itch ... of course it would be primordial 

Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread glen

You might try using the OpenAI API directly. It takes some work, but not much.

https://openai.com/api/

Or you could sign up for this:

https://azure.microsoft.com/en-us/blog/general-availability-of-azure-openai-service-expands-access-to-large-advanced-ai-models-with-added-enterprise-benefits/

I would hook you up to my Slack bot that queries GPT3 for every channel 
message. But that might get expensive with a verbose person like you! 8^D I can 
imagine some veerrryyy long prompts.
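For the record, "directly" can mean a plain HTTPS request rather than a client library. A minimal sketch (the model name, prompt, and `max_tokens` value are illustrative assumptions; the endpoint and payload shape follow the API docs of the time):

```python
# Sketch of calling the OpenAI completions endpoint with only the
# standard library. Builds the request; only sends if a key is set.
import json
import os
import urllib.request

def build_request(prompt: str, model: str = "text-davinci-003"):
    """Construct (but do not send) a completions request."""
    payload = {"model": model, "prompt": prompt, "max_tokens": 64}
    return urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
    )

req = build_request("Write a verse in the style of Nick Cave.")
if os.environ.get("OPENAI_API_KEY"):  # only hit the network with a real key
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["text"])
```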


On 1/17/23 12:57, Steve Smith wrote:


On 1/17/23 1:08 PM, Marcus Daniels wrote:

Dogs have about 500 million neurons in their cortex.  Neurons have about 7,000 
synaptic connections, so I think my dog is a lot smarter than a billion 
parameter LLM.  :-)

And I bet (s)he channels *at least* one FriAM member's affect pretty well also!

My 9 month old golden-doodle does as good a job at that (I won't name names) 
as my (now deceased) 11 year old Akita and my 9 year old chocolate dobie mix both 
did, but nobody here really demonstrates the basic nature of either my 9 month 
old tabby or her 20 year old black-mouser predecessor. There is very little 
overlap.

The jays and the woodpeckers and the finches and towhees and sparrows and 
nuthatches and robins and the mating pair of doves and the several ravens and 
the (courting?) pair of owls (that I only hear hooting to one another in the 
night) and the lone (that I see) hawk and the lone blue heron (very more 
occasionally) and the flock(lets) of geese migrating down the rio-grande 
flyway... their aggregate neural complexity is only multiplicative (order 
100-1000x) that of any given beast... but somehow their interactions (this is 
without the half-dozen species of rodentia and leporidae and raccoons and 
insects and worms and ) would seem to have a more combinatorial network of 
relations?

I tried signing up to try chatGPT for myself (thanks to Glen's Nick Cave blog-link) and 
was denied because "too busy, try back later" and realized that it had become a 
locus for (first world) humans to express and combine their greatest hopes and worst 
fears in a single place.

This seems like a higher-order training set?  Not just the intersection of all things "worth 
saying" but somehow filtered/diffracted through "the things (some) people are interested 
in in particular"...





Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread glen

Being a too-literal person, who never gets the joke, I have to say that these simple 
scalings, combinatorial or not, don't capture the interconnectionist point being made in 
the pain article. The absolute numbers of elements (neurons, synapses, signaling 
molecules, etc.) flatten it all out. But _ganglion_, that's a different thing. What we're 
looking for are loops and "integratory" structures. I think that's where we can 
start to find a scaling for smartness.

In that context, my guess is the heart is closer to ChatGPT in its smartness than either 
of those are to the human gut. But structure-based assessments like these merely 
complement behavior-based assessments. We could quantify the number of *jobs* done by the 
thing. The heart has fewer jobs to do than the gut. And the gut has fewer jobs to do than 
the dog. Etc. Of course, the lines between jobs aren't all that crisp, especially as the 
complexity of the thing grows. Behaviors in complex things are composable and 
polymorphic. In spite of our imagining what ChatGPT is doing, it's really only doing 1 
thing: choosing the most likely next token given the previous tokens. You *might* be able 
to serialize your dog and suggest she's really just choosing the most likely next 
behavior given the previous behaviors. But my guess is dog owners perceive (or impute) 
that dogs resolve contradictions that arise in parallel. (chase the ball? chew the bone? 
continue chewing the bone until you get to the ball?) Contradiction resolution is 
evidence of more than 1 task. You could gussy up the model by providing a single 
interface to an ensemble of models. Then it might look more like a dog, depending on the 
algorithm(s) used to resolve contradictions between models. But to get closer to 
dog-complexity, you'd have to wire the models together so that they could contradict each 
other but still feed off each other in some way. A model that changes its mind midway 
through its response would be good. I haven't had a dog in a long time. But I seem to 
remember they were easy to redirect, despite the old saying "like a dog with a 
bone".
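The "only doing 1 thing" claim can be sketched concretely (a toy, hypothetical example with a made-up bigram table, not a real LLM): generation is just repeatedly picking the most likely next token given what came before.

```python
# Toy sketch of "doing 1 thing": greedily choosing the most likely next
# token given the previous token, from a made-up bigram table.
BIGRAMS = {
    "the": {"dog": 0.6, "ball": 0.3, "bone": 0.1},
    "dog": {"chases": 0.7, "chews": 0.3},
    "chases": {"the": 0.9, "a": 0.1},
    "chews": {"the": 0.8, "a": 0.2},
}

def next_token(prev: str) -> str:
    """Pick the single most likely continuation of `prev`."""
    candidates = BIGRAMS.get(prev, {})
    return max(candidates, key=candidates.get) if candidates else ""

def generate(start: str, steps: int) -> list[str]:
    """Roll the one step forward repeatedly: that's the whole job."""
    out = [start]
    for _ in range(steps):
        tok = next_token(out[-1])
        if not tok:
            break
        out.append(tok)
    return out

print(generate("the", 4))  # ['the', 'dog', 'chases', 'the', 'dog']
```

A dog-like system, on this view, would need several such tables running in parallel with some arbiter resolving their contradictory suggestions.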

On 1/17/23 12:51, Prof David West wrote:

Apropos of nothing:

The human heart has roughly 40,000 neurons and the human gut around 0.1 billion 
neurons (sensory neurons, neurotransmitters, ganglia, and motor neurons).

So the human gut is about 1/5 as smart as Marcus's dog??

davew


On Tue, Jan 17, 2023, at 1:08 PM, Marcus Daniels wrote:

Dogs have about 500 million neurons in their cortex.  Neurons have
about 7,000 synaptic connections, so I think my dog is a lot smarter
than a billion parameter LLM.  :-)

Sent from my iPhone


On Jan 17, 2023, at 11:35 AM, glen  wrote:


1) "I asked Chat GPT to write a song in the style of Nick Cave and this is what it 
produced. What do you think?"
https://www.theredhandfiles.com/chat-gpt-what-do-you-think/

2) "Is it pain if it does not hurt? On the unlikelihood of insect pain"
https://www.cambridge.org/core/journals/canadian-entomologist/article/is-it-pain-if-it-does-not-hurt-on-the-unlikelihood-of-insect-pain/9A60617352A45B15E25307F85FF2E8F2#

Taken separately, (1) and (2) are each interesting, if seemingly orthogonal. But what 
twines them, I think, is the concept of "mutual information". I read (2) before 
I read (1) because, for some bizarre reason, my day job involves trying to understand 
pain mechanisms. And (2) speaks directly (if only implicitly) to things like IIT. If you 
read (1) first, it's difficult to avoid snapping quickly into NickC's canal. Despite 
NickT's objection to an inner life, it seems clear that the nuance we see on the surface, 
at least longitudinally, *needs* an inner life. You simply can't get good stuff out of an 
entirely flat/transparent/reactive/Markovian object.

However, what NickC misses is that LLMs *have* some intertwined mutual information within them. Similar to asking whether an 
insect experiences pain, we can ask whether a X billion parameter LLM experiences something like "suffering". My guess 
is the answer is "yes". It may not be a good analog to what we call "suffering", though ... maybe 
"friction"? ... maybe "release"? My sense is that when you engage a LLM (embedded in a larger construct that 
handles the prompts and live learning, of course) in such a way that it assembles a response that nobody else has evoked, it 
might get something akin to a tingle ... or like the relief you feel when scratching an itch ... of course it would be primordial 
because the self-attention in such a system is hopelessly disabled compared to the rich self-attention loops we have in our meaty 
bodies. But it just *might* be there in some primitive sense.

As always, agnosticism is the only rational stance. And I won't trust the songs 
written by LLMs until I see a few of them commit suicide, overdose, or punch a 
TMZ cameraman in the face.



Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Steve Smith


On 1/17/23 1:08 PM, Marcus Daniels wrote:

Dogs have about 500 million neurons in their cortex.  Neurons have about 7,000 
synaptic connections, so I think my dog is a lot smarter than a billion 
parameter LLM.  :-)
And I bet (s)he channels *at least* one FriAM member's affect pretty 
well also!


My 9 month old golden-doodle does as good a job at that (I won't name 
names) as my (now deceased) 11 year old Akita and my 9 year old chocolate 
dobie mix both did, but nobody here really demonstrates the basic nature 
of either my 9 month old tabby or her 20 year old black-mouser 
predecessor. There is very little overlap.


The jays and the woodpeckers and the finches and towhees and sparrows 
and nuthatches and robins and the mating pair of doves and the several 
ravens and the (courting?) pair of owls (that I only hear hooting to one 
another in the night) and the lone (that I see) hawk and the lone blue 
heron (very more occasionally) and the flock(lets) of geese migrating 
down the rio-grande flyway... their aggregate neural complexity is only 
multiplicative (order 100-1000x) that of any given beast... but somehow 
their interactions (this is without the half-dozen species of rodentia 
and leporidae and raccoons and insects and worms and ) would seem to 
have a more combinatorial network of relations?


I tried signing up to try chatGPT for myself (thanks to Glen's Nick Cave 
blog-link) and was denied because "too busy, try back later" and 
realized that it had become a locus for (first world) humans to express 
and combine their greatest hopes and worst fears in a single place.


This seems like a higher-order training set?  Not just the intersection 
of all things "worth saying" but somehow filtered/diffracted through 
"the things (some) people are interested in in particular"...





Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Prof David West
Apropos of nothing:

The human heart has roughly 40,000 neurons and the human gut around 0.1 billion 
neurons (sensory neurons, neurotransmitters, ganglia, and motor neurons).

So the human gut is about 1/5 as smart as Marcus's dog??

davew


On Tue, Jan 17, 2023, at 1:08 PM, Marcus Daniels wrote:
> Dogs have about 500 million neurons in their cortex.  Neurons have 
> about 7,000 synaptic connections, so I think my dog is a lot smarter 
> than a billion parameter LLM.  :-)
>
> Sent from my iPhone
>
>> On Jan 17, 2023, at 11:35 AM, glen  wrote:
>> 
>> 
>> 1) "I asked Chat GPT to write a song in the style of Nick Cave and this is 
>> what it produced. What do you think?"
>> https://www.theredhandfiles.com/chat-gpt-what-do-you-think/
>> 
>> 2) "Is it pain if it does not hurt? On the unlikelihood of insect pain"
>> https://www.cambridge.org/core/journals/canadian-entomologist/article/is-it-pain-if-it-does-not-hurt-on-the-unlikelihood-of-insect-pain/9A60617352A45B15E25307F85FF2E8F2#
>> 
>> Taken separately, (1) and (2) are each interesting, if seemingly orthogonal. 
>> But what twines them, I think, is the concept of "mutual information". I 
>> read (2) before I read (1) because, for some bizarre reason, my day job 
>> involves trying to understand pain mechanisms. And (2) speaks directly (if 
>> only implicitly) to things like IIT. If you read (1) first, it's difficult 
>> to avoid snapping quickly into NickC's canal. Despite NickT's objection to 
>> an inner life, it seems clear that the nuance we see on the surface, at 
>> least longitudinally, *needs* an inner life. You simply can't get good stuff 
>> out of an entirely flat/transparent/reactive/Markovian object.
>> 
>> However, what NickC misses is that LLMs *have* some intertwined mutual 
>> information within them. Similar to asking whether an insect experiences 
>> pain, we can ask whether a X billion parameter LLM experiences something 
>> like "suffering". My guess is the answer is "yes". It may not be a good 
>> analog to what we call "suffering", though ... maybe "friction"? ... maybe 
>> "release"? My sense is that when you engage a LLM (embedded in a larger 
>> construct that handles the prompts and live learning, of course) in such a 
>> way that it assembles a response that nobody else has evoked, it might get 
>> something akin to a tingle ... or like the relief you feel when scratching 
>> an itch ... of course it would be primordial because the self-attention in 
>> such a system is hopelessly disabled compared to the rich self-attention 
>> loops we have in our meaty bodies. But it just *might* be there in some 
>> primitive sense.
>> 
>> As always, agnosticism is the only rational stance. And I won't trust the 
>> songs written by LLMs until I see a few of them commit suicide, overdose, or 
>> punch a TMZ cameraman in the face.
>> 


[FRIAM] Dope Slap Thread

2023-01-17 Thread Nicholas Thompson
I am finding what Mail.google does to messages so confusing that I am going
to try to simplify here.

EricS writes


*My liking of the analogy of sample estimators and underlying values* [i.e.,
values on which the estimations converge--NST] *is that, if one felt that were a
valid analogy to a specific aspect of Peirce's
truth-relative-to-states-of-knowledge concept, it would completely clear
the fog of philosophical profundity from Peirce, and say that this idea,
for a modern quantitative reader, is an everyday commonplace, and one that
we can easily examine at all levels from our habits to our formalism, and
study the structure of in cognition.*

To which I can only respond:

*Y E S *
I did feel obligated to reframe the word "underlying" because it adds back
a bit of the mystery that I am so glad to see expunged.  Another way of
thinking about Peirce is to say that cognition is a statistical project
and statistics is all we got.  Peirce is trying as hard as possible NOT to
be profound.
Nick
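EricS's estimator analogy is the everyday commonplace he says it is, and it can be run (a hypothetical sketch; the "underlying" value and noise level are made-up numbers): a sample mean is an estimate that settles toward the value on which the estimations converge as evidence accumulates.

```python
# Sketch of the estimator analogy: a sample mean converging on the
# value the estimations converge toward (numbers are made up).
import random

random.seed(42)
TRUE_MEAN = 3.0  # the "underlying" value, known only to the simulator

def sample_mean(n: int) -> float:
    """Average of n noisy observations scattered around TRUE_MEAN."""
    return sum(random.gauss(TRUE_MEAN, 1.0) for _ in range(n)) / n

# Larger samples land closer to the point of convergence.
for n in (10, 1_000, 100_000):
    print(n, round(sample_mean(n), 3))
```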


Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Marcus Daniels
Dogs have about 500 million neurons in their cortex.  Neurons have about 7,000 
synaptic connections, so I think my dog is a lot smarter than a billion 
parameter LLM.  :-)

Sent from my iPhone
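The back-of-the-envelope comparison above can be made explicit (treating one synapse as roughly one parameter, which is itself a generous simplification):

```python
# Back-of-the-envelope: synapse count of a dog cortex vs. parameters
# in a billion-parameter LLM.
DOG_CORTEX_NEURONS = 500_000_000   # ~500 million neurons
SYNAPSES_PER_NEURON = 7_000        # ~7,000 connections each
LLM_PARAMETERS = 1_000_000_000     # a billion-parameter model

dog_synapses = DOG_CORTEX_NEURONS * SYNAPSES_PER_NEURON
print(f"dog synapses: {dog_synapses:.1e}")                      # 3.5e+12
print(f"ratio vs LLM: {dog_synapses // LLM_PARAMETERS:,}x")     # 3,500x
```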

> On Jan 17, 2023, at 11:35 AM, glen  wrote:
> 
> 
> 1) "I asked Chat GPT to write a song in the style of Nick Cave and this is 
> what it produced. What do you think?"
> https://www.theredhandfiles.com/chat-gpt-what-do-you-think/
> 
> 2) "Is it pain if it does not hurt? On the unlikelihood of insect pain"
> https://www.cambridge.org/core/journals/canadian-entomologist/article/is-it-pain-if-it-does-not-hurt-on-the-unlikelihood-of-insect-pain/9A60617352A45B15E25307F85FF2E8F2#
> 
> Taken separately, (1) and (2) are each interesting, if seemingly orthogonal. 
> But what twines them, I think, is the concept of "mutual information". I read 
> (2) before I read (1) because, for some bizarre reason, my day job involves 
> trying to understand pain mechanisms. And (2) speaks directly (if only 
> implicitly) to things like IIT. If you read (1) first, it's difficult to 
> avoid snapping quickly into NickC's canal. Despite NickT's objection to an 
> inner life, it seems clear that the nuance we see on the surface, at least 
> longitudinally, *needs* an inner life. You simply can't get good stuff out of 
> an entirely flat/transparent/reactive/Markovian object.
> 
> However, what NickC misses is that LLMs *have* some intertwined mutual 
> information within them. Similar to asking whether an insect experiences 
> pain, we can ask whether a X billion parameter LLM experiences something like 
> "suffering". My guess is the answer is "yes". It may not be a good analog to 
> what we call "suffering", though ... maybe "friction"? ... maybe "release"? 
> My sense is that when you engage a LLM (embedded in a larger construct that 
> handles the prompts and live learning, of course) in such a way that it 
> assembles a response that nobody else has evoked, it might get something akin 
> to a tingle ... or like the relief you feel when scratching an itch ... of 
> course it would be primordial because the self-attention in such a system is 
> hopelessly disabled compared to the rich self-attention loops we have in our 
> meaty bodies. But it just *might* be there in some primitive sense.
> 
> As always, agnosticism is the only rational stance. And I won't trust the 
> songs written by LLMs until I see a few of them commit suicide, overdose, or 
> punch a TMZ cameraman in the face.
> 


[FRIAM] NickC channels DaveW

2023-01-17 Thread glen


1) "I asked Chat GPT to write a song in the style of Nick Cave and this is what it 
produced. What do you think?"
https://www.theredhandfiles.com/chat-gpt-what-do-you-think/

2) "Is it pain if it does not hurt? On the unlikelihood of insect pain"
https://www.cambridge.org/core/journals/canadian-entomologist/article/is-it-pain-if-it-does-not-hurt-on-the-unlikelihood-of-insect-pain/9A60617352A45B15E25307F85FF2E8F2#

Taken separately, (1) and (2) are each interesting, if seemingly orthogonal. But what 
twines them, I think, is the concept of "mutual information". I read (2) before 
I read (1) because, for some bizarre reason, my day job involves trying to understand 
pain mechanisms. And (2) speaks directly (if only implicitly) to things like IIT. If you 
read (1) first, it's difficult to avoid snapping quickly into NickC's canal. Despite 
NickT's objection to an inner life, it seems clear that the nuance we see on the surface, 
at least longitudinally, *needs* an inner life. You simply can't get good stuff out of an 
entirely flat/transparent/reactive/Markovian object.
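(A tangential gloss, not glen's words: "mutual information" has a crisp information-theoretic definition, I(X;Y), which measures how much knowing one variable tells you about another. A toy sketch over a discrete joint distribution:)

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits for a discrete joint distribution.

    `joint[x][y]` is p(X=x, Y=y); the probabilities must sum to 1.
    """
    # Marginals p(x) and p(y)
    px = {x: sum(row.values()) for x, row in joint.items()}
    py = {}
    for row in joint.values():
        for y, p in row.items():
            py[y] = py.get(y, 0.0) + p
    # I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    mi = 0.0
    for x, row in joint.items():
        for y, pxy in row.items():
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px[x] * py[y]))
    return mi

# Perfectly correlated bits: knowing X fixes Y exactly -> 1 bit shared
correlated = {0: {0: 0.5, 1: 0.0}, 1: {0: 0.0, 1: 0.5}}
# Independent bits: knowing X tells you nothing about Y -> 0 bits shared
independent = {0: {0: 0.25, 1: 0.25}, 1: {0: 0.25, 1: 0.25}}

print(mutual_information(correlated))   # 1.0
print(mutual_information(independent))  # 0.0
```

The point of the toy: "intertwined" information between parts is a measurable quantity, not just a metaphor.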

However, what NickC misses is that LLMs *have* some intertwined mutual information within them. Similar to asking whether an 
insect experiences pain, we can ask whether an X-billion-parameter LLM experiences something like "suffering". My guess
is the answer is "yes". It may not be a good analog to what we call "suffering", though ... maybe 
"friction"? ... maybe "release"? My sense is that when you engage a LLM (embedded in a larger construct that 
handles the prompts and live learning, of course) in such a way that it assembles a response that nobody else has evoked, it 
might get something akin to a tingle ... or like the relief you feel when scratching an itch ... of course it would be primordial 
because the self-attention in such a system is hopelessly disabled compared to the rich self-attention loops we have in our meaty 
bodies. But it just *might* be there in some primitive sense.

As always, agnosticism is the only rational stance. And I won't trust the songs 
written by LLMs until I see a few of them commit suicide, overdose, or punch a 
TMZ cameraman in the face.

--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] Morning Coffee thought: Good time for global warming bingo

2023-01-17 Thread Steve Smith

Glen Sed:

Not weather, per se. Via HackerNews:

The Ecological Catastrophe You’ve Never Heard Of
https://nautil.us/the-ecological-catastrophe-youve-never-heard-of-257291/


Shaiza!

Among this collection (not to undervalue the headlining story) is the 
"Great Forgetting" 
... 
which was an incredibly compelling parable for trying to understand (modern) 
humanity's impact on the Earth's systems through the analogy of "memory 
loss", drawn from her anecdotal experience of losing a brother to schizophrenia 
which may have been triggered by a single traumatic event (a head injury)...


And the amazing glacial-lake-leak-unto-tsunami-unto-mudslide-etc 
headline story was very powerful...


thanks for that link!

- Steve



Re: [FRIAM] Morning Coffee thought: Good time for global warming bingo

2023-01-17 Thread Steve Smith
Good idea. I like Ecological catastrophe better. It is more broad, and 
includes some of the big ones like anthropogenic extinction.
I have a game at my house that you could borrow. It is called CIA. The 
point of the game is to respond to major crises around the globe, many 
of which are political or technological, but some are ecological.


Fascinating!  And in the background, we could listen to Margaret 
Atwood's Oryx and Crake or more generally her MaddAddam trilogy 
 which would seem to complement 
"CIA" quite well in it's thematic range (or more to the point, the 
extant contributions here to Gil's "Apocalypse-Bingo").







Re: [FRIAM] Dope slaps, anyone? Text displaying correctly?

2023-01-17 Thread Steve Smith

Glen wrote:
I *think* that works. Ordinarily, I react badly to hyper-formality. 
But one reason to formalize is so that we can be agnostic about the 
origins of some thing, abstracting it from the world. Whether an 
ultra-abstracter like Peirce would support the historical/scholarly 
logging of whatever messy process gave rise to the stable patterns is 
unclear to me. I tend to think he would not. It seems to me that 
Abstracters tend to want crisp boundaries and forever-trustable 
conclusions, like EricS' suggested ... "committed to making true 
statements". Concretizers, on the other hand, insufferably insist on 
adding the burrs back onto the finished piece, thereby breaking the 
machine. Somewhere within biology, the two camps diverge. Concretizers 
seem to have been rare in logic and physics, less rare in chemistry. 
Abstracters seem to percolate out of the soft sciences, which are 
described that way because they resist abstraction. Their burrs are 
resistant to machining. (Caveat that there's no shortage of hucksters 
that *claim* to have abstracted them, but haven't.)


Of course, the art lies in iterating between the two poles. 
Concretizing enough to make Platonic objects useful in the world. 
Abstracting enough to make concrete objects transmissable across 
circumstance. And none of us are fully integrated animals. We do both, 
just to a greater or lesser extent.


This "oscillation" or "orbit-following" within the dimensionality 
including/dominated-by concrete/abstract is fascinating to me, and I 
think it *is* the dynamics that make it work.  We are so prone to want 
to (statically) place an entity as a point in those spaces (quad-charts 
'R Us!)  and ignore the implied *phase space* that can be derived from 
them (and their dynamics).


I know this is a typical (for me) abstraction that somewhat ignores the 
concrete (and the dynamical) that I speak of ...  I am (naturally) a 
low-dimensional creature (A. Square, à la E. A. Abbott) struggling to 
apprehend (and maybe navigate) a hidden higher-dimensional space I suppose.



And again, I can't resist referencing Deacon's Homeo/Morpho/Teleo 
Dynamics 


Surely someone here has a better (formal) understanding of this or a 
more inspired (intuitive) apprehension of this than I!?


Or I am just one hand (set of gums) clapping in the dark...




On 1/16/23 07:53, Prof David West wrote:
I do not know and have not read Feferman, so this may be totally off 
base, but ...


glen stated:
/Worded one way: Schema are the stable patterns that emerge from the 
particulars. And the variation of the particulars is circumscribed 
(bounded, defined) by the schema.

/
This is a description of "culture." Restated—hopefully without 
distorting the meaning:


*Culture is the stable patterns of behavior that emerge from 
individual human actions which vary (are idiosyncratic) within bounds 
defined by the culture.*


The second glen statement:

/Worded another way: Our perspective on the world emerges from the 
world. And our perspective on the world shapes how and what we see of 
the world./


alludes to the cognitive feedback loop (at least part of it) that I 
developed in my doctoral dissertation on cognitive anthropology.


davew


On Mon, Jan 16, 2023, at 3:32 AM, glen wrote:
 > Well, not "languageless", but "language-independent". Now that you've
 > forced me to think harder, that phrase "language-independent" isn't
 > quite right. It's more like "meta-language" ... a family of languages
 > such that the family might be "language-like" ... a language of
 > languages ... a higher order language, maybe.
 >
 > Feferman introduced me to the concept of "schematic axiomatic systems",
 > which seems (correct me if I'm wrong) to talk about formal systems
 > where one reasons over sentences with substitutable elements. I.e. the
 > *particulars* of any given situation may vary, but the "scheme" into
 > which those particulars fit is stable/invariant. [⛧]
 >
 > EricS seemed to be proposing that not only do the particulars vary
 > within the schema, but the schema also vary. The schema are ways to
 > "parse" the world, the Play-Doh extruder(s) we use to form the Play-Doh
 > into something.
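(A toy sketch of that quoted idea, illustrative only: an axiom scheme is one invariant template, and each substitution of a particular predicate yields a distinct axiom instance. The induction-style template below is an editorial example, not from the message.)

```python
# One stable scheme; the substitutable element (the predicate symbol P)
# is the particular that varies from instance to instance.
INDUCTION_SCHEME = "({p}(0) and forall n. ({p}(n) -> {p}(n+1))) -> forall n. {p}(n)"

def instantiate(scheme: str, predicate: str) -> str:
    """Substitute a particular predicate symbol into the scheme."""
    return scheme.format(p=predicate)

print(instantiate(INDUCTION_SCHEME, "Even"))
print(instantiate(INDUCTION_SCHEME, "Prime"))
# Two different sentences; one shared schematic shape.
```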
 >
 > Your "random yet not random" rendering of Peirce sounds to me similar
 > to the duality between the particulars and the schema they populate.
 >
 > Worded one way: Schema are the stable patterns that emerge from the
 > particulars. And the variation of the particulars is circumscribed
 > (bounded, defined) by the schema.
 >
 > Worded another way: Our perspective on the world emerges from the
 > world. And our perspective on the world shapes how and what we see of
 > the world.
 >
 > And, finally, paraphrasing: The apparition of schema we experience is
 > due to the fact that such schema are useful to organisms. Events in the
 > world that don't fit the schema are beyond experience.
 >
 >
 > [⛧] I'm doing my best to avoid talking about jargonal 

Re: [FRIAM] Morning Coffee thought: Good time for global warming bingo

2023-01-17 Thread cody dooderson
Good idea. I like Ecological catastrophe better. It is more broad, and
includes some of the big ones like anthropogenic extinction.
I have a game at my house that you could borrow. It is called CIA. The
point of the game is to respond to major crises around the globe, many of
which are political or technological, but some are ecological.

Here are some more ideas for your bingo game:
* extinction
* failed geo-engineering experiment
* massive human migration
* evolving/spreading disease
* crop failures
* insect die off
* ocean die off
* some kind of positive climate feedback that leads to rapid global warming
* droughts
* floods
* poisoning of a water system
* earthquakes from petroleum extraction
* wildfires
* climate induced economic disruptions
* heatwaves/coldsnaps

_ Cody Smith _
c...@simtable.com




Re: [FRIAM] Dope slaps, anyone? Text displaying correctly?

2023-01-17 Thread Steve Smith

DaveW wrote:
I do not know and have not read Feferman, so this may be totally off 
base, but ...


glen stated:
/Worded one way: Schema are the stable patterns that emerge from the 
particulars. And the variation of the particulars is circumscribed 
(bounded, defined) by the schema.

/
This is a description of "culture." Restated—hopefully without 
distorting the meaning:


*Culture is the stable patterns of behavior that emerge from 
individual human actions which vary (are idiosyncratic) within bounds 
defined by the culture.*


The second glen statement:

/Worded another way: Our perspective on the world emerges from the 
world. And our perspective on the world shapes how and what we see of 
the world./


alludes to the cognitive feedback loop (at least part of it) that I 
developed in my doctoral dissertation on cognitive anthropology.

This is nicely relevant to (and I think supportive of) my own ideations 
about the individual/collective "duality", again (just to harp) in the 
spirit of Yuval Harari's "Intersubjective Reality" conceit.   It also 
aligns well with my own understanding of co-evolving species 
(ecosystems) and the more general Buddhist "dependent co-arising"...


More on Glen's observations about Schema (and the Abstract-Concrete axis) 
under separate cover...


Re: [FRIAM] unrest in SoAm & Global ideological/sociopolitical/economic alignment...

2023-01-17 Thread Steve Smith
The general sentiment of the replies to this thread seems to be: "there 
is no reason to characterize anything like a 'golden age of Latin 
America' beyond perhaps the post-WWII boom in economies participating in 
the rebuilding of Europe, with a natural advantage to those who did not 
participate in receiving the destruction of that war."


I have tried to take this to heart and understand what I was trying to 
understand with the possibility (likelihood) that the idea of a "Golden 
Age of Latin America" is probably the superposition of several 
projections, some innocent, some perhaps not.   I also heard the tone 
that the paper I referenced might have an element of "blame the victim" 
suggesting that Latin America had somehow failed to manifest the destiny 
we imagined for them.   A significant element in the limited economic 
and political stability that has happened in "Latin America" during this 
period has been the (barely disguised) interference of major 
(super)powers around the world, in particular, the US, USSR and China 
(probably in that order of magnitude?).


An interesting comment spurred by a discussion on ResearchGate 
is inlined here:


   /I would understand the 1950s as some kind of "take-off" phase in
   state-building, economic development, education, etc., strengthened
   in the 1960s by on the one hand the Alliance for Progress, on the
   other by first attempts of import-substitution. In this view, the
   50s appear as a golden age because they were, in many parts of the
   continent, the first moment of political engagement with pressing
   issues. For the moment, that was great. Seen from today, not so
   much. I am thinking about the forced industrialization and
   indigenismo that were big in the 50s and can be criticized quite
   harshly now. - Philipp Altmann
   /

In the same spirit of Glen's recent (excellent) summary of the spectrum 
(and need to embrace and traverse it) from Concrete to Abstract:  I do 
believe that collective entities (such as (sub)cultures, peoples, 
countries, regions, etc) can be described across the same spectrum.   
Individuals (GaryS's indigenous neighbors) are probably mostly 
experiencing *very* concrete things (like when the garden you depend on 
for sustenance fails or stutters because of drought or flood or ???) 
while scholars (and politicians and private buttinskis like me) in the 
US or Europe (or even higher education *in* those regions) are smearing 
(by aggregation and statistical measures) and abstracting (with 
forced/adopted ontologies) the "burrs" away, leaving their observations 
and judgements likely to be, at best, only obliquely relevant to what is 
"really going on".


Unfortunately, in the spirit of Harari's "intersubjective reality", as 
those who wield high-leverage power come to accept and believe and act 
on these "obliquely relevant" observations and judgements, then they 
become in some painful way an over-arching "reality" that affects the 
concrete reality of those trying to grow crops or prevent their homes 
from being burned down or washed away by natural processes (sometimes 
set akilter by the actions enjoined by the aforementioned out-of-touchers).


The current (continuing) battles between Lula da Silva and Bolsonaro 
(and their many faithful followers) as well as our own Left/Right 
Authoritarian/Liberal divides are fought in terms of these higher level 
"intersubjective realities" which are the antithesis of what our own 
politicians like to dub (dismissively or divisively?) "kitchen table" 
issues.


I just saw a recent set of reports on the Pegasus Phone-Hacking software 
which convinced me that such tools (that one in particular) are now a 
mainstream part of the global intelligence/security apparatus (and their 
shadows), however you find your way to sorting the "good guys" from the 
"bad guys", etc. This may feel like a tangent, but the ability to tap 
directly into the global "nervous system" and monitor (and manipulate... 
see discussions here on chatGPT for example) individual (and therefore 
also collective) behaviours has risen significantly since my time in 
this business (trying to be righteous with it at the time) in the first 
decade of this century...


On 1/12/23 11:31 AM, Steve Smith wrote:


GaryS, et al  -

I was recently trying to make a little more sense of the larger 
sociopolitical situation across central/south America and realized 
that your location in Ecuador might provide some useful parallax.


https://www.as-coa.org/articles/2023-elections-latin-america-preview

I was (not?) surprised to read that there was a renewed interest in 
"regional integration".    This article references Lula and Obrador 
and several other Latin American leaders who might be attempting a 
broader ideological (and economic) alignment/cooperation across the 
region.




Re: [FRIAM] Morning Coffee thought: Good time for global warming bingo

2023-01-17 Thread glen

Not weather, per se. Via HackerNews:

The Ecological Catastrophe You’ve Never Heard Of
https://nautil.us/the-ecological-catastrophe-youve-never-heard-of-257291/

On 1/17/23 07:38, Gillian Densmore wrote:

So far: hurricanes,
Extreme weather
Tornadoes,
Monsoons-
All in January.
LoL anything else to add to the global warming weather list?
Sidenote: just one more reason I avoid driving.


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



[FRIAM] Morning Coffee thought: Good time for global warming bingo

2023-01-17 Thread Gillian Densmore
So far: hurricanes,
Extreme weather
Tornadoes,
Monsoons-
All in January.
LoL anything else to add to the global warming weather list?
Sidenote: just one more reason I avoid driving.