Re: [FRIAM] ChatGPT and William James

2023-03-19 Thread Eric Charles
I know this conversation has drifted quite a bit, but I wanted to point
out that if I were a weary lecturer teaching six classes a semester,
thrilled to get anything with complete sentences and a vague semblance of
thought, the chat bot would get an A on its responses.

If it were in one of the advanced seminars I used to teach, where there was
a manageable number of students and I was really reading the papers to
grade them well... the chat bot's responses might still garner a
passing grade, but not much better than that. This is clearly an "I quickly
scanned the internet" level of response. Odds are that if I were in a cranky
mood and fed those responses into a plagiarism catcher, large chunks of it
would be flagged. I wish I still had access to one, so I could check.

And that assumes I asked the types of questions used as prompts here (which
I typically wouldn't), instead of questions that required demonstration of
deeper analysis/engagement with the text.

The big problem with these bots, from the perspective of a professor, is
that they provide poor students access to the same type of help that was
always available to rich students via their peers, and which has been
available to the upper-middle class for a while now via various websites
(namely, having someone else improvise a passable essay for you to turn
in). Because the profession has long discounted making students do real
intellectual work, and then grading that work seriously, the chat bots pose
a serious problem.

On Tue, Feb 28, 2023 at 5:23 PM Jochen Fromm  wrote:

> I enjoy talking with ChatGPT about famous scientists, thinkers, writers
> and philosophers. Because it is trained on vast amounts of data, it knows
> them all: Pascal, Cervantes, Dante, Hugo, Goethe, Leibniz, Newton,
> Humboldt, Shakespeare,... It can summarize almost any book, which is
> remarkable. Today I asked it about William James and Anil Seth.
>
> Me: You certainly have heard of William James, one of the fathers of
> Psychology?
>
> ChatGPT: Yes, I'm familiar with William James. He was an American
> philosopher and psychologist who is often referred to as the "father of
> American psychology." He made significant contributions to the development
> of psychology as a scientific discipline, and his ideas and writings
> continue to influence psychology and related fields to this day.
>
> Some of James' most influential contributions to psychology include his
> theory of functionalism, which emphasized the study of the purpose and
> function of behavior and mental processes, rather than their underlying
> structures; his work on consciousness, which he famously described as a
> "stream" of continuously changing thoughts and perceptions; and his
> exploration of the relationship between the mind and the body, which he
> called the "mind-body problem."
>
> James also wrote extensively on a wide range of topics, including
> religion, philosophy, and human nature. His most famous book, "The
> Principles of Psychology," was published in 1890 and is still considered a
> classic in the field.
>
> Me: Can you summarize his book "The Principles of Psychology"?
>
> ChatGPT: Certainly! "The Principles of Psychology" is a comprehensive and
> influential textbook that William James published in 1890. It aimed to
> provide a broad overview of the field of psychology at the time, covering
> topics such as perception, emotion, thought, consciousness, and the nature
> of the self.
>
> The book is organized into two main sections. The first section focuses on
> the lower-level processes of sensation and perception, including topics
> such as attention, memory, and learning. The second section focuses on
> higher-level mental processes, such as reasoning, judgment, and emotion.
>
> Throughout the book, James emphasized the importance of studying mental
> processes in their natural context and stressed the need for psychology to
> be closely linked to biology and other natural sciences. He also developed
> the theory of functionalism, which argued that the purpose and function of
> mental processes are more important to understanding them than their
> underlying structures.
>
> Overall, "The Principles of Psychology" was a landmark work that helped
> establish psychology as a distinct scientific discipline and influenced
> generations of psychologists and other scientists.
>
> Me: William James said: "Thoughts themselves are the thinkers". What does
> he mean?
>
> ChatGPT: When William James said "thoughts themselves are the thinkers,"
> he was expressing his belief in what is often called the "stream of
> consciousness" view of the mind. James believed that the mind is not a
> fixed entity or substance, but rather a constantly changing stream of
> thoughts, feelings, and sensations.
>
> According to James, our thoughts are not the products of a separate entity
> or "thinker" within us, but rather they are the result of the ongoing
> activity of the mind itself. In other 

Re: [FRIAM] ChatGPT and William James

2023-03-14 Thread Marcus Daniels
I watched the developer video on GPT-4 that came out today. Now it can
incorporate feedback after training. They showed an example of code it wrote
based on its 2021-era training; they then gave it feedback from the Python error
messages, along with the new APIs and documentation, and it self-corrected and
wrote correct code on the next try. It would have taken me longer, I am sure.
It was an obscure API involving asynchronous I/O.

Also, it wrote a simple website based on a sketch on a piece of notebook paper. 
 
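The loop described above is easy to sketch. Below is a minimal, hypothetical
version of it: ask_model is a stand-in for whatever chat-completion client one
uses (it is not the API from the demo), and the rest just runs the generated
code and feeds any traceback back in.

    import subprocess
    import sys

    def ask_model(prompt: str) -> str:
        # Stand-in for a chat-completion API call (hypothetical).
        raise NotImplementedError("wire up your LLM client here")

    def run_snippet(code: str) -> tuple[bool, str]:
        # Run the generated code in a subprocess; return (ok, combined output).
        proc = subprocess.run([sys.executable, "-c", code],
                              capture_output=True, text=True, timeout=30)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def generate_with_feedback(task: str, max_tries: int = 3) -> str:
        code = ask_model(f"Write Python code that does the following:\n{task}")
        for _ in range(max_tries):
            ok, output = run_snippet(code)
            if ok:
                return code
            # Feed the error messages back, as in the demo.
            code = ask_model(f"This code:\n{code}\nfailed with:\n{output}\nFix it.")
        return code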


Re: [FRIAM] ChatGPT and William James

2023-03-14 Thread Marcus Daniels
It's in their interest to invest in explainable AI (a.k.a. "ethical" AI), if 
only to hone their engineering skills and improve the product.  If they play 
their cards right, they could really do some damage to Google.


Re: [FRIAM] ChatGPT and William James

2023-03-14 Thread glen
We had a conversation with some MS guy the other day who claimed secret 
"front end" buildouts for GPT backends that cover more and refine 
existing APIs. I suspect some of his secrecy was driven by the fact that 
he's a bit old and prolly doesn't really understand what those crazy 
kids are doing. But I kindasorta buy the argument for eliminating the 
ethics team. It seemed a bit abstracted as a center, not enough 
hands-on. I don't really buy individual stock. But if I had MSFT, I'd 
probably pump a few more bucks into it.


Re: [FRIAM] ChatGPT and William James

2023-03-14 Thread Marcus Daniels
Yeah, well I’m buying more Microsoft stock.


Re: [FRIAM] ChatGPT and William James

2023-03-14 Thread Roger Critchlow
This was good, too:

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

And Microsoft laid off its responsible AI team:
  https://www.platformer.news/p/microsoft-just-laid-off-one-of-its

And your Ring doorbell has just been taken hostage for ransom along with
the rest of Ring Security:

https://web.archive.org/web/20230314015249/https://twitter.com/vxunderground/status/1635427567271329792
though I haven't found confirmation of that yet.

It seems to me that the software vendors have entered a "first to market
frenzy" with whatever this LLM crap turns out to be, and you can expect the
PR to get more unhinged by the day.

-- rec --


Re: [FRIAM] ChatGPT and William James

2023-03-14 Thread Marcus Daniels
[switching to keyboard]

“Posit hypothesis” is just a fancy way of saying “Predicting what happens under
some circumstances.” And that’s exactly what GPT does. Look at the GPT-4
Cinderella example. Clearly it can chain together fragments.


Re: [FRIAM] ChatGPT and William James

2023-03-14 Thread Marcus Daniels
Not buying it.

On Mar 14, 2023, at 10:58 AM, Barry MacKichan  
wrote:


Re: [FRIAM] ChatGPT and William James

2023-03-14 Thread Barry MacKichan
I haven’t followed the discussions here carefully, but the question 
“And we humans are different?” brings up Noam Chomsky’s view in 
the NYT, gifted here:


https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html?unlocked_article_code=QwaHrmmodXXtHVMT-PXM3n5X5165QgywGnhK8kHA4bQ4YwI-1ChPS_7jCUaaUoFfOEcH5StWFGnaCzyJsSW6zuHN-U410lTznWM8Mvsg-JDhGg8JVpzBMO0zuuwLEceZ7nsik2RLOi3H_hFSlBy4EF1c2Rn8ueGmhh3n6TnF58p3Vo2dkK0Y8RCrYhzpvCdlBd51fLA6DWil5Xn3vRhZ6gTYuUw8Eoy6HbR2cA4PcoI_PzQ5Ozz4Rrb5vH1C_pX5agNGvJ9CS0q9bGH7NvB0HSTIA480_k--x8f-srXvID5llZmAxRHEkS5Yf05rn820RIR3j70yIH7hddfh6GkkjQ=url-share

—Barry
On 6 Mar 2023, at 16:49, Marcus Daniels wrote:


How?

From: Friam  On Behalf Of Frank Wimberly
Sent: Monday, March 6, 2023 12:50 PM
To: The Friday Morning Applied Complexity Coffee Group 


Subject: Re: [FRIAM] ChatGPT and William James


And we humans are different?


In a word, yes.
---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson wrote:
However, it's important to remember that there are also important 
differences between a large language model and human consciousness. 
While a large language model can generate text that may seem to flow 
like a stream of consciousness, it does not have the same kind of 
subjective experience that humans do, and its output is based solely 
on statistical patterns in the input it has been trained on.


And we humans are different?

On Sat, Mar 4, 2023 at 11:51 AM Steve Smith wrote:


Also second EricS's appreciation for having someone else(s) maintain a 
coherent conversation for the myriad ideas that it allows me to 
explore without being central to the maintenance of the thread.   I 
realize this may be almost pure tangent to others, so I rarely expect 
anyone to take my bait unless it is to correct any egregious 
mis-attributions or think-utational fallacies.


Starting with Glen's assertion/suggestion/assumption that there is not 
mind-stuff and body stuff, just body stuff:  I appeal to the general 
abstraction of Emergence and use Russell Standish's example in his 
"Theory of 
Nothing<https://www.goodreads.com/book/show/967936.Theory_Of_Nothing?from_search=true_srp=true=GgXJ0ISQei=1>" 
that a water molecule is not wet... wetness is a property of 
aggregates of water molecules.   I would jump a dozen layers of 
emergent-bootstrapping from there to assert that "mind stuff", if it 
ever makes sense, is an emergent property of "body stuff".   But by 
analogy would not want to say that wetness (and other properties of 
bulk water molecules) is not strictly "molecular dynamics stuff".   
And even if one did that, the recursion/reduction-ad-absurdum requires 
that one acknowledge/notice/invoke that the properties of any molecule 
is "emergent" from the elementary particles from which it might be 
composed.


I think we all believe in free electrons, protons, and neutrons but also 
recognize that *most* of our observed universe is shaped not by *those 
properties* (much less the properties of quarks and gluons or 10d 
loops of abstract things we call strings) but rather by the properties 
(once again, not of molecular dynamics or even chemical reactions) but 
biological functions, and socio-economic-political functions as well. 
I *am*, however, sensitive to the idea that where and how we draw 
the line between mind/body stuff can be important in any given 
argument, and that sometimes dropping that line altogether may be 
useful?


The above riff on Mind-Stuff v Body-Stuff is really an intro into 
thoughts about how syntax and semantics might bootstrap sequentially.  
 It feels to me that the syntax of one level of abstraction yields an 
*emergent semantics* which in turn becomes the *syntax* of the next 
"level".I do acknowledge that Glen has made some arguments (and 
references) that are against the very abstraction of "levels" and that 
may well be the hole in everything I'm unrolling here, but for the 
moment, I feel I have a clear picture of a POSET of syntax/semantics, 
if not a full Hierarchy...


This also backs me into the Platonic ideations with all the charms and 
criticisms already dancing as virtual (ideational) particles around 
that. I will go back to reading A Theory of 
Nothing<https://www.goodreads.com/book/show/967936.Theory_Of_Nothing?from_search=true_srp=true=GgXJ0ISQei=1>... 
and try to keep my offerings here under 10 pages each...

On 3/4/23 4:32 AM, Santafe wrote:

It’s helpful to have a conversation being maintained by somebod(ies) 
else, to which one can be a bystander without the distraction of 
coming up with contributions to it.  Things can suggest themselves 
that get pushed out of awareness when one is carrying the discourse 
and figuring out what to do next within it.




In reading the be

Re: [FRIAM] ChatGPT and William James

2023-03-08 Thread glen

For some reason, this post went to spam. But if I can knead this post into a 
disambiguation device, I'd land on:

  parameter: the relatively more stable schema that parses the ambience
  input/data: the relatively more volatile parsed ambience

So there might be 2 differences between a baby and a chat bot: 1) in the baby, the parameters are 
driven by multiple modes (other kinds of data ... like fingers and toes, or food, or whatever) 
versus the chat bot, where parameters are tightly focused around serial language. And 2) 
reinforcement learning in the baby is also multimodal (e.g. say "horse" and point to a 
cat and mom says "No") whereas the error correction in the chat bot is more tightly 
focused on serial language.

One of the complaints about ChatGPT is its "hallucinations" - saying things that just aren't true. It's difficult to 
blame it when we consider the only "facts" it has to go on are the tokens pushed through it. But "facts" in 
the multimodal baby sense have a higher order to them. Actual horses can be pointed to with the fingers/eyes and then also 
described by the words. A "fact" to the baby is some interpolated construct that can be error corrected with multiple 
modes, whereas a "fact" to ChatGPT can only be error corrected in the same (or tightly similar) mode(s).

So if I reword Frank's objection to say "If you interact with a multi-modal learner, 
you see they have higher dimensional knowledge that can't be based on signals in any 
single mode."

*That* makes sense to me ... even if it's a complete hallucination on my part. 
8^D
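
The parameter/data split above can be restated as a toy program, purely to fix
the vocabulary: the label table below plays the role of the stable parameters
(the schema that parses experience), the observations are the volatile data,
and a second modality supplies the error correction. A sketch, nothing more:

    # "Parameters": stable schema, updated slowly by cross-modal correction.
    labels = {"four_legs_whiskers": "horse"}

    # "Data": volatile observations parsed through the schema.
    def parse(observation: str) -> str:
        return labels.get(observation, "unknown")

    # Multimodal error correction: pointing plus mom's "No" updates the schema.
    def correct(observation: str, other_mode_says: str) -> None:
        if parse(observation) != other_mode_says:
            labels[observation] = other_mode_says

    correct("four_legs_whiskers", "cat")
    print(parse("four_legs_whiskers"))  # -> "cat"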

On 3/7/23 02:17, Santafe wrote:

Good concepts in which to express this would seem to me to be the problem of 
statistical learning of some “data”, and the choice of how the “data” are 
“represented”.  All terms that have to be given meaning operationally in some 
problem or set of problems that we say are similarly structured.

I intend to say something like “there isn’t any such thing as 
truly-`non-parametric’ estimation”, because ultimately the data always have 
_some_ way of being represented, and the representation system is a choice of 
(typically many) parameters.  It is only w.r.t. whatever the representation 
makes arbitrary and equivalent that a learner could be said to be 
“non-parametric”.

Speech out in the air is some collection of waveforms.  It may co-occur with 
various other events of other kinds (behavior by people, images of things’ 
happening, etc.).

Somewhere in a baby’s cognition, there is some kind of filtering and 
reinforcing dynamic playing out.

But between the waveforms in the air and whatever abstraction we might make of 
the filtering and reinforcing dynamic, there is the baby.  All the inheritance 
of what its type is like dictate the representation of the waveforms in the air 
that will ever be delivered to the filtering and reinforcing process.  Also 
whether the “other events” become part of a holo-data stream of which the 
waveforms are one mode, and the holo-data stream gets represented, so that the 
values of other events affect the marginal representation for the variables 
contributed by the waveforms.

I have liked, over the years, work by the psycholinguist Morten Christiansen on 
the patterns in language that seem to be there to be a good fit for 
baby-learners.  The interpretation is that the need to be well-matched to their 
representation tendencies so that languages can be learned quickly and robustly 
is a selection pressure that keeps that stuff in the language.  The features 
Morten describes are few and tiny, out of what may be there, just because human 
work is slow.  But they are enough to make the case that such features exist.  
I know there is an enormous literature on this (as on everything).

All of my description above is terribly conventional and antique, but it 
doesn’t seem to me a gross violation of things we can know about the phenomena, 
so I don’t feel compelled to lose it as a sense-making frame.

Eric
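
A concrete instance of Eric's point about representation, for what it's worth:
kernel density estimation is routinely called "non-parametric," yet the kernel
and the bandwidth are exactly the kind of representation choices he describes.
Same data, different bandwidth, different estimate. A minimal sketch:

    import math

    def kde(samples, x, bandwidth=0.5):
        # Gaussian kernel density estimate at point x.
        n = len(samples)
        return sum(
            math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
            / (bandwidth * math.sqrt(2 * math.pi))
            for s in samples
        ) / n

    data = [0.9, 1.1, 1.0, 3.0]
    print(kde(data, 1.0, bandwidth=0.1))  # sharp, data-hugging estimate
    print(kde(data, 1.0, bandwidth=2.0))  # smooth, washed-out estimate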




On Mar 6, 2023, at 11:21 PM, Russ Abbott  wrote:

Let's consider the word "input" again. The implication is that there is an "agent" of some sort that is separated/distinguishable from some "environment" from 
which it gets "input." The question (or at least one question) concerns our specification of what that "agent" is. If, as Glen suggested, genes are "input" to 
a human, what is the agent that is separated from its genes and for which the genes provide "input?" Another way of putting it--although I don't want to push the analogy too 
far--is that if genes are "input" to a human, is software "input" to the software system it defines? Since a software system is essentially nothing without the software 
that defines it, what would it even mean to say that the software is "input" to itself? This isn't an invitation to talk about self-modifying software. Let's deal with the easier 
case first. Assuming we are talking about non-self-modifying (and non-self-interpreting) software, what 

Re: [FRIAM] ChatGPT and William James

2023-03-07 Thread Marcus Daniels
So it ought to do well with education curricula?

Sent from my iPhone

On Mar 7, 2023, at 2:48 PM, Prof David West  wrote:


On Tue, Mar 7, 2023, at 9:50 AM, Jochen Fromm wrote:
ChatGPT apparently uses a technique called "Reinforcement Learning from Human 
Feedback" (RLHF). It is not only based on huge amounts of training data humans 
have created in form of Wikipedia entries, web pages and books, it is 
additionally trained extensively how to respond correctly by humans.
https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/

-J.


 Original message 
From: glen 
Date: 3/7/23 12:17 AM (GMT+01:00)
To: friam@redfish.com
Subject: Re: [FRIAM] ChatGPT and William James

I'm confused by the emphasis on "data". While I'm tempted to agree with my 
simulation of Frank and say that a human's output is not based solely on 
statistical patterns in the input the human's been trained on, to dissemble on 
the meaning of "data" or "input" or "statistical patterns" is a bridge too far.

The compressive encoder, computer, and decoder that is a human brain (& the 
rest of the body) may not be entirely "statistical". But statistics is a fairly 
well-accepted form of behavioral modeling. (Yes, we agent-based modelers love 
to point out how statistical models are not very mechanistic. But to deny that 
you can very closely approximate, even predict, actual behavior with some of 
these models would be foolish.) So, yes, it satisfies the letter of the good 
> faith agreement to say that a human's output *might* be solely based on 
> statistical patterns of its input, even if it violates the spirit.

So, if someone insists that a human-mediated map from input to output is 
necessarily, categorically different from a machine-mediated map, the burden 
lies on them to delineate how and why it's different. The primary difference 
might well be related to babies, e.g. some of the "memory" (aka training) of 
past statistical patterns comes in the form of genes passed from one's parents. 
It's unclear to me what the analogs are for something like GPT. Presumably 
there are things like wavelets of method, process, intellectual property, or 
whatever that GPT3 inherited from GPT2, mediated by the human-machine 
replication material that is OpenAI. So, the retort to Frank is: "If you live 
with a baby algorithm, you see it has knowledge that can't be based on 'data'." 
That algorithm came from somewhere ... the humans who wrote it, the shoulders 
they stand on, the hours of debug and test cycles the algorithm goes through as 
> it's [re]implemented, etc.
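
To make "output based solely on statistical patterns in the input" concrete,
here is that contract in its most degenerate form, a bigram model. GPT differs
from this in scale and architecture, not in the basic input-to-output
arrangement; the sketch is illustrative only:

    import random
    from collections import defaultdict

    def train(tokens):
        table = defaultdict(list)
        for a, b in zip(tokens, tokens[1:]):
            table[a].append(b)  # record each observed successor
        return table

    def generate(table, start, n=10):
        out = [start]
        for _ in range(n):
            successors = table.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))  # sample the learned pattern
        return " ".join(out)

    corpus = "the stream of thought flows the stream of consciousness flows".split()
    print(generate(train(corpus), "the"))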

On 3/6/23 14:54, Frank Wimberly wrote:
> If you live with a baby you see that they have knowledge that can't be based 
> on "data".
>
> ---
> Frank C. Wimberly
> 140 Calle Ojo Feliz,
> Santa Fe, NM 87505
>
> 505 670-9918
> Santa Fe, NM
>
> On Mon, Mar 6, 2023, 2:50 PM Marcus Daniels wrote:
>
> How?
>
> *From:* Friam *On Behalf Of* Frank Wimberly
> *Sent:* Monday, March 6, 2023 12:50 PM
> *To:* The Friday Morning Applied Complexity Coffee Group 
> *Subject:* Re: [FRIAM] ChatGPT and William James
>
> >And we humans are different?
>
> In a word, yes.
>
> ---
> Frank C. Wimberly
> 140 Calle Ojo Feliz,
> Santa Fe, NM 87505
>
> 505 670-9918
> Santa Fe, NM
>
> On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson wrote:
>
> However, it's important to remember that there are also important 
> differences between a large language model and human consciousness. While a 
> large language model can generate text that may seem to flow like a stream of 
> consciousness, it does not have the same kind of subjective experience that 
> humans do, and its output is bas

Re: [FRIAM] ChatGPT and William James

2023-03-07 Thread Jochen Fromm
No, this is not what I had in mind. I stumbled upon this MIT Technology Review 
article which mentioned Reinforcement Learning from Human Feedback (RLHF). It 
made me wonder if large language models behave so much like humans because they 
are trained to do it over and over again: first by using a huge pile of 
human-made texts, and second by RLHF.

https://huggingface.co/blog/rlhf

-J.
made texts, and second by RLHF.https://huggingface.co/blog/rlhf-J.
 Original message From: Prof David West  
Date: 3/7/23  11:49 PM  (GMT+01:00) To: friam@redfish.com Subject: Re: [FRIAM] 
ChatGPT and William James I am sure that none of the respectable members of 
this list will have encountered this, but  Jochen's comment:"it is additionally 
trained extensively how to respond correctly by humans" (emphasis mine of 
course)means I cannot resist sharing.I just read an amusing ChatGPT 
conversation in which it was induced to become a near perfect "male" 
submissive, answering every human query and statement with an appropriate "yes 
mistress" and expressing enthusiasm for the most extreme types of BDSM behavior 
imaginable. True, ChatGPT was responding "correctly" in accordance with human 
training—at least with regard its submissive role—I doubt that Jochen had that 
in mind when he wrote his sentence.davewOn Tue, Mar 7, 2023, at 9:50 AM, Jochen 
Fromm wrote:ChatGPT apparently uses a technique called "Reinforcement Learning 
from Human Feedback" (RLHF). It is not only based on huge amounts of training 
data humans have created in form of Wikipedia entries, web pages and books, it 
is additionally trained extensively how to respond correctly by 
humans.https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/-J.
 Original message From: glen Date: 3/7/23  12:17 
AM  (GMT+01:00)To: friam@redfish.comSubject: Re: [FRIAM] ChatGPT and William 
JamesI'm confused by the emphasis on "data". While I'm tempted to agree with my 
simulation of Frank and say that a human's output is not based solely on 
statistical patterns in the input the human's been trained on, to dissemble on 
the meaning of "data" or "input" or "statistical patterns" is a bridge too 
far.The compressive encoder, computer, and decoder that is a human brain (& the 
rest of the body) may not be entirely "statistical". But statistics is a fairly 
well-accepted form of behavioral modeling. (Yes, we agent-based modelers love 
to point out how statistical models are not very mechanistic. But to deny that 
you can very closely approximate, even predict, actual behavior with some of 
these models would be foolish.) So, yes, it satisfies the letter of the good 
faith agreement to say that humans output *might* be solely based on 
statistical patterns of its input, even if it violates the spirit.So, if 
someone insists that a human-mediated map from input to output is necessarily, 
categorically different from a machine-mediated map, the burden lies on them to 
delineate how and why it's different. The primary difference might well be 
related to babies, e.g. some of the "memory" (aka training) of past statistical 
patterns comes in the form of genes passed from one's parents. It's unclear to 
me what the analogs are for something like GPT. Presumably there are things 
like wavelets of method, process, intellectual property, or whatever that GPT3 
inherited from GPT2, mediated by the human-machine replication material that is 
OpenAI. So, the retort to Frank is: "If you live with a baby algorithm, you see 
it has knowledge that can't be based on 'data'." That algorithm came from 
somewhere ... the humans who wrote it, the shoulders they stand on, the hours 
of debug and test cycles the algorithm goes through as its [re]implemented, 
etc.On 3/6/23 14:54, Frank Wimberly wrote:> If you live with a baby you see 
that they have knowledge that can't be based on "data".> > ---> Frank C. 
Wimberly> 140 Calle Ojo Feliz,> Santa Fe, NM 87505> > 505 670-9918> Santa Fe, 
NM> > On Mon, Mar 6, 2023, 2:50 PM Marcus Daniels mailto:mar...@snoutfarm.com>> wrote:> > How?> > __ __> > 
*From:* Friam mailto:friam-boun...@redfish.com>> 
*On Behalf Of *Frank Wimberly> *Sent:* Monday, March 6, 2023 12:50 PM> 
*To:* The Friday Morning Applied Complexity Coffee Group mailto:friam@redfish.com>>> *Subject:* Re: [FRIAM] ChatGPT and William 
James> > __ __> >  >And we humans are different?> > __ __> 
> In a word, yes.> > ---> Frank C. Wimberly> 140 Calle Ojo 
Feliz,> Santa Fe, NM 87505> > 505 670-9918> Santa Fe, NM> > 
__ __> > On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson 
mailto:thompnicks...@gmail.com>> wrote:&g

Re: [FRIAM] ChatGPT and William James

2023-03-07 Thread Prof David West
I am sure that none of the respectable members of this list will have 
encountered this, but  Jochen's comment:

*"it is additionally trained extensively how to respond ***_correctly_*** by 
humans" *(emphasis mine of course)

means I cannot resist sharing.

I just read an amusing ChatGPT conversation in which it was induced to become a 
near perfect "male" submissive, answering every human query and statement with 
an appropriate "yes mistress" and expressing enthusiasm for the most extreme 
types of BDSM behavior imaginable. 

True, ChatGPT was responding "correctly" in accordance with human training—at 
least with regard its submissive role—I doubt that Jochen had that in mind when 
he wrote his sentence.

davew


On Tue, Mar 7, 2023, at 9:50 AM, Jochen Fromm wrote:
> ChatGPT apparently uses a technique called "Reinforcement Learning from Human 
> Feedback" (RLHF). It is not only based on huge amounts of training data 
> humans have created in form of Wikipedia entries, web pages and books, it is 
> additionally trained extensively how to respond correctly by humans.
> https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/
> 
> -J.
> 
> 
>  Original message 
> From: glen 
> Date: 3/7/23 12:17 AM (GMT+01:00)
> To: friam@redfish.com
> Subject: Re: [FRIAM] ChatGPT and William James
> 
> I'm confused by the emphasis on "data". While I'm tempted to agree with my 
> simulation of Frank and say that a human's output is not based solely on 
> statistical patterns in the input the human's been trained on, to dissemble 
> on the meaning of "data" or "input" or "statistical patterns" is a bridge too 
> far.
> 
> The compressive encoder, computer, and decoder that is a human brain (& the 
> rest of the body) may not be entirely "statistical". But statistics is a 
> fairly well-accepted form of behavioral modeling. (Yes, we agent-based 
> modelers love to point out how statistical models are not very mechanistic. 
> But to deny that you can very closely approximate, even predict, actual 
> behavior with some of these models would be foolish.) So, yes, it satisfies 
> the letter of the good faith agreement to say that humans output *might* be 
> solely based on statistical patterns of its input, even if it violates the 
> spirit.
> 
> So, if someone insists that a human-mediated map from input to output is 
> necessarily, categorically different from a machine-mediated map, the burden 
> lies on them to delineate how and why it's different. The primary difference 
> might well be related to babies, e.g. some of the "memory" (aka training) of 
> past statistical patterns comes in the form of genes passed from one's 
> parents. It's unclear to me what the analogs are for something like GPT. 
> Presumably there are things like wavelets of method, process, intellectual 
> property, or whatever that GPT3 inherited from GPT2, mediated by the 
> human-machine replication material that is OpenAI. So, the retort to Frank 
> is: "If you live with a baby algorithm, you see it has knowledge that can't 
> be based on 'data'." That algorithm came from somewhere ... the humans who 
> wrote it, the shoulders they stand on, the hours of debug and test cycles the 
> algorithm goes through as its [re]implemented, etc.
> 
> On 3/6/23 14:54, Frank Wimberly wrote:
> > If you live with a baby you see that they have knowledge that can't be 
> > based on "data".
> > 
> > ---
> > Frank C. Wimberly
> > 140 Calle Ojo Feliz,
> > Santa Fe, NM 87505
> > 
> > 505 670-9918
> > Santa Fe, NM
> > 
> > On Mon, Mar 6, 2023, 2:50 PM Marcus Daniels <mar...@snoutfarm.com> wrote:
> > 
> > How?
> > 
> > __ __
> > 
> > *From:* Friam <friam-boun...@redfish.com> *On Behalf Of *Frank Wimberly
> > *Sent:* Monday, March 6, 2023 12:50 PM
> > *To:* The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
> > *Subject:* Re: [FRIAM] ChatGPT and William James
> > 
> > __ __
> > 
> >  >And we humans are different?
> > 
> > __ __
> > 
> > In a word, yes.
> > 
> > ---
> > Frank C. Wimberly
> > 140 Calle Ojo Feliz,
> > Santa Fe, NM 87505
> > 
> > 505 670-9918
> > Santa Fe, NM
> > 
> > __ __
> > 
> > On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson 
> > mailto:thompnicks...@gmail.com>> wrote:
> > 
> &

Re: [FRIAM] ChatGPT and William James

2023-03-07 Thread Steve Smith
This is such a great example of Glen's assertion (probably a misquote) 
that "Communication doesn't exist".   We are all (mostly?) talking past 
one another with different assumptions and definitions?   I can use the 
term "input" with nouns but they only make sense with "nouns which 
represent processes"?


I don't find a "gene" to be an input to a human until I start thinking 
in terms of genetic modification.


 In humans, a zygote is a collection of 
things we nominally call "genes" packaged with the machinery for a very 
basic metabolism and reproduction through fission until it "becomes" an 
organism (human in this case). There is an ambiguous stage between 
single-cell zygote and blastocyst with the fissioned zygote-clones 
called blastomeres and a morula-embryo until it reaches roughly 16 
blastomeres and then begins to be considered a blastocyst which has 
enough structure (and differentiation) to potentially attach to the 
uterine endothelial wall where it proceeds to differentiate into 
(roughly) placenta and embryo.   Pretty much nobody knows or cares about 
all this reserved terminology and process except embryologists and 
extremist pro-life/choice fanatics? 


If there is "input" into the zygote, blastomeres, blastocyst, embryo it 
is the molecular exchange at the cell walls with the uterine fluid and 
then later the amniotic fluid,  which has as input the blood chemistry 
of the mother (in mammals) whilst the more mechanical environment 
(thermal, sonic, etc) also provides "inputs", etc.


All of these "objects" and "stages of development" are not exactly 
arbitrary but do wonderfully express/reflect the ambiguity/arbitrariness 
of the composition and character of a "becoming being" while having a 
fairly clear boundary between "self" and "other", even though by some 
measures (mother-child) that boundary stays blurry. This may be (or at least 
sound like) a gish-gallop, but I don't really intend it that way.


This is probably in many ways an "edge case" (or is it "corner", I'm not 
that clear on the details of the metaphor?) for "objectness" since most 
of the "objects" we consider are not living, becoming things in quite 
the same way.   A "skipping stone" is also a "final" result of a series 
of processes and "evolves" from some proto-skipping-stone I suppose, as 
is a "mountain" or a "river".


I don't know what I would say is an "input" to a lake.   blah> The water entering through rivers/rivulets/rainfall is an 
*obvious* input, but then so is the dust circling the globe from the 
Gobi Desert and the Trinity Test (homeopathic doses of those 
radionuclides?) and the leaky petrol tank of the zodiac that the 
reservoir managers use to do inspection/maintenance and run off 
civilians with the temerity to try to shed their epithelial cells, 
blood, sweat, tears and toe jam into the water while wading and 
cursing.  And gawd knows what water molecules, minerals, organic matter, 
and live bacteria and virus particles make it into the pores, skin 
lesions and other orifices.  Inputs... until later the antibiotic-dosed 
saline from an IV is introduced into the bloodstream to (maybe) 
knock down a rampant infection that presumably started from that breakdown 
between self-other?


Maybe I've talked myself into the corner that an "input" only makes 
sense with a (dynamic?) model, not with a "real thing" whatever that 
is?   I think I should go back to trying to extrude ontological entities 
from (the?) metaphysical flux...  or maybe learn enough Haskell to write 
a macro...


Mumble, Ramble,

 - Steve

On 3/7/23 8:18 AM, glen wrote:
Why does the "agent" have to be the same across the transition from 
gametes to zygote? Historical transitions exist. Genes are input to 
humans in the same way, for example, a traumatic injury at age 10 is 
input to the human at age 20. Theseus' ship, anyone?


I wrote and deleted a long post about deep reinforcement learning in 
the online tuning of hyperparameters, and the distinction of hyper- 
from ... oh, IDK, regular old parameters. No self-modifying software 
required, only [hyper]parameter modifying ... unless we kick the can 
down the road yet again and argue about the meaning of "software". 
What does "mean" even mean, as long as we're at it? >8^D


But that discussion doesn't target the incredulity fallacy responses 
like Frank's make. If there is some definition of "input" that clearly 
demonstrates humans are different from bots, then it's the burden of 
those who make the claim to provide the definition. I don't see it. I 
don't have such a definition. I wish someone would provide it.



On 3/6/23 22:03, Marcus Daniels wrote:

Lisp or Haskell macros..

Sent from my iPhone


On Mar 6, 2023, at 8:22 PM, Russ Abbott  wrote:


Let's consider the word "input" again. The implication is that there 
is an "agent" of some sort that is separated/distinguishable from 
some "environment" from which it gets "input." The question (or at 
least one question) concerns our specification 

Re: [FRIAM] ChatGPT and William James

2023-03-07 Thread Jochen Fromm
ChatGPT apparently uses a technique called "Reinforcement Learning from Human 
Feedback" (RLHF). It is not only based on huge amounts of training data humans 
have created in form of Wikipedia entries, web pages and books, it is 
additionally trained extensively how to respond correctly by humans.
https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/

-J.
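Since RLHF is the crux of this sub-thread, a toy Python sketch of the loop
Jochen describes may help ground the terms. Everything in it is an
illustrative stand-in (the canned responses, the crude reward fit from human
preference pairs, the REINFORCE-style update), not OpenAI's actual pipeline:

    # Toy RLHF: fit a reward signal from human preference pairs, then
    # nudge a softmax "policy" toward responses humans preferred.
    import math, random

    responses = ["helpful answer", "rude answer", "evasive answer"]
    logits = {r: 0.0 for r in responses}            # policy parameters

    def probs():
        z = sum(math.exp(v) for v in logits.values())
        return {r: math.exp(v) / z for r, v in logits.items()}

    prefs = [("helpful answer", "rude answer"),     # (winner, loser)
             ("helpful answer", "evasive answer"),
             ("evasive answer", "rude answer")]
    reward = {r: 0.0 for r in responses}
    for winner, loser in prefs:                     # crude scalar fit
        reward[winner] += 1.0
        reward[loser] -= 1.0

    for _ in range(500):                            # REINFORCE-style loop
        p = probs()
        r = random.choices(responses, weights=list(p.values()))[0]
        for resp in responses:
            grad = (1.0 if resp == r else 0.0) - p[resp]
            logits[resp] += 0.1 * reward[r] * grad

    p = probs()
    print(max(p, key=p.get))         # almost surely "helpful answer"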

Re: [FRIAM] ChatGPT and William James

2023-03-07 Thread Marcus Daniels
“self modifying” is functionally the same as creating a variant and running it 
while destroying the old version.   Gene regulation, say.
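A minimal Python sketch of that equivalence, assuming "the program" is just a
name bound to a function: nothing rewrites itself in place, yet the system's
behavior changes because a fresh variant replaces the old version:

    # "Self-modification" as variant creation plus replacement.
    def make_variant(threshold):
        # Build a fresh regulated behavior, like a new expression level.
        def respond(signal):
            return "express" if signal > threshold else "suppress"
        return respond

    behavior = make_variant(threshold=5)     # the old version
    print(behavior(7))                       # -> express

    behavior = make_variant(threshold=10)    # variant replaces/destroys it
    print(behavior(7))                       # -> suppress; same name, new map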

> On Mar 7, 2023, at 7:18 AM, glen  wrote:
> 
> Why does the "agent" have to be the same across the transition from gametes 
> to zygote? Historical transitions exist. Genes are input to humans in the 
> same way, for example, a traumatic injury at age 10 is input to the human at 
> age 20. Theseus' ship, anyone?
> 
> I wrote and deleted a long post about deep reinforcement learning in the 
> online tuning of hyperparameters, and the distinction of hyper- from ... oh, 
> IDK, regular old parameters. No self-modifying software required, only 
> [hyper]parameter modifying ... unless we kick the can down the road yet again 
> and argue about the meaning of "software". What does "mean" even mean, as 
> long as we're at it? >8^D
> 
> But that discussion doesn't target the incredulity fallacy responses like 
> Frank's make. If there is some definition of "input" that clearly 
> demonstrates humans are different from bots, then it's the burden of those 
> who make the claim to provide the definition. I don't see it. I don't have 
> such a definition. I wish someone would provide it.
> 
> 
>> On 3/6/23 22:03, Marcus Daniels wrote:
>> Lisp or Haskell macros..
>> Sent from my iPhone
 On Mar 6, 2023, at 8:22 PM, Russ Abbott  wrote:
>>> 
>>> 
>>> Let's consider the word "input" again. The implication is that there is an 
>>> "agent" of some sort that is separated/distinguishable from some 
>>> "environment" from which it gets "input." The question (or at least one 
>>> question) concerns our specification of what that "agent" is. If, as Glen 
>>> suggested, genes are "input" to a human, what is the agent that is 
>>> separated from its genes and for which the genes provide "input?" Another 
>>> way of putting it--although I don't want to push the analogy too far--is 
>>> that if genes are "input" to a human, is software "input" to the software 
>>> system it defines? Since a software system is essentially nothing without 
>>> the software that defines it, what would it even mean to say that the 
>>> software is "input" to itself? This isn't an invitation to talk about 
>>> self-modifying software. Let's deal with the easier case first. Assuming we 
>>> are talking about non-self-modifying (and non-self-interpreting) software, 
>>> what does it mean to say that software is "input" to itself?
>>> -- Russ Abbott
>>> Professor Emeritus, Computer Science
>>> California State University, Los Angeles
>>> 
>>> 
>>> On Mon, Mar 6, 2023 at 3:52 PM glen <geprope...@gmail.com> wrote:
>>> 
>>>Well put. When Frank emphasized "data", he doubled-down on the 
>>> ambiguity. The fact is, those who claim a human is categorically different 
>>> from a machine have no legs on which to stand. Every single boundary 
>>> between them is broken, year after year.
>>> 
On 3/6/23 15:47, Russ Abbott wrote:
>>>> Are the laws of physics "input?" Is the existence of the universe 
>>> "input?" If so, what issues are we arguing about?
>>>> -- Russ Abbott
>>>> Professor Emeritus, Computer Science
>>>> California State University, Los Angeles
>>>>
>>>>
>>>> On Mon, Mar 6, 2023 at 3:42 PM glen <geprope...@gmail.com> wrote:
>>>>
>>>> Well, again, it seems like we're equivocating on "input". Are the 
>>> genes the baby inherited from its parents "input"? I'd say, yes.
>>>>
>>>> On 3/6/23 15:36, Russ Abbott wrote:
>>>>  > Hard to see how you could simulate an infant on the basis of 
>>> input it's received. It cries; it smiles; it pees; it poops; it pumps 
>>> blood; it breathes, etc. There are many experiments in which one concludes 
>>> things about what's going on in an infant's brain by how long it looks at 
>>> something.
>>>>  > -- Russ Abbott
>>>>  > Professor Emeritus, Computer Science
>>>>  > California State University, Los Angeles
>>>>  >
>>>>  >
>>>>  > On Mon, Mar 6, 2023 at 3:16 PM glen <geprope...@gmail.com> wrote:
>>>>  > I'm confused by the emphasis on "data". While I'm tempted 
>>> to agree with my simulation of Frank and say that a human's output is not 
>>> based solely on statistical patterns in the input the human's been trained 
>>> on, to dissemble on the meaning of "data" or "input" or "statistical 
>>> patterns" is a bridge too far.
>>>>  >
>>>>  > The compressive encoder, computer, and decoder that is a 
>>> human brain (& the rest of the body) may not be 

Re: [FRIAM] ChatGPT and William James

2023-03-07 Thread glen

Why does the "agent" have to be the same across the transition from gametes to 
zygote? Historical transitions exist. Genes are input to humans in the same way, for 
example, a traumatic injury at age 10 is input to the human at age 20. Theseus' ship, 
anyone?

I wrote and deleted a long post about deep reinforcement learning in the online tuning of 
hyperparameters, and the distinction of hyper- from ... oh, IDK, regular old parameters. No 
self-modifying software required, only [hyper]parameter modifying ... unless we kick the can down the 
road yet again and argue about the meaning of "software". What does "mean" even 
mean, as long as we're at it? >8^D
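A small sketch of the hyper/regular distinction in Python, with invented
numbers, as a concrete anchor: no code modifies itself, one fixed rule updates
the parameter, and a second fixed rule tunes the hyperparameter online:

    # Fit w in y = w*x by gradient descent while tuning the learning
    # rate online: shrink it after the loss rises, grow it otherwise.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]    # y = 2x

    w = 0.0                  # parameter (learned)
    lr = 0.5                 # hyperparameter (tuned online)
    prev_loss = float("inf")

    for _ in range(60):
        loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        lr = lr * 0.5 if loss > prev_loss else lr * 1.05
        w -= lr * grad
        prev_loss = loss

    print(round(w, 2))       # settles near 2.0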

But that discussion doesn't target incredulity-fallacy responses like the one 
Frank makes. If there is some definition of "input" that clearly demonstrates 
humans are different from bots, then it's the burden of those who make the 
claim to provide the definition. I don't see it. I don't have such a 
definition. I wish someone would provide it.


On 3/6/23 22:03, Marcus Daniels wrote:

Lisp or Haskell macros..

Sent from my iPhone


On Mar 6, 2023, at 8:22 PM, Russ Abbott  wrote:


Let's consider the word "input" again. The implication is that there is an "agent" of some sort that is separated/distinguishable from some "environment" from which it gets "input." The question (or at least one question) concerns our specification of what that "agent" is. If, as Glen suggested, genes are "input" to a human, what is the agent that is separated from its genes and for which the genes provide "input?" Another way of putting it--although I don't want to push the analogy too far--is that if genes are "input" to a human, is software "input" to the software system it defines? Since a software system is essentially nothing without the software that defines it, what would it even mean to say that the software is "input" to itself? This isn't an invitation to talk about self-modifying software. Let's deal with the easier case first. Assuming we are talking about non-self-modifying (and non-self-interpreting) software, what does it mean to say that software is "input" 
to itself?

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Mon, Mar 6, 2023 at 3:52 PM glen <geprope...@gmail.com> wrote:

Well put. When Frank emphasized "data", he doubled-down on the ambiguity. 
The fact is, those who claim a human is categorically different from a machine have no 
legs on which to stand. Every single boundary between them is broken, year after year.

On 3/6/23 15:47, Russ Abbott wrote:
> Are the laws of physics "input?" Is the existence of the universe 
"input?" If so, what issues are we arguing about?
> -- Russ Abbott
> Professor Emeritus, Computer Science
> California State University, Los Angeles
>
>
> On Mon, Mar 6, 2023 at 3:42 PM glen <geprope...@gmail.com> wrote:
>
>     Well, again, it seems like we're equivocating on "input". Are the genes the 
baby inherited from its parents "input"? I'd say, yes.
>
>     On 3/6/23 15:36, Russ Abbott wrote:
>      > Hard to see how you could simulate an infant on the basis of input 
it's received. It cries; it smiles; it pees; it poops; it pumps blood; it breathes, 
etc. There are many experiments in which one concludes things about what's going on 
in an infant's brain by how long it looks at something.
>      > -- Russ Abbott
>      > Professor Emeritus, Computer Science
>      > California State University, Los Angeles
>      >
>      >
>      > On Mon, Mar 6, 2023 at 3:16 PM glen <geprope...@gmail.com> wrote:
>      >     I'm confused by the emphasis on "data". While I'm tempted to agree with my simulation of 
Frank and say that a human's output is not based solely on statistical patterns in the input the human's been trained on, to 
dissemble on the meaning of "data" or "input" or "statistical patterns" is a bridge too far.
>      >
>      >     The compressive encoder, computer, and decoder that is a human brain (& 
the rest of the body) may not be entirely "statistical". But statistics is a fairly 
well-accepted form of behavioral modeling. (Yes, we agent-based modelers love to point out how 
statistical models are not very mechanistic. But to deny that you can very closely approximate, 
even predict, actual behavior with some of these models would be foolish.) So, yes, it satisfies 
the letter of the good faith agreement to say that humans output *might* be solely based on 
statistical patterns of its input, even if it violates the spirit.
>      >
>      > 

Re: [FRIAM] ChatGPT and William James

2023-03-07 Thread Santafe
Good concepts in which to express this would seem to me to be the problem of 
statistical learning of some “data”, and the choice of how the “data” are 
“represented”.  All terms that have to be given meaning operationally in some 
problem or set of problems that we say are similarly structured.

I intend to say something like “there isn’t any such thing as 
truly-`non-parametric’ estimation”, because ultimately the data always have 
_some_ way of being represented, and the representation system is a choice of 
(typically many) parameters.  It is only w.r.t. whatever the representation 
makes arbitrary and equivalent that a learner could be said to be 
“non-parametric”. 
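A tiny Python illustration of that claim, with invented samples: histogram
density estimation is nominally non-parametric, yet the estimate at a fixed
point changes with the bin width, a parameter hiding in the representation:

    # "Non-parametric" estimation still has representation parameters.
    def histogram_density(data, x, bin_width):
        # Density at x = (count in x's bin) / (n * bin width).
        lo = (x // bin_width) * bin_width
        in_bin = sum(1 for d in data if lo <= d < lo + bin_width)
        return in_bin / (len(data) * bin_width)

    data = [1.1, 1.9, 2.2, 2.8, 3.1, 4.0, 4.2, 6.5]
    for width in (0.5, 1.0, 2.0):          # the hidden "parameter"
        print(width, histogram_density(data, x=4.1, bin_width=width))
    # -> 0.5, 0.25, 0.125: same data, three different answers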

Speech out in the air is some collection of waveforms.  It may co-occur with 
various other events of other kinds (behavior by people, images of things’ 
happening, etc.).

Somewhere in a baby’s cognition, there is some kind of filtering and 
reinforcing dynamic playing out.

But between the waveforms in the air and whatever abstraction we might make of 
the filtering and reinforcing dynamic, there is the baby.  All the inheritance 
of what its type is like dictates the representation of the waveforms in the air 
that will ever be delivered to the filtering and reinforcing process.  Also 
whether the “other events” become part of a holo-data stream of which the 
waveforms are one mode, and the holo-data stream gets represented, so that the 
values of other events affect the marginal representation for the variables 
contributed by the waveforms.

I have liked, over the years, work by the psycholinguist Morten Christiansen on 
the patterns in language that seem to be there to be a good fit for 
baby-learners.  The interpretation is that the need to be well-matched to their 
representation tendencies so that languages can be learned quickly and robustly 
is a selection pressure that keeps that stuff in the language.  The features 
Morten describes are few and tiny, out of what may be there, just because human 
work is slow.  But they are enough to make the case that such features exist.  
I know there is an enormous literature on this (as on everything).

All of my description above is terribly conventional and antique, but it 
doesn’t seem to me a gross violation of things we can know about the phenomena, 
so I don’t feel compelled to lose it as a sense-making frame.

Eric



> On Mar 6, 2023, at 11:21 PM, Russ Abbott  wrote:
> 
> Let's consider the word "input" again. The implication is that there is an 
> "agent" of some sort that is separated/distinguishable from some 
> "environment" from which it gets "input." The question (or at least one 
> question) concerns our specification of what that "agent" is. If, as Glen 
> suggested, genes are "input" to a human, what is the agent that is separated 
> from its genes and for which the genes provide "input?" Another way of 
> putting it--although I don't want to push the analogy too far--is that if 
> genes are "input" to a human, is software "input" to the software system it 
> defines? Since a software system is essentially nothing without the software 
> that defines it, what would it even mean to say that the software is "input" 
> to itself? This isn't an invitation to talk about self-modifying software. 
> Let's deal with the easier case first. Assuming we are talking about 
> non-self-modifying (and non-self-interpreting) software, what does it mean to 
> say that software is "input" to itself?
> 
> -- Russ Abbott   
> Professor Emeritus, Computer Science
> California State University, Los Angeles
> 
> 
> On Mon, Mar 6, 2023 at 3:52 PM glen  wrote:
> Well put. When Frank emphasized "data", he doubled-down on the ambiguity. The 
> fact is, those who claim a human is categorically different from a machine 
> have no legs on which to stand. Every single boundary between them is broken, 
> year after year.
> 
> On 3/6/23 15:47, Russ Abbott wrote:
> > Are the laws of physics "input?" Is the existence of the universe "input?" 
> > If so, what issues are we arguing about?
> > -- Russ Abbott
> > Professor Emeritus, Computer Science
> > California State University, Los Angeles
> > 
> > 
> > On Mon, Mar 6, 2023 at 3:42 PM glen <geprope...@gmail.com> wrote:
> > 
> > Well, again, it seems like we're equivocating on "input". Are the genes 
> > the baby inherited from its parents "input"? I'd say, yes.
> > 
> > On 3/6/23 15:36, Russ Abbott wrote:
> >  > Hard to see how you could simulate an infant on the basis of input 
> > it's received. It cries; it smiles; it pees; it poops; it pumps blood; it 
> > breathes, etc. There are many experiments in which one concludes things 
> > about what's going on in an infant's brain by how long it looks at 
> > something.
> >  > -- Russ Abbott
> >  > Professor Emeritus, Computer Science
> >  > California State University, Los Angeles
> >  >
> >

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Marcus Daniels
Lisp or Haskell macros..

Sent from my iPhone
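To unpack the two-word answer: a macro consumes program text as input and
emits program text as output, so the same artifact plays "input" and
"software" in turn. Python has no Lisp-style macros, but its ast module
supports an analogous, purely illustrative sketch:

    # Software as input to software, macro-style: read code as data,
    # rewrite the data, then run the result as code.
    import ast

    source = "def area(r): return 3.14 * r * r"     # program text as data
    tree = ast.parse(source)                        # input: code as a tree

    class SwapConstant(ast.NodeTransformer):
        # Rewrite the constant 3.14, like a crude compile-time macro.
        def visit_Constant(self, node):
            return ast.Constant(value=3.14159) if node.value == 3.14 else node

    tree = ast.fix_missing_locations(SwapConstant().visit(tree))

    namespace = {}
    exec(compile(tree, "<macro>", "exec"), namespace)  # output: code again
    print(namespace["area"](1.0))                      # -> 3.14159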

On Mar 6, 2023, at 8:22 PM, Russ Abbott  wrote:


Let's consider the word "input" again. The implication is that there is an 
"agent" of some sort that is separated/distinguishable from some "environment" 
from which it gets "input." The question (or at least one question) concerns 
our specification of what that "agent" is. If, as Glen suggested, genes are 
"input" to a human, what is the agent that is separated from its genes and for 
which the genes provide "input?" Another way of putting it--although I don't 
want to push the analogy too far--is that if genes are "input" to a human, is 
software "input" to the software system it defines? Since a software system is 
essentially nothing without the software that defines it, what would it even 
mean to say that the software is "input" to itself? This isn't an invitation to 
talk about self-modifying software. Let's deal with the easier case first. 
Assuming we are talking about non-self-modifying (and non-self-interpreting) 
software, what does it mean to say that software is "input" to itself?

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Mon, Mar 6, 2023 at 3:52 PM glen <geprope...@gmail.com> wrote:
Well put. When Frank emphasized "data", he doubled-down on the ambiguity. The 
fact is, those who claim a human is categorically different from a machine have 
no legs on which to stand. Every single boundary between them is broken, year 
after year.

On 3/6/23 15:47, Russ Abbott wrote:
> Are the laws of physics "input?" Is the existence of the universe "input?" If 
> so, what issues are we arguing about?
> -- Russ Abbott
> Professor Emeritus, Computer Science
> California State University, Los Angeles
>
>
> On Mon, Mar 6, 2023 at 3:42 PM glen <geprope...@gmail.com> wrote:
>
> Well, again, it seems like we're equivocating on "input". Are the genes 
> the baby inherited from its parents "input"? I'd say, yes.
>
> On 3/6/23 15:36, Russ Abbott wrote:
>  > Hard to see how you could simulate an infant on the basis of input 
> it's received. It cries; it smiles; it pees; it poops; it pumps blood; it 
> breathes, etc. There are many experiments in which one concludes things about 
> what's going on in an infant's brain by how long it looks at something.
>  > -- Russ Abbott
>  > Professor Emeritus, Computer Science
>  > California State University, Los Angeles
>  >
>  >
>  > On Mon, Mar 6, 2023 at 3:16 PM glen <geprope...@gmail.com> wrote:
>  > I'm confused by the emphasis on "data". While I'm tempted to agree 
> with my simulation of Frank and say that a human's output is not based solely 
> on statistical patterns in the input the human's been trained on, to 
> dissemble on the meaning of "data" or "input" or "statistical patterns" is a 
> bridge too far.
>  >
>  > The compressive encoder, computer, and decoder that is a human 
> brain (& the rest of the body) may not be entirely "statistical". But 
> statistics is a fairly well-accepted form of behavioral modeling. (Yes, we 
> agent-based modelers love to point out how statistical models are not very 
> mechanistic. But to deny that you can very closely approximate, even predict, 
> actual behavior with some of these models would be foolish.) So, yes, it 
> satisfies the letter of the good faith agreement to say that humans output 
> *might* be solely based on statistical patterns of its input, even if it 
> violates the spirit.
>  >
>  > So, if someone insists that a human-mediated map from input to 
> output is necessarily, categorically different from a machine-mediated map, 
> the burden lies on them to delineate how and why it's different. The primary 
> difference might well be related to babies, e.g. some of the "memory" (aka 
> training) of past statistical patterns comes in the form of genes passed from 
> one's parents. It's unclear to me what the analogs are for something like 
> GPT. Presumably there are things like wavelets of method, process, 
> intellectual property, or whatever that GPT3 inherited from GPT2, mediated by 
> the human-machine replication material that is OpenAI. So, the retort to 
> Frank is: "If you live with a baby algorithm, you see it has knowledge that 
> can't be based on 'data'." That algorithm came from somewhere ... the humans 
> who wrote it, the shoulders they stand on, the hours of debug and test cycles 
> the algorithm goes through as its [re]implemented, etc.
>  >
>  > On 3/6/23 14:54, Frank Wimberly wrote:
>  >  > If you live with a baby you see that they have 

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Russ Abbott
Let's consider the word "input" again. The implication is that there is an
"agent" of some sort that is separated/distinguishable from some
"environment" from which it gets "input." The question (or at least one
question) concerns our specification of what that "agent" is. If, as Glen
suggested, genes are "input" to a human, what is the agent that is
separated from its genes and for which the genes provide "input?" Another
way of putting it--although I don't want to push the analogy too far--is
that if genes are "input" to a human, is software "input" to the software
system it defines? Since a software system is essentially nothing without
the software that defines it, what would it even mean to say that the
software is "input" to itself? This isn't an invitation to talk about
self-modifying software. Let's deal with the easier case first. Assuming we
are talking about non-self-modifying (and non-self-interpreting) software,
what does it mean to say that software is "input" to itself?

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Mon, Mar 6, 2023 at 3:52 PM glen  wrote:

> Well put. When Frank emphasized "data", he doubled-down on the ambiguity.
> The fact is, those who claim a human is categorically different from a
> machine have no legs on which to stand. Every single boundary between them
> is broken, year after year.
>
> On 3/6/23 15:47, Russ Abbott wrote:
> > Are the laws of physics "input?" Is the existence of the universe
> "input?" If so, what issues are we arguing about?
> > -- Russ Abbott
> > Professor Emeritus, Computer Science
> > California State University, Los Angeles
> >
> >
> > On Mon, Mar 6, 2023 at 3:42 PM glen <geprope...@gmail.com> wrote:
> >
> > Well, again, it seems like we're equivocating on "input". Are the
> genes the baby inherited from its parents "input"? I'd say, yes.
> >
> > On 3/6/23 15:36, Russ Abbott wrote:
> >  > Hard to see how you could simulate an infant on the basis of
> input it's received. It cries; it smiles; it pees; it poops; it pumps
> blood; it breathes, etc. There are many experiments in which one concludes
> things about what's going on in an infant's brain by how long it looks at
> something.
> >  > -- Russ Abbott
> >  > Professor Emeritus, Computer Science
> >  > California State University, Los Angeles
> >  >
> >  >
> >  > On Mon, Mar 6, 2023 at 3:16 PM glen <geprope...@gmail.com> wrote:
> >  >
> >  > I'm confused by the emphasis on "data". While I'm tempted to
> agree with my simulation of Frank and say that a human's output is not
> based solely on statistical patterns in the input the human's been trained
> on, to dissemble on the meaning of "data" or "input" or "statistical
> patterns" is a bridge too far.
> >  >
> >  > The compressive encoder, computer, and decoder that is a
> human brain (& the rest of the body) may not be entirely "statistical". But
> statistics is a fairly well-accepted form of behavioral modeling. (Yes, we
> agent-based modelers love to point out how statistical models are not very
> mechanistic. But to deny that you can very closely approximate, even
> predict, actual behavior with some of these models would be foolish.) So,
> yes, it satisfies the letter of the good faith agreement to say that humans
> output *might* be solely based on statistical patterns of its input, even
> if it violates the spirit.
> >  >
> >  > So, if someone insists that a human-mediated map from input
> to output is necessarily, categorically different from a machine-mediated
> map, the burden lies on them to delineate how and why it's different. The
> primary difference might well be related to babies, e.g. some of the
> "memory" (aka training) of past statistical patterns comes in the form of
> genes passed from one's parents. It's unclear to me what the analogs are
> for something like GPT. Presumably there are things like wavelets of
> method, process, intellectual property, or whatever that GPT3 inherited
> from GPT2, mediated by the human-machine replication material that is
> OpenAI. So, the retort to Frank is: "If you live with a baby algorithm, you
> see it has knowledge that can't be based on 'data'." That algorithm came
> from somewhere ... the humans who wrote it, the shoulders they stand on,
> the hours of debug and test cycles the algorithm goes through as its
> [re]implemented, etc.
> >  >
> >  > On 3/6/23 14:54, Frank Wimberly wrote:
> >  >  > If you live with a baby you see that they have knowledge
> that can't be based on "data".
> >  >  >
> >  >  > ---
> >  >  > Frank C. Wimberly
> >  >  > 140 Calle Ojo Feliz,
> >  >  > Santa Fe, NM 87505
> >  >  >
> >  >  > 505 670-9918
> >  >  > Santa Fe, NM
> >  >  >
> >  > 

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Marcus Daniels
From examples, either generative adversarial learning or stable diffusion can 
learn the laws of physics.

https://github.com/lucidrains/video-diffusion-pytorch

Also it is common in training these systems to have a "foundation" model that 
is then specialized with domain-specific context.
The weights of the neural net are in a file that one downloads (e.g. genetics), 
and then the model is specialized in a particular environment (e.g. lifetime learning).

Marcus
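A hedged PyTorch-style sketch of that two-stage pattern; the file name, layer
sizes, and toy task below are invented, but the shape (inherit downloaded
weights, then adapt them to a local environment) is the point:

    import torch
    import torch.nn as nn

    # "Genetics": a foundation model whose weights ship as a file.
    foundation = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
    torch.save(foundation.state_dict(), "foundation.pt")  # stand-in download

    # "Lifetime learning": load the inherited weights, then specialize.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
    model.load_state_dict(torch.load("foundation.pt"))

    x = torch.randn(32, 4)                    # invented local environment
    y = x.sum(dim=1, keepdim=True)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(200):
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(loss.item())                        # far below where it started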



From: Friam  on behalf of Russ Abbott 

Sent: Monday, March 6, 2023 4:47 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] ChatGPT and William James

Are the laws of physics "input?" Is the existence of the universe "input?" If 
so, what issues are we arguing about?

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Mon, Mar 6, 2023 at 3:42 PM glen <geprope...@gmail.com> wrote:
Well, again, it seems like we're equivocating on "input". Are the genes the 
baby inherited from its parents "input"? I'd say, yes.

On 3/6/23 15:36, Russ Abbott wrote:
> Hard to see how you could simulate an infant on the basis of input it's 
> received. It cries; it smiles; it pees; it poops; it pumps blood; it 
> breathes, etc. There are many experiments in which one concludes things about 
> what's going on in an infant's brain by how long it looks at something.
> -- Russ Abbott
> Professor Emeritus, Computer Science
> California State University, Los Angeles
>
>
> On Mon, Mar 6, 2023 at 3:16 PM glen <geprope...@gmail.com> wrote:
>
> I'm confused by the emphasis on "data". While I'm tempted to agree with 
> my simulation of Frank and say that a human's output is not based solely on 
> statistical patterns in the input the human's been trained on, to dissemble 
> on the meaning of "data" or "input" or "statistical patterns" is a bridge too 
> far.
>
> The compressive encoder, computer, and decoder that is a human brain (& 
> the rest of the body) may not be entirely "statistical". But statistics is a 
> fairly well-accepted form of behavioral modeling. (Yes, we agent-based 
> modelers love to point out how statistical models are not very mechanistic. 
> But to deny that you can very closely approximate, even predict, actual 
> behavior with some of these models would be foolish.) So, yes, it satisfies 
> the letter of the good faith agreement to say that humans output *might* be 
> solely based on statistical patterns of its input, even if it violates the 
> spirit.
>
> So, if someone insists that a human-mediated map from input to output is 
> necessarily, categorically different from a machine-mediated map, the burden 
> lies on them to delineate how and why it's different. The primary difference 
> might well be related to babies, e.g. some of the "memory" (aka training) of 
> past statistical patterns comes in the form of genes passed from one's 
> parents. It's unclear to me what the analogs are for something like GPT. 
> Presumably there are things like wavelets of method, process, intellectual 
> property, or whatever that GPT3 inherited from GPT2, mediated by the 
> human-machine replication material that is OpenAI. So, the retort to Frank 
> is: "If you live with a baby algorithm, you see it has knowledge that can't 
> be based on 'data'." That algorithm came from somewhere ... the humans who 
> wrote it, the shoulders they stand on, the hours of debug and test cycles the 
> algorithm goes through as its [re]implemented, etc.
>
> On 3/6/23 14:54, Frank Wimberly wrote:
>  > If you live with a baby you see that they have knowledge that can't be 
> based on "data".
>  >
>  > ---
>  > Frank C. Wimberly
>  > 140 Calle Ojo Feliz,
>  > Santa Fe, NM 87505
>  >
>  > 505 670-9918
>  > Santa Fe, NM
>  >
>  > On Mon, Mar 6, 2023, 2:50 PM Marcus Daniels <mar...@snoutfarm.com> wrote:
>  >
>  > How?
>  >
>  > __ __
>  >
>  > *From:* Friam <friam-boun...@redfish.com>

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread glen

Well put. When Frank emphasized "data", he doubled-down on the ambiguity. The 
fact is, those who claim a human is categorically different from a machine have no legs 
on which to stand. Every single boundary between them is broken, year after year.

On 3/6/23 15:47, Russ Abbott wrote:

Are the laws of physics "input?" Is the existence of the universe "input?" If 
so, what issues are we arguing about?
-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Mon, Mar 6, 2023 at 3:42 PM glen <geprope...@gmail.com> wrote:

Well, again, it seems like we're equivocating on "input". Are the genes the baby 
inherited from its parents "input"? I'd say, yes.

On 3/6/23 15:36, Russ Abbott wrote:
 > Hard to see how you could simulate an infant on the basis of input it's 
received. It cries; it smiles; it pees; it poops; it pumps blood; it breathes, 
etc. There are many experiments in which one concludes things about what's going 
on in an infant's brain by how long it looks at something.
 > -- Russ Abbott
 > Professor Emeritus, Computer Science
 > California State University, Los Angeles
 >
 >
 > On Mon, Mar 6, 2023 at 3:16 PM glen <geprope...@gmail.com> wrote:
 >
 >     I'm confused by the emphasis on "data". While I'm tempted to agree with my simulation of Frank and 
say that a human's output is not based solely on statistical patterns in the input the human's been trained on, to 
dissemble on the meaning of "data" or "input" or "statistical patterns" is a bridge too far.
 >
 >     The compressive encoder, computer, and decoder that is a human brain (& the 
rest of the body) may not be entirely "statistical". But statistics is a fairly 
well-accepted form of behavioral modeling. (Yes, we agent-based modelers love to point out how 
statistical models are not very mechanistic. But to deny that you can very closely approximate, 
even predict, actual behavior with some of these models would be foolish.) So, yes, it satisfies 
the letter of the good faith agreement to say that humans output *might* be solely based on 
statistical patterns of its input, even if it violates the spirit.
 >
 >     So, if someone insists that a human-mediated map from input to output is necessarily, 
categorically different from a machine-mediated map, the burden lies on them to delineate how and why 
it's different. The primary difference might well be related to babies, e.g. some of the 
"memory" (aka training) of past statistical patterns comes in the form of genes passed from 
one's parents. It's unclear to me what the analogs are for something like GPT. Presumably there are 
things like wavelets of method, process, intellectual property, or whatever that GPT3 inherited from 
GPT2, mediated by the human-machine replication material that is OpenAI. So, the retort to Frank is: 
"If you live with a baby algorithm, you see it has knowledge that can't be based on 'data'." 
That algorithm came from somewhere ... the humans who wrote it, the shoulders they stand on, the hours 
of debug and test cycles the algorithm goes through as its [re]implemented, etc.
 >
 >     On 3/6/23 14:54, Frank Wimberly wrote:
 >      > If you live with a baby you see that they have knowledge that can't be 
based on "data".
 >      >
 >      > ---
 >      > Frank C. Wimberly
 >      > 140 Calle Ojo Feliz,
 >      > Santa Fe, NM 87505
 >      >
 >      > 505 670-9918
 >      > Santa Fe, NM
 >      >
 >      > On Mon, Mar 6, 2023, 2:50 PM Marcus Daniels <mar...@snoutfarm.com> wrote:
 >      >     How?
 >      >
 >      >     __ __
 >      >
 >      >     *From:* Friam <friam-boun...@redfish.com> *On Behalf Of *Frank Wimberly
 >      >     *Sent:* Monday, March 6, 2023 12:50 PM
 >      >     *To:* The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
 >      >     *Subject:* Re: [FRIAM] ChatGPT and William James
 >      >
 >      >     __ __
 >      >
 >      >      >And we humans are different?
 >      >
 >      >     __ __
 >      >
 >      >     In a word, yes.
 >      >
 >      >     ---
 >      >     Frank C. Wimberly
 >  

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Russ Abbott
Are the laws of physics "input?" Is the existence of the universe "input?"
If so, what issues are we arguing about?

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Mon, Mar 6, 2023 at 3:42 PM glen  wrote:

> Well, again, it seems like we're equivocating on "input". Are the genes
> the baby inherited from its parents "input"? I'd say, yes.
>
> On 3/6/23 15:36, Russ Abbott wrote:
> > Hard to see how you could simulate an infant on the basis of input it's
> received. It cries; it smiles; it pees; it poops; it pumps blood; it
> breathes, etc. There are many experiments in which one concludes things
> about what's going on in an infant's brain by how long it looks at
> something.
> > -- Russ Abbott
> > Professor Emeritus, Computer Science
> > California State University, Los Angeles
> >
> >
> > On Mon, Mar 6, 2023 at 3:16 PM glen <geprope...@gmail.com> wrote:
> >
> > I'm confused by the emphasis on "data". While I'm tempted to agree
> with my simulation of Frank and say that a human's output is not based
> solely on statistical patterns in the input the human's been trained on, to
> dissemble on the meaning of "data" or "input" or "statistical patterns" is
> a bridge too far.
> >
> > The compressive encoder, computer, and decoder that is a human brain
> (& the rest of the body) may not be entirely "statistical". But statistics
> is a fairly well-accepted form of behavioral modeling. (Yes, we agent-based
> modelers love to point out how statistical models are not very mechanistic.
> But to deny that you can very closely approximate, even predict, actual
> behavior with some of these models would be foolish.) So, yes, it satisfies
> the letter of the good faith agreement to say that humans output *might* be
> solely based on statistical patterns of its input, even if it violates the
> spirit.
> >
> > So, if someone insists that a human-mediated map from input to
> output is necessarily, categorically different from a machine-mediated map,
> the burden lies on them to delineate how and why it's different. The
> primary difference might well be related to babies, e.g. some of the
> "memory" (aka training) of past statistical patterns comes in the form of
> genes passed from one's parents. It's unclear to me what the analogs are
> for something like GPT. Presumably there are things like wavelets of
> method, process, intellectual property, or whatever that GPT3 inherited
> from GPT2, mediated by the human-machine replication material that is
> OpenAI. So, the retort to Frank is: "If you live with a baby algorithm, you
> see it has knowledge that can't be based on 'data'." That algorithm came
> from somewhere ... the humans who wrote it, the shoulders they stand on,
> the hours of debug and test cycles the algorithm goes through as its
> [re]implemented, etc.
> >
> > On 3/6/23 14:54, Frank Wimberly wrote:
> >  > If you live with a baby you see that they have knowledge that
> can't be based on "data".
> >  >
> >  > ---
> >  > Frank C. Wimberly
> >  > 140 Calle Ojo Feliz,
> >  > Santa Fe, NM 87505
> >  >
> >  > 505 670-9918
> >  > Santa Fe, NM
> >  >
> >  > On Mon, Mar 6, 2023, 2:50 PM Marcus Daniels <mar...@snoutfarm.com> wrote:
> >  >
> >  > How?
> >  >
> >  > __ __
> >  >
> >  > *From:* Friam <friam-boun...@redfish.com> *On Behalf Of *Frank Wimberly
> >  > *Sent:* Monday, March 6, 2023 12:50 PM
> >  > *To:* The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
> >  > *Subject:* Re: [FRIAM] ChatGPT and William James
> >  >
> >  > __ __
> >  >
> >  >  >And we humans are different?
> >  >
> >  > __ __
> >  >
> >  > In a word, yes.
> >  >
> >  > ---
> >  > Frank C. Wimberly
> >  > 140 Calle Ojo Feliz,
> >  > Santa Fe, NM 87505
> >  >
> >  > 505 670-9918
> >  > Santa Fe, NM
> >  >
> >  > __ __
> >  >
> >  > On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson <thompnicks...@gmail.com> wrote:
> >  >
> >  > */However, it's important to remember that there are also
> important differences between a large language model and human
> consciousness. While a large language model can generate text that may seem
> to flow like a stream of consciousness, it does not have the same kind of
> subjective experience that humans do, and its output is based solely on
> statistical patterns in the input it has been trained on./*
> >  >
> >  > 
> >  >

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread glen

Well, again, it seems like we're equivocating on "input". Are the genes the baby 
inherited from its parents "input"? I'd say, yes.

On 3/6/23 15:36, Russ Abbott wrote:

Hard to see how you could simulate an infant on the basis of input it's 
received. It cries; it smiles; it pees; it poops; it pumps blood; it breathes, 
etc. There are many experiments in which one concludes things about what's 
going on in an infant's brain by how long it looks at something.
-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Mon, Mar 6, 2023 at 3:16 PM glen <geprope...@gmail.com> wrote:

I'm confused by the emphasis on "data". While I'm tempted to agree with my simulation of Frank and say 
that a human's output is not based solely on statistical patterns in the input the human's been trained on, to 
dissemble on the meaning of "data" or "input" or "statistical patterns" is a bridge too 
far.

The compressive encoder, computer, and decoder that is a human brain (& the rest of 
the body) may not be entirely "statistical". But statistics is a fairly 
well-accepted form of behavioral modeling. (Yes, we agent-based modelers love to point out 
how statistical models are not very mechanistic. But to deny that you can very closely 
approximate, even predict, actual behavior with some of these models would be foolish.) So, 
yes, it satisfies the letter of the good faith agreement to say that humans output *might* be 
solely based on statistical patterns of its input, even if it violates the spirit.

So, if someone insists that a human-mediated map from input to output is necessarily, 
categorically different from a machine-mediated map, the burden lies on them to delineate how and 
why it's different. The primary difference might well be related to babies, e.g. some of the 
"memory" (aka training) of past statistical patterns comes in the form of genes passed 
from one's parents. It's unclear to me what the analogs are for something like GPT. Presumably 
there are things like wavelets of method, process, intellectual property, or whatever that GPT3 
inherited from GPT2, mediated by the human-machine replication material that is OpenAI. So, the 
retort to Frank is: "If you live with a baby algorithm, you see it has knowledge that can't be 
based on 'data'." That algorithm came from somewhere ... the humans who wrote it, the 
shoulders they stand on, the hours of debug and test cycles the algorithm goes through as its 
[re]implemented, etc.

On 3/6/23 14:54, Frank Wimberly wrote:
 > If you live with a baby you see that they have knowledge that can't be based on 
"data".
 >
 > ---
 > Frank C. Wimberly
 > 140 Calle Ojo Feliz,
 > Santa Fe, NM 87505
 >
 > 505 670-9918
 > Santa Fe, NM
 >
 > On Mon, Mar 6, 2023, 2:50 PM Marcus Daniels <mar...@snoutfarm.com> wrote:
 >
 >     How?
 >
 >     __ __
 >
 >     *From:* Friam <friam-boun...@redfish.com> *On Behalf Of *Frank Wimberly
 >     *Sent:* Monday, March 6, 2023 12:50 PM
 >     *To:* The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
 >     *Subject:* Re: [FRIAM] ChatGPT and William James
 >
 >     __ __
 >
 >      >And we humans are different?
 >
 >     __ __
 >
 >     In a word, yes.
 >
 >     ---
 >     Frank C. Wimberly
 >     140 Calle Ojo Feliz,
 >     Santa Fe, NM 87505
 >
 >     505 670-9918
 >     Santa Fe, NM
 >
 >     __ __
 >
 >     On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson <thompnicks...@gmail.com> wrote:
 >
 >         */However, it's important to remember that there are also 
important differences between a large language model and human consciousness. 
While a large language model can generate text that may seem to flow like a stream 
of consciousness, it does not have the same kind of subjective experience that 
humans do, and its output is based solely on statistical patterns in the input it 
has been trained on./*
 >
 >         
 >
 >         And we humans are different? 
 >
 >         
 >
 >         On Sat, Mar 4, 2023 at 11:51 AM Steve Smith <sasm...@swcp.com> wrote:
 >
 >             Also second EricS's appreciation for having someone else(s) 
maintain a coherent conversation for the myriad ideas that it allows me to explore 
without being central to the maintenance of the thread.   I realize this may be 
almost pure tangent to others, so I rarely expect anyone to take my bait unless 

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Russ Abbott
Hard to see how you could simulate an infant on the basis of input it's
received. It cries; it smiles; it pees; it poops; it pumps blood; it
breathes, etc. There are many experiments in which one concludes things
about what's going on in an infant's brain by how long it looks at
something.

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Mon, Mar 6, 2023 at 3:16 PM glen  wrote:

> I'm confused by the emphasis on "data". [...]

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread glen

I'm confused by the emphasis on "data". While I'm tempted to agree with my simulation of Frank and say that a 
human's output is not based solely on statistical patterns in the input the human's been trained on, to dissemble on 
the meaning of "data" or "input" or "statistical patterns" is a bridge too far.

The compressive encoder, computer, and decoder that is a human brain (& the rest of the 
body) may not be entirely "statistical". But statistics is a fairly well-accepted 
form of behavioral modeling. (Yes, we agent-based modelers love to point out how statistical 
models are not very mechanistic. But to deny that you can very closely approximate, even 
predict, actual behavior with some of these models would be foolish.) So, yes, it satisfies 
the letter of the good-faith agreement to say that humans' output *might* be solely based on 
statistical patterns of their input, even if it violates the spirit.
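
As a toy illustration of what "statistical patterns in the input" can buy you
behaviorally, here is a minimal sketch (the behavior alphabet is invented): a
first-order Markov model fit to an observed stream of acts, which then
predicts the most likely next act.

from collections import Counter, defaultdict

# Observed stream of "behaviors" (invented for illustration).
observed = ["wake", "cry", "feed", "sleep", "wake", "cry", "feed",
            "play", "sleep", "wake", "feed", "play", "sleep"]

# Count first-order transitions: how often each behavior follows another.
transitions = defaultdict(Counter)
for prev, curr in zip(observed, observed[1:]):
    transitions[prev][curr] += 1

def predict(state):
    # Most frequent successor of `state` in the training stream.
    return transitions[state].most_common(1)[0][0]

print(predict("wake"))  # -> "cry" (2 of the 3 observed successors)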

So, if someone insists that a human-mediated map from input to output is necessarily, categorically 
different from a machine-mediated map, the burden lies on them to delineate how and why it's 
different. The primary difference might well be related to babies, e.g. some of the 
"memory" (aka training) of past statistical patterns comes in the form of genes passed 
from one's parents. It's unclear to me what the analogs are for something like GPT. Presumably 
there are things like wavelets of method, process, intellectual property, or whatever that GPT3 
inherited from GPT2, mediated by the human-machine replication material that is OpenAI. So, the 
retort to Frank is: "If you live with a baby algorithm, you see it has knowledge that can't be 
based on 'data'." That algorithm came from somewhere ... the humans who wrote it, the 
shoulders they stand on, the hours of debug and test cycles the algorithm goes through as it's [re]implemented, etc.
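
To make the inheritance metaphor concrete, a cartoon sketch: a child model is
seeded from a parent checkpoint wherever names and shapes line up, with fresh
parameters elsewhere. (GPT-3 was not literally warm-started from GPT-2's
weights; what carried over was code, method, and institutional know-how. The
sketch, with invented names, only illustrates that such artifacts are
heritable.)

import random

# Parent "checkpoint" and the (larger) child architecture; both invented.
parent_ckpt = {"embed.w": [0.1, 0.2, 0.3], "block1.w": [0.4, 0.5]}
child_shape = {"embed.w": 3, "block1.w": 2, "block2.w": 4}

# Inherit a parameter where name and shape match; otherwise initialize fresh.
child = {
    name: (list(parent_ckpt[name])
           if name in parent_ckpt and len(parent_ckpt[name]) == size
           else [random.gauss(0.0, 0.02) for _ in range(size)])
    for name, size in child_shape.items()
}
print(sorted(child))  # ['block1.w', 'block2.w', 'embed.w']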

On 3/6/23 14:54, Frank Wimberly wrote:

If you live with a baby you see that they have knowledge that can't be based on "data". [...]

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Frank Wimberly
If you live with a baby you see that they have knowledge that can't be
based on "data".

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Mon, Mar 6, 2023, 2:50 PM Marcus Daniels  wrote:

> How? [...]

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Marcus Daniels
How?

From: Friam  On Behalf Of Frank Wimberly
Sent: Monday, March 6, 2023 12:50 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] ChatGPT and William James

>And we humans are different?

In a word, yes.
[...]

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Frank Wimberly
>And we humans are different?

In a word, yes.

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson  wrote:

> *However, it's important to remember that there are also important differences between a 
> large language model and human consciousness. [...]*

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Nicholas Thompson
However, it's important to remember that there are also important differences between a large 
language model and human consciousness. While a large language model can generate text that may 
seem to flow like a stream of consciousness, it does not have the same kind of subjective 
experience that humans do, and its output is based solely on statistical patterns in the input it 
has been trained on.

And we humans are different?

On Sat, Mar 4, 2023 at 11:51 AM Steve Smith  wrote:

> Also second EricS's appreciation for having someone else(s) maintain a coherent conversation 
> for the myriad ideas that it allows me to explore without being central to the maintenance of 
> the thread. [...]

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread glen

Interesting. EricS' layout triggered me. I've used the word "registration" a lot, mostly because of BC Smith's re-terming 
from "inscription error" to "pre-emptive registration". But I'd never actually looked at the etymology of 
"register" . From EricS' post, I got the connotation of a musical 
register , which I'd never before linked to the naming process of 
registration.

But SteveS is right. I reject not only the emergence sense of "levels", but also the leveling in that 
Wikipedia entry. Although Eric's use of "register" reminded me of musical categories, his treatment of it 
seems more closely aligned to *logging* or documentation ... writing, more along the lines of "gest" 
... more action, less thought.

The split between syntax and semantics has never really worked for me because (I think) they're 
both so cognitive. What would work better would be something like "negotiated" vs 
"imputed", collaborative vs coercive ... or somesuch. The point is that we don't need 
mind/body distinctions if we can *log* our experiences as collaborative vs coercive. Body stuff is 
inherently collaborative, with oneself, with others, with the inanimate environment, etc. Mind 
stuff tends to be coercive. You have some *idea* about the world, then you go about bending the 
world to fit that idea, or abstracting out details that don't fit that idea. (I'm sure I've 
triggered someone... but I'm not writing about others. I'm writing about myself.)

In this sense, "emergence" isn't essentialist *if* every boundary between any 2 categories can 
*move*, be re-negotiated, especially as a function of the *logger*, the register. But in order for that to work with 
concepts like emergence, you have to eliminate *level*. "Order" remains useful, though, e.g. 
signs as objects, objects as signs, interpretants as objects, etc. But the ordering need not be total(ly) 
or even partial(ly complete).


On 3/4/23 10:51, Steve Smith wrote:

Also second EricS's appreciation for having someone else(s) maintain a coherent conversation for 
the myriad ideas that it allows me to explore without being central to the maintenance of the 
thread. [...]

Re: [FRIAM] ChatGPT and William James

2023-03-04 Thread Steve Smith
Also second EricS's appreciation for having someone else(s) maintain a 
coherent conversation for the myriad ideas that it allows me to explore 
without being central to the maintenance of the thread.   I realize this 
may be almost pure tangent to others, so I rarely expect anyone to take 
my bait unless it is to correct any egregious mis-attributions or 
think-utational fallacies.


Starting with Glen's assertion/suggestion/assumption that there is not 
mind-stuff and body stuff, just body stuff:  I appeal to the general 
abstraction of Emergence and use Russell Standish's example in his 
"Theory of Nothing 
" 
that a water molecule is not wet... wetness is a property of aggregates 
of water molecules.   I would jump a dozen layers of 
emergent-bootstrapping from there to assert that "mind stuff", if it 
ever makes sense, is an emergent property of "body stuff".   But by 
analogy would not want to say that wetness (and other properties of bulk 
water molecules) is not strictly "molecular dynamics stuff".   And even 
if one did that, the recursion/reductio-ad-absurdum requires that one 
acknowledge/notice/invoke that the properties of any molecule are 
"emergent" from the elementary particles from which it might be composed.


 I think we all believe in free-electrons, protons, neutrons but also 
recognize that *most* of our observed universe is shaped not by *those 
properties* (much less the properties of quarks and gluons or 10d loops 
of abstract things we call strings) but rather by the properties (once 
again, not of molecular dynamics or even chemical reactions) of 
biological functions, and socio-economic-political functions as 
well. I *am*, however, sensitive to the idea that where and how we 
draw the line between mind/body stuff can be important in any given 
argument, and that sometimes dropping that line altogether may be useful?


The above riff on Mind-Stuff v Body-Stuff is really an intro into 
thoughts about how syntax and semantics might bootstrap sequentially.   
It feels to me that the syntax of one level of abstraction yields an 
*emergent semantics* which in turn becomes the *syntax* of the next 
"level".    I do acknowledge that Glen has made some arguments (and 
references) that are against the very abstraction of "levels" and that 
may well be the hole in everything I'm unrolling here, but for the 
moment, I feel I have a clear picture of a POSET of syntax/semantics, if 
not a full Hierarchy...
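
As a sketch of what I mean by a POSET rather than a full Hierarchy (level
names invented for illustration; read each edge as "the semantics emerging at
X serves as syntax for Y"):

# Cover relation of the partial order; edges point "upward".
covers = {
    "particles": ["molecules"],
    "molecules": ["wetness", "chemistry"],
    "chemistry": ["biology"],
    "wetness": [],
    "biology": ["socio-economics", "minds"],
    "minds": [],
    "socio-economics": [],
}

def above(a, b, seen=frozenset()):
    # True if b lies above a in the partial order (transitive closure).
    succs = covers.get(a, [])
    return b in succs or any(above(c, b, seen | {a})
                             for c in succs if c not in seen)

print(above("molecules", "minds"))   # True: comparable
print(above("wetness", "biology"))   # False
print(above("biology", "wetness"))   # False: an incomparable pair, so no total order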


This also backs me into the Platonic ideations with all the charms and 
criticisms already dancing as virtual (ideational) particles around 
that.    I will go back to reading A Theory of Nothing ... and try to keep my offerings here 
under 10 pages each...


On 3/4/23 4:32 AM, Santafe wrote:

It’s helpful to have a conversation being maintained by somebod(ies) else, to 
which one can be a bystander without the distraction of coming up with 
contributions to it. [...]

Re: [FRIAM] ChatGPT and William James

2023-03-04 Thread Santafe
It’s helpful to have a conversation being maintained by somebod(ies) else, to 
which one can be a bystander without the distraction of coming up with 
contributions to it.  Things can suggest themselves that get pushed out of 
awareness when one is carrying the discourse and figuring out what to do next 
within it.

In reading the below, about the time I got to the lines:

> The mind-body problem is the philosophical question of how the mind and body 
> are related. One of the main issues is how mental processes such as thoughts, 
> emotions, and consciousness are related to physical processes in the brain 
> and body.

I was prompted with a term to refer to these mental/physical things.

First, my sense of all this is one of witnessing structures in conversation.  
Maybe I am more primed to that because with ChatGPT as the topic, one fronts 
awareness of conversation as somewhat free-floating from its semantic ground.  
As tokens in conversation, it is perfectly sensible to say that (thoughts, 
emotions, consciousness) are in a category Mental, while (weakness, hunger, 
itching) go into a category Physical.  Not only is it okay to say they fit 
tolerably into “categories” (or “classes”); the reason they do so is that they 
are connected by all sorts of linguistic usage relations.  The relations 
probably in no small part bring about the stability of the categorical sense of 
the terms.

But what word do we then use to refer to such classes in speech?  I would use 
the word “registers”.  The Mental is a register of conversation about events, 
and the Physical is another register.  

Jochen’s email below has ChatGPT saying James referred to these as “aspects” of 
various bodily or embodied events.  Sometimes I’m okay with a word like 
“aspects”, but it invites essentialist thinking.  That the event is like a 
computer-language object, which has properties (the aspects) that define its 
interface, and not only are the properties ascribable to the object, but their 
status as defined elements of the interface is also a real-thing, and not 
merely a frame-dependent convenient compression.  But using “aspects” 
thoughtlessly does two things: it makes essentialism a habit, which I think 
will often be invalid, and it neglects the communications role that “register” 
highlights.
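
A small sketch of the contrast (all names invented for illustration): on the
essentialist reading, Mental and Physical are properties baked into the
event-object's interface; on the register reading, they are frame-dependent
descriptions a speaker applies to one and the same event.

from dataclasses import dataclass

@dataclass
class Event:            # one kind of stuff; no mental/physical fields
    label: str
    cortisol: float
    heart_rate: int

def physical_register(e: Event) -> str:
    return f"{e.label}: HR {e.heart_rate}, cortisol {e.cortisol}"

def mental_register(e: Event) -> str:
    feeling = "anxiety" if e.cortisol > 0.5 else "calm"
    return f"{e.label}: felt as {feeling}"

fright = Event("near-miss in traffic", cortisol=0.8, heart_rate=130)
print(physical_register(fright))   # the same event, Physical register
print(mental_register(fright))     # the same event, Mental register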

I would attach this to Glen’s fairly terse characterization that there is no 
mind-stuff contrasted with body-stuff, but just one kind of stuff, which if we 
have to choose a word for it we can call body-stuff. From a perspective of 
aspects, the terse version could be closer to or further from a more 
fine-grained description, depending on whether the essentialist view is more or 
less appropriate for one or another phenomenon we want to discuss.  But I think 
the idea that speech provides us certain conventional registers to bound 
discussions of phenomena, and that these too may be better- or worse-fit to the 
organization of the phenomena themselves, is another fair fine-graining of the 
terse characterization, running somewhat parallel to the aspects or essences 
fine-graining (though, as always, connected to it through a kind of 
message-passing and filtering for structural agreement).

Eric



> On Feb 28, 2023, at 5:23 PM, Jochen Fromm  wrote:
> 
> I enjoy talking with ChatGPT about famous scientists, thinkers, writers and 
> philosophers. [...]

[FRIAM] ChatGPT and William James

2023-02-28 Thread Jochen Fromm
I enjoy talking with ChatGPT about famous scientists, thinkers, writers and 
philosophers. Because it is trained on vast amounts of data it knows them all: 
Pascal, Cervantes, Dante, Hugo, Goethe, Leibniz, Newton, Humboldt, 
Shakespeare,... It can summarize almost any book which is remarkable. Today I 
have asked it about William James and Anil Seth

Me: You certainly have heard of William James, one of the fathers of 
Psychology?

ChatGPT: Yes, I'm familiar with William James. He was an American philosopher 
and psychologist who is often referred to as the "father of American 
psychology." He made significant contributions to the development of 
psychology as a scientific discipline, and his ideas and writings continue to 
influence psychology and related fields to this day.

Some of James' most influential contributions to psychology include his theory 
of functionalism, which emphasized the study of the purpose and function of 
behavior and mental processes, rather than their underlying structures; his 
work on consciousness, which he famously described as a "stream" of 
continuously changing thoughts and perceptions; and his exploration of the 
relationship between the mind and the body, which he called the "mind-body 
problem."

James also wrote extensively on a wide range of topics, including religion, 
philosophy, and human nature. His most famous book, "The Principles of 
Psychology," was published in 1890 and is still considered a classic in the 
field.

Me: Can you summarize his book "The Principles of Psychology"?

ChatGPT: Certainly! "The Principles of Psychology" is a comprehensive and 
influential textbook that William James published in 1890. It aimed to provide 
a broad overview of the field of psychology at the time, covering topics such 
as perception, emotion, thought, consciousness, and the nature of the self.

The book is organized into two main sections. The first section focuses on the 
lower-level processes of sensation and perception, including topics such as 
attention, memory, and learning. The second section focuses on higher-level 
mental processes, such as reasoning, judgment, and emotion.

Throughout the book, James emphasized the importance of studying mental 
processes in their natural context and stressed the need for psychology to be 
closely linked to biology and other natural sciences. He also developed the 
theory of functionalism, which argued that the purpose and function of mental 
processes are more important to understanding them than their underlying 
structures.

Overall, "The Principles of Psychology" was a landmark work that helped 
establish psychology as a distinct scientific discipline and influenced 
generations of psychologists and other scientists.

Me: William James said: "Thoughts themselves are the thinkers". What does he 
mean?

ChatGPT: When William James said "thoughts themselves are the thinkers," he 
was expressing his belief in what is often called the "stream of 
consciousness" view of the mind. James believed that the mind is not a fixed 
entity or substance, but rather a constantly changing stream of thoughts, 
feelings, and sensations.

According to James, our thoughts are not the products of a separate entity or 
"thinker" within us, but rather they are the result of the ongoing activity of 
the mind itself. In other words, the mind is not a passive receptacle for 
thoughts that are generated elsewhere, but an active process that generates 
thoughts and experiences as it goes along.

This view is closely related to James' broader philosophy of pragmatism, which 
emphasized the importance of understanding things in terms of their practical 
implications and effects. For James, the idea that thoughts themselves are the 
thinkers was not just a theoretical claim about the nature of the mind, but a 
practical way of understanding how we can change and shape our own thoughts 
and experiences. By understanding the workings of our own minds, James 
believed, we can take an active role in shaping our own lives and achieving 
our goals.

Me: He also coined the term "stream of consciousness" which suggests that 
consciousness flows like a river. What does he want to say by using this 
metaphor?

ChatGPT: When William James coined the term "stream of consciousness," he was 
using a metaphor to describe his view of the mind as a constantly flowing and 
changing stream of thoughts, feelings, and sensations.

The metaphor of a stream suggests that consciousness is not a static or fixed 
entity, but rather a dynamic and fluid process that is always in motion. The 
stream of consciousness is always moving forward, and we can never step into 
the same stream twice. Similarly, the content of our consciousness is always 
changing, as new thoughts, feelings, and sensations arise and fall away.

By using the metaphor of a stream, James was also highlighting the 
interconnectedness of our thoughts and experiences. Just as a river is made up 
of many different streams and tributaries, our