Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Marcus Daniels
Lisp or Haskell macros..

Sent from my iPhone
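[A sketch of Marcus's macro point in Python's nearest analog, purely illustrative and not from the thread: a macro treats source code as input data, rewrites it, and only then lets it run. The `area` function and the rewrite rule are hypothetical.]

```python
import ast

# Source code arrives as plain text: to this program, other software is input.
source = "def area(r):\n    return 3.14159 * r * r\n"

tree = ast.parse(source)  # the code becomes a data structure

# Rewrite every literal 3.14159 to a more precise value, the way a macro
# rewrites code before it is ever executed.
class PiRewriter(ast.NodeTransformer):
    def visit_Constant(self, node):
        if node.value == 3.14159:
            return ast.Constant(value=3.141592653589793)
        return node

tree = ast.fix_missing_locations(PiRewriter().visit(tree))

# Only now does the rewritten text become running software.
namespace = {}
exec(compile(tree, "<rewritten>", "exec"), namespace)
print(namespace["area"](1.0))
```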

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Russ Abbott
Let's consider the word "input" again. The implication is that there is an
"agent" of some sort that is separated/distinguishable from some
"environment" from which it gets "input." The question (or at least one
question) concerns our specification of what that "agent" is. If, as Glen
suggested, genes are "input" to a human, what is the agent that is
separated from its genes and for which the genes provide "input?" Another
way of putting it--although I don't want to push the analogy too far--is
that if genes are "input" to a human, is software "input" to the software
system it defines? Since a software system is essentially nothing without
the software that defines it, what would it even mean to say that the
software is "input" to itself? This isn't an invitation to talk about
self-modifying software. Let's deal with the easier case first. Assuming we
are talking about non-self-modifying (and non-self-interpreting) software,
what does it mean to say that software is "input" to itself?

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles
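[Russ's question can be made concrete with a toy interpreter, a minimal hypothetical sketch rather than an answer: to the interpreter, a non-self-modifying program plainly *is* input, consumed the same way any other data would be, yet the running system is "essentially nothing" without it.]

```python
# The "software": a fixed, non-self-modifying list of instructions.
program = [
    ("push", 2),
    ("push", 3),
    ("add", None),
]

def run(program):
    """A toy stack machine: the program text is consumed as ordinary data."""
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

# The same bytes are simultaneously "input" and the system's behavior.
print(run(program))
```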



Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Marcus Daniels
From examples, either generative adversarial learning or stable diffusion can 
learn the laws of physics.

https://github.com/lucidrains/video-diffusion-pytorch

Also, it is common in training these systems to have a "foundation" model that 
is then specialized with domain-specific context.
The weights of the neural net are in a file that one downloads (e.g., genetics), 
and then the model is specialized in a particular environment (e.g., lifetime learning).

Marcus
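[Marcus's foundation-then-specialize pattern, sketched with a toy one-weight linear model; all numbers and names here are hypothetical stand-ins for a real checkpoint file and fine-tuning run.]

```python
# "Foundation" weight, as if loaded from a downloaded checkpoint file
# (the genetics analog in Marcus's comparison).
w_foundation = 0.9

# Domain-specific examples encountered after deployment (the lifetime-learning
# analog): the environment's true relationship is y = 1.5 * x.
data = [(x, 1.5 * x) for x in range(1, 6)]

# Specialization: gradient descent on the domain data, starting FROM the
# foundation weight rather than from scratch.
w = w_foundation
for _ in range(100):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad

print(round(w, 3))  # converges toward the domain's 1.5
```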




Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread glen

Well put. When Frank emphasized "data", he doubled down on the ambiguity. The 
fact is, those who claim a human is categorically different from a machine have no legs 
on which to stand. Every single boundary between them is broken, year after year.


Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Russ Abbott
Are the laws of physics "input?" Is the existence of the universe "input?"
If so, what issues are we arguing about?

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles



Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread glen

Well, again, it seems like we're equivocating on "input". Are the genes the baby 
inherited from its parents "input"? I'd say, yes.


Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Russ Abbott
Hard to see how you could simulate an infant on the basis of input it's
received. It cries; it smiles; it pees; it poops; it pumps blood; it
breathes, etc. There are many experiments in which one concludes things
about what's going on in an infant's brain by how long it looks at
something.

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Mon, Mar 6, 2023 at 3:16 PM glen  wrote:

> [...]
>
> On 3/6/23 14:54, Frank Wimberly wrote:
> > If you live with a baby you see that they have knowledge that can't be
> > based on "data".
> >
> > On Mon, Mar 6, 2023, 2:50 PM Marcus Daniels  wrote:
> >
> >     How?
> >
> >     On Mon, Mar 6, 2023, 12:50 PM Frank Wimberly wrote:
> >
> >      > And we humans are different?
> >
> >     In a word, yes.
> >
> >     On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson wrote:
> >
> >         "However, it's important to remember that there are also
> >         important differences between a large language model and human
> >         consciousness. While a large language model can generate text
> >         that may seem to flow like a stream of consciousness, it does
> >         not have the same kind of subjective experience that humans do,
> >         and its output is based solely on statistical patterns in the
> >         input it has been trained on."
> >
> >         And we humans are different?
> >
> >         On Sat, Mar 4, 2023 at 11:51 AM Steve Smith wrote:
> >
> >             Also second EricS's appreciation for having someone else(s)
> >             maintain a coherent conversation for the myriad ideas that
> >             it allows me to explore without being central to the
> >             maintenance of the thread. I realize this may be almost pure
> >             tangent to others, so I rarely expect anyone to take my bait
> >             unless it is to correct any egregious mis-attributions or
> >             think-utational fallacies.
> >
> >             Starting with Glen's assertion/suggestion/assumption that
> >             there is not mind-stuff and body stuff, just body stuff: I
> >             appeal to the general abstraction of Emergence and use
> >             Russell Standish's example in his "Theory of Nothing"
> >             <https://www.goodreads.com/book/show/967936.Theory_Of_Nothing>
> >             that a water molecule is not wet... wetness is a property of
> >             aggregates of water molecules. I would jump a dozen layers of 

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread glen

I'm confused by the emphasis on "data". While I'm tempted to agree with my simulation of Frank and say that a 
human's output is not based solely on statistical patterns in the input the human's been trained on, to dissemble on 
the meaning of "data" or "input" or "statistical patterns" is a bridge too far.

The compressive encoder, computer, and decoder that is a human brain (& the rest of the 
body) may not be entirely "statistical". But statistics is a fairly well-accepted 
form of behavioral modeling. (Yes, we agent-based modelers love to point out how statistical 
models are not very mechanistic. But to deny that you can very closely approximate, even 
predict, actual behavior with some of these models would be foolish.) So, yes, it satisfies 
the letter of the good faith agreement to say that a human's output *might* be based solely on 
statistical patterns of its input, even if it violates the spirit.
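[Editor's illustration, not from the thread: the "statistical patterns in the input it has been trained on" idea can be made concrete with a toy bigram generator. Every word it emits is, by construction, sampled from successor statistics of its training input; the corpus and function names below are invented for the sketch.]

```python
# Toy sketch: an output stream based *solely* on statistical patterns
# observed in the training input (a bigram model, the simplest case).
from collections import defaultdict
import random

def train_bigram(text):
    """Record, for each word, the list of words observed to follow it."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n=8, seed=0):
    """Emit up to n words by sampling observed successors -- pure statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:  # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the mind is the body and the body is the mind"
model = train_bigram(corpus)
print(generate(model, "the"))
```

Every adjacent pair in the generated string is a pair that occurred in the training text; the model has no other resource to draw on, which is the letter (if not the spirit) of the claim above.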

So, if someone insists that a human-mediated map from input to output is necessarily, categorically 
different from a machine-mediated map, the burden lies on them to delineate how and why it's 
different. The primary difference might well be related to babies, e.g. some of the 
"memory" (aka training) of past statistical patterns comes in the form of genes passed 
from one's parents. It's unclear to me what the analogs are for something like GPT. Presumably 
there are things like wavelets of method, process, intellectual property, or whatever that GPT3 
inherited from GPT2, mediated by the human-machine replication material that is OpenAI. So, the 
retort to Frank is: "If you live with a baby algorithm, you see it has knowledge that can't be 
based on 'data'." That algorithm came from somewhere ... the humans who wrote it, the 
shoulders they stand on, the hours of debug and test cycles the algorithm goes through as it's 
[re]implemented, etc.

On 3/6/23 14:54, Frank Wimberly wrote:

If you live with a baby you see that they have knowledge that can't be based on 
"data".

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Mon, Mar 6, 2023, 2:50 PM Marcus Daniels <mar...@snoutfarm.com> wrote:

How?


*From:* Friam <friam-boun...@redfish.com> *On Behalf Of *Frank Wimberly
*Sent:* Monday, March 6, 2023 12:50 PM
*To:* The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
*Subject:* Re: [FRIAM] ChatGPT and William James


 >And we humans are different?


In a word, yes.

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM


On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson <thompnicks...@gmail.com> wrote:

*/However, it's important to remember that there are also important 
differences between a large language model and human consciousness. While a 
large language model can generate text that may seem to flow like a stream of 
consciousness, it does not have the same kind of subjective experience that 
humans do, and its output is based solely on statistical patterns in the input 
it has been trained on./*



And we humans are different? 



On Sat, Mar 4, 2023 at 11:51 AM Steve Smith <sasm...@swcp.com> wrote:

Also second EricS's appreciation for having someone else(s) 
maintain a coherent conversation for the myriad ideas that it allows me to 
explore without being central to the maintenance of the thread.   I realize 
this may be almost pure tangent to others, so I rarely expect anyone to take my 
bait unless it is to correct any egregious mis-attributions or think-utational 
fallacies.

Starting with Glen's assertion/suggestion/assumption that there is not mind-stuff and body stuff, just body stuff:  I appeal to the 
general abstraction of Emergence and use Russell Standish's example in his "Theory of Nothing" that a 
water molecule is not wet... wetness is a property of aggregates of water molecules.   I would jump a dozen layers of emergent-bootstrapping from 
there to assert that "mind stuff", if it ever makes sense, is an emergent property of "body stuff".   But by analogy would not 
want to say that wetness (and other properties of bulk water molecules) is not strictly "molecular dynamics stuff".   And even if one did 
that, the recursion/reduction-ad-absurdum requires that one acknowledge/notice/invoke that the properties of any molecule is "emergent" 
from the elementary particles from which it might be composed. 

  I think we all believe in free-electrons, protons, neutrons but 
also recognize that *most* of our observed universe is shaped not by *those 
properties* (much less the properties of quarks and gluons or 10d loops of 
abstract things we 

[FRIAM] [off topic] exterminators for mice?

2023-03-06 Thread Gillian Densmore
Looking for a dude in the santa fe area that could help find hiding spots,
holes and blah blah that mice can use to get into the house.
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Frank Wimberly
If you live with a baby you see that they have knowledge that can't be
based on "data".

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Mon, Mar 6, 2023, 2:50 PM Marcus Daniels  wrote:

> How?
>
>
>
> *From:* Friam  *On Behalf Of *Frank Wimberly
> *Sent:* Monday, March 6, 2023 12:50 PM
> *To:* The Friday Morning Applied Complexity Coffee Group <
> friam@redfish.com>
> *Subject:* Re: [FRIAM] ChatGPT and William James
>
>
>
> >And we humans are different?
>
>
>
> In a word, yes.
>
> ---
> Frank C. Wimberly
> 140 Calle Ojo Feliz,
> Santa Fe, NM 87505
>
> 505 670-9918
> Santa Fe, NM
>
>
>
> On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson 
> wrote:
>
> *However, it's important to remember that there are also important
> differences between a large language model and human consciousness. While a
> large language model can generate text that may seem to flow like a stream
> of consciousness, it does not have the same kind of subjective experience
> that humans do, and its output is based solely on statistical patterns in
> the input it has been trained on.*
>
>
>
> And we humans are different?
>
>
>
> On Sat, Mar 4, 2023 at 11:51 AM Steve Smith  wrote:
>
> Also second EricS's appreciation for having someone else(s) maintain a
> coherent conversation for the myriad ideas that it allows me to explore
> without being central to the maintenance of the thread.   I realize this
> may be almost pure tangent to others, so I rarely expect anyone to take my
> bait unless it is to correct any egregious mis-attributions or
> think-utational fallacies.
>
> Starting with Glen's assertion/suggestion/assumption that there is not
> mind-stuff and body stuff, just body stuff:  I appeal to the general
> abstraction of Emergence and use Russell Standish's example in his "Theory
> of Nothing"
> that a water molecule is not wet... wetness is a property of aggregates of
> water molecules.   I would jump a dozen layers of emergent-bootstrapping
> from there to assert that "mind stuff", if it ever makes sense, is an
> emergent property of "body stuff".   But by analogy would not want to say
> that wetness (and other properties of bulk water molecules) is not strictly
> "molecular dynamics stuff".   And even if one did that, the
> recursion/reduction-ad-absurdum requires that one acknowledge/notice/invoke
> that the properties of any molecule is "emergent" from the elementary
> particles from which it might be composed.
>
>  I think we all believe in free-electrons, protons, neutrons but also
> recognize that *most* of our observed universe is shaped not by *those
> properties* (much less the properties of quarks and gluons or 10d loops of
> abstract things we call strings) but rather by the properties (once again,
> not of molecular dynamics or even chemical reactions) but biological
> functions,  and socio-economic-political functions as well. I *am*
> however, sensitive to the idea that where and how we draw the line between
> mind/body stuff can be important in any given argument, and that sometimes
> dropping that line altogether may be useful?
>
> The above riff on Mind-Stuff v Body-Stuff is really an intro into thoughts
> about how syntax and semantics might bootstrap sequentially.   It feels to
> me that the syntax of one level of abstraction yields an *emergent
> semantics* which in turn becomes the *syntax* of the next "level".I do
> acknowledge that Glen has made some arguments (and references) that are
> against the very abstraction of "levels" and that may well be the hole in
> everything I'm unrolling here, but for the moment, I feel I have a clear
> picture of a POSET of syntax/semantics, if not a full Hierarchy...
>
> This also backs me into the Platonic ideations with all the charms and
> criticisms already dancing as virtual (ideational) particles around
> that.I will go back to reading A Theory of Nothing
> ...
> and try to keep my offerings here under 10 pages each...
>
> On 3/4/23 4:32 AM, Santafe wrote:
>
> It’s helpful to have a conversation being maintained by somebod(ies) else, to 
> which one can be a bystander without the distraction of coming up with 
> contributions to it.  Things can suggest themselves that get pushed out of 
> awareness when one is carrying the discourse and figuring out what to do next 
> within it.
>
>
>
> In reading the below, about the time I got to the lines:
>
>
>
> The mind-body problem is the philosophical question of how the mind and body 
> are related. One of the main issues is how mental processes such as thoughts, 
> emotions, and consciousness are related to physical processes in the brain 
> and body.
>
> I was prompted with a term to refer to these mental/physical things.
>
>
>
> First, my sense of all this is one of 

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Marcus Daniels
How?

From: Friam  On Behalf Of Frank Wimberly
Sent: Monday, March 6, 2023 12:50 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] ChatGPT and William James

>And we humans are different?

In a word, yes.
---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson 
<thompnicks...@gmail.com> wrote:
However, it's important to remember that there are also important differences 
between a large language model and human consciousness. While a large language 
model can generate text that may seem to flow like a stream of consciousness, 
it does not have the same kind of subjective experience that humans do, and its 
output is based solely on statistical patterns in the input it has been trained 
on.

And we humans are different?

On Sat, Mar 4, 2023 at 11:51 AM Steve Smith 
<sasm...@swcp.com> wrote:

Also second EricS's appreciation for having someone else(s) maintain a coherent 
conversation for the myriad ideas that it allows me to explore without being 
central to the maintenance of the thread.   I realize this may be almost pure 
tangent to others, so I rarely expect anyone to take my bait unless it is to 
correct any egregious mis-attributions or think-utational fallacies.

Starting with Glen's assertion/suggestion/assumption that there is not 
mind-stuff and body stuff, just body stuff:  I appeal to the general 
abstraction of Emergence and use Russell Standish's example in his "Theory of 
Nothing"
 that a water molecule is not wet... wetness is a property of aggregates of 
water molecules.   I would jump a dozen layers of emergent-bootstrapping from 
there to assert that "mind stuff", if it ever makes sense, is an emergent 
property of "body stuff".   But by analogy would not want to say that wetness 
(and other properties of bulk water molecules) is not strictly "molecular 
dynamics stuff".   And even if one did that, the 
recursion/reduction-ad-absurdum requires that one acknowledge/notice/invoke 
that the properties of any molecule is "emergent" from the elementary particles 
from which it might be composed.

 I think we all believe in free-electrons, protons, neutrons but also recognize 
that *most* of our observed universe is shaped not by *those properties* (much 
less the properties of quarks and gluons or 10d loops of abstract things we 
call strings) but rather by the properties (once again, not of molecular 
dynamics or even chemical reactions) but biological functions,  and 
socio-economic-political functions as well. I *am* however, sensitive to 
the idea that where and how we draw the line between mind/body stuff can be 
important in any given argument, and that sometimes dropping that line 
altogether may be useful?

The above riff on Mind-Stuff v Body-Stuff is really an intro into thoughts 
about how syntax and semantics might bootstrap sequentially.   It feels to me 
that the syntax of one level of abstraction yields an *emergent semantics* 
which in turn becomes the *syntax* of the next "level".I do acknowledge 
that Glen has made some arguments (and references) that are against the very 
abstraction of "levels" and that may well be the hole in everything I'm 
unrolling here, but for the moment, I feel I have a clear picture of a POSET of 
syntax/semantics, if not a full Hierarchy...

This also backs me into the Platonic ideations with all the charms and 
criticisms already dancing as virtual (ideational) particles around that.I 
will go back to reading A Theory of 
Nothing...
 and try to keep my offerings here under 10 pages each...
On 3/4/23 4:32 AM, Santafe wrote:

It’s helpful to have a conversation being maintained by somebod(ies) else, to 
which one can be a bystander without the distraction of coming up with 
contributions to it.  Things can suggest themselves that get pushed out of 
awareness when one is carrying the discourse and figuring out what to do next 
within it.



In reading the below, about the time I got to the lines:



The mind-body problem is the philosophical question of how the mind and body 
are related. One of the main issues is how mental processes such as thoughts, 
emotions, and consciousness are related to physical processes in the brain and 
body.

I was prompted with a term to refer to these mental/physical things.



First, my sense of all this is one of witnessing structures in conversation.  
Maybe I am more primed to that because with ChatGPT as the topic, one fronts 
awareness of conversation as somewhat free-floating from its semantic ground.  
As tokens in conversation, it is perfectly sensible to say that (thoughts, 
emotions, consciousness) are in a category Mental, while (weakness, hunger, 
itching) go into a 

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Frank Wimberly
>And we humans are different?

In a word, yes.

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson 
wrote:

> *However, it's important to remember that there are also important
> differences between a large language model and human consciousness. While a
> large language model can generate text that may seem to flow like a stream
> of consciousness, it does not have the same kind of subjective experience
> that humans do, and its output is based solely on statistical patterns in
> the input it has been trained on.*
>
>
>
> And we humans are different?
>
>
>
> On Sat, Mar 4, 2023 at 11:51 AM Steve Smith  wrote:
>
> Also second EricS's appreciation for having someone else(s) maintain a
> coherent conversation for the myriad ideas that it allows me to explore
> without being central to the maintenance of the thread.   I realize this
> may be almost pure tangent to others, so I rarely expect anyone to take my
> bait unless it is to correct any egregious mis-attributions or
> think-utational fallacies.
>
> Starting with Glen's assertion/suggestion/assumption that there is not
> mind-stuff and body stuff, just body stuff:  I appeal to the general
> abstraction of Emergence and use Russell Standish's example in his "Theory
> of Nothing"
> that a water molecule is not wet... wetness is a property of aggregates of
> water molecules.   I would jump a dozen layers of emergent-bootstrapping
> from there to assert that "mind stuff", if it ever makes sense, is an
> emergent property of "body stuff".   But by analogy would not want to say
> that wetness (and other properties of bulk water molecules) is not strictly
> "molecular dynamics stuff".   And even if one did that, the
> recursion/reduction-ad-absurdum requires that one acknowledge/notice/invoke
> that the properties of any molecule is "emergent" from the elementary
> particles from which it might be composed.
>
>  I think we all believe in free-electrons, protons, neutrons but also
> recognize that *most* of our observed universe is shaped not by *those
> properties* (much less the properties of quarks and gluons or 10d loops of
> abstract things we call strings) but rather by the properties (once again,
> not of molecular dynamics or even chemical reactions) but biological
> functions,  and socio-economic-political functions as well. I *am*
> however, sensitive to the idea that where and how we draw the line between
> mind/body stuff can be important in any given argument, and that sometimes
> dropping that line altogether may be useful?
>
> The above riff on Mind-Stuff v Body-Stuff is really an intro into thoughts
> about how syntax and semantics might bootstrap sequentially.   It feels to
> me that the syntax of one level of abstraction yields an *emergent
> semantics* which in turn becomes the *syntax* of the next "level".I do
> acknowledge that Glen has made some arguments (and references) that are
> against the very abstraction of "levels" and that may well be the hole in
> everything I'm unrolling here, but for the moment, I feel I have a clear
> picture of a POSET of syntax/semantics, if not a full Hierarchy...
>
> This also backs me into the Platonic ideations with all the charms and
> criticisms already dancing as virtual (ideational) particles around
> that.I will go back to reading A Theory of Nothing
> ...
> and try to keep my offerings here under 10 pages each...
>
> On 3/4/23 4:32 AM, Santafe wrote:
>
> It’s helpful to have a conversation being maintained by somebod(ies) else, to 
> which one can be a bystander without the distraction of coming up with 
> contributions to it.  Things can suggest themselves that get pushed out of 
> awareness when one is carrying the discourse and figuring out what to do next 
> within it.
>
>
>
> In reading the below, about the time I got to the lines:
>
>
>
> The mind-body problem is the philosophical question of how the mind and body 
> are related. One of the main issues is how mental processes such as thoughts, 
> emotions, and consciousness are related to physical processes in the brain 
> and body.
>
> I was prompted with a term to refer to these mental/physical things.
>
>
>
> First, my sense of all this is one of witnessing structures in conversation.  
> Maybe I am more primed to that because with ChatGPT as the topic, one fronts 
> awareness of conversation as somewhat free-floating from its semantic ground. 
>  As tokens in conversation, it is perfectly sensible to say that (thoughts, 
> emotions, consciousness) are in a category Mental, while (weakness, hunger, 
> itching) go into a category Physical.  Not only is it okay to say they fit 
> tolerably into “categories” (or “classes”); the reason they do 

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread Nicholas Thompson
However, it's important to remember that there are also important differences 
between a large language model and human consciousness. While a large language 
model can generate text that may seem to flow like a stream of consciousness, 
it does not have the same kind of subjective experience that humans do, and its 
output is based solely on statistical patterns in the input it has been trained 
on.

And we humans are different?

On Sat, Mar 4, 2023 at 11:51 AM Steve Smith  wrote:

Also second EricS's appreciation for having someone else(s) maintain a coherent 
conversation for the myriad ideas that it allows me to explore without being 
central to the maintenance of the thread.   I realize this may be almost pure 
tangent to others, so I rarely expect anyone to take my bait unless it is to 
correct any egregious mis-attributions or think-utational fallacies.

Starting with Glen's assertion/suggestion/assumption that there is not 
mind-stuff and body stuff, just body stuff:  I appeal to the general 
abstraction of Emergence and use Russell Standish's example in his "Theory of 
Nothing" that a water molecule is not wet... wetness is a property of 
aggregates of water molecules.   I would jump a dozen layers of 
emergent-bootstrapping from there to assert that "mind stuff", if it ever makes 
sense, is an emergent property of "body stuff".   But by analogy would not want 
to say that wetness (and other properties of bulk water molecules) is not 
strictly "molecular dynamics stuff".   And even if one did that, the 
recursion/reduction-ad-absurdum requires that one acknowledge/notice/invoke 
that the properties of any molecule is "emergent" from the elementary particles 
from which it might be composed.

I think we all believe in free-electrons, protons, neutrons but also recognize 
that *most* of our observed universe is shaped not by *those properties* (much 
less the properties of quarks and gluons or 10d loops of abstract things we 
call strings) but rather by the properties (once again, not of molecular 
dynamics or even chemical reactions) but biological functions,  and 
socio-economic-political functions as well. I *am* however, sensitive to the 
idea that where and how we draw the line between mind/body stuff can be 
important in any given argument, and that sometimes dropping that line 
altogether may be useful?

The above riff on Mind-Stuff v Body-Stuff is really an intro into thoughts 
about how syntax and semantics might bootstrap sequentially.   It feels to me 
that the syntax of one level of abstraction yields an *emergent semantics* 
which in turn becomes the *syntax* of the next "level".    I do acknowledge 
that Glen has made some arguments (and references) that are against the very 
abstraction of "levels" and that may well be the hole in everything I'm 
unrolling here, but for the moment, I feel I have a clear picture of a POSET of 
syntax/semantics, if not a full Hierarchy...

This also backs me into the Platonic ideations with all the charms and 
criticisms already dancing as virtual (ideational) particles around that.    I 
will go back to reading A Theory of Nothing... and try to keep my offerings 
here under 10 pages each...

On 3/4/23 4:32 AM, Santafe wrote:

It’s helpful to have a conversation being maintained by somebod(ies) else, to 
which one can be a bystander without the distraction of coming up with 
contributions to it.  Things can suggest themselves that get pushed out of 
awareness when one is carrying the discourse and figuring out what to do next 
within it.

In reading the below, about the time I got to the lines:

The mind-body problem is the philosophical question of how the mind and body 
are related. One of the main issues is how mental processes such as thoughts, 
emotions, and consciousness are related to physical processes in the brain and 
body.

I was prompted with a term to refer to these mental/physical things.

First, my sense of all this is one of witnessing structures in conversation.  
Maybe I am more primed to that because with ChatGPT as the topic, one fronts 
awareness of conversation as somewhat free-floating from its semantic ground.  
As tokens in conversation, it is perfectly sensible to say that (thoughts, 
emotions, consciousness) are in a category Mental, while (weakness, hunger, 
itching) go into a category Physical.  Not only is it okay to say they fit 
tolerably into “categories” (or “classes”); the reason they do so is that they 
are connected by all sorts of linguistic usage relations.  The relations 
probably in no small part bring about the stability of the categorical sense of 
the terms. But what word do we then use to refer to such classes in speech?  I 
would use the word “registers”.  The Mental is a register of conversation about 
events, and the Physical is another register.   Jochen’s email below has 
ChatGPT saying James referred to these as “aspects” of various bodily or 
embodied events.  Sometimes I’m okay with a word like “aspects”, but it invites 
essentialist thinking.

Re: [FRIAM] ChatGPT and William James

2023-03-06 Thread glen

Interesting. EricS' layout triggered me. I've used the word "registration" a lot, mostly because of BC Smith's re-terming 
from "inscription error" to "pre-emptive registration". But I'd never actually looked at the etymology of 
"register". From EricS' post, I got the connotation of a musical 
register, which I'd never before linked to the naming process of 
registration.

But SteveS is right. I reject not only the emergence sense of "levels", but also the leveling in that 
Wikipedia entry. Although Eric's use of "register" reminded me of musical categories, his treatment of it 
seems more closely aligned to *logging* or documentation ... writing, more along the lines of "gest" 
... more action, less thought.

The split between syntax and semantics has never really worked for me because (I think) they're 
both so cognitive. What would work better would be something like "negotiated" vs 
"imputed", collaborative vs coercive ... or somesuch. The point is that we don't need 
mind/body distinctions if we can *log* our experiences as collaborative vs coercive. Body stuff is 
inherently collaborative, with oneself, with others, with the inanimate environment, etc. Mind 
stuff tends to be coercive. You have some *idea* about the world, then you go about bending the 
world to fit that idea, or abstracting out details that don't fit that idea. (I'm sure I've 
triggered someone... but I'm not writing about others. I'm writing about myself.)

In this sense, "emergence" isn't essentialist *if* every boundary between any 2 categories can 
*move*, be re-negotiated, especially as a function of the *logger*, the register. But in order for that to work with 
concepts like emergence, you have to eliminate *level*. "Order" remains useful, though, e.g. 
signs as objects, objects as signs, interpretants as objects, etc. But the ordering need not be total(ly) 
or even partial(ly complete).


On 3/4/23 10:51, Steve Smith wrote:

Also second EricS's appreciation for having someone else(s) maintain a coherent 
conversation for the myriad ideas that it allows me to explore without being 
central to the maintenance of the thread.   I realize this may be almost pure 
tangent to others, so I rarely expect anyone to take my bait unless it is to 
correct any egregious mis-attributions or think-utational fallacies.

Starting with Glen's assertion/suggestion/assumption that there is not mind-stuff and body stuff, just body stuff:  I appeal to the general 
abstraction of Emergence and use Russell Standish's example in his "Theory of Nothing" that a 
water molecule is not wet... wetness is a property of aggregates of water molecules.   I would jump a dozen layers of emergent-bootstrapping from 
there to assert that "mind stuff", if it ever makes sense, is an emergent property of "body stuff".   But by analogy would not 
want to say that wetness (and other properties of bulk water molecules) is not strictly "molecular dynamics stuff".   And even if one did 
that, the recursion/reduction-ad-absurdum requires that one acknowledge/notice/invoke that the properties of any molecule is "emergent" 
from the elementary particles from which it might be composed.

  I think we all believe in free-electrons, protons, neutrons but also 
recognize that *most* of our observed universe is shaped not by *those 
properties* (much less the properties of quarks and gluons or 10d loops of 
abstract things we call strings) but rather by the properties (once again, not 
of molecular dynamics or even chemical reactions) but biological functions,  
and socio-economic-political functions as well. I *am* however, sensitive 
to the idea that where and how we draw the line between mind/body stuff can be 
important in any given argument, and that sometimes dropping that line 
altogether may be useful?

The above riff on Mind-Stuff v Body-Stuff is really an intro into thoughts about how syntax and 
semantics might bootstrap sequentially.   It feels to me that the syntax of one level of 
abstraction yields an *emergent semantics* which in turn becomes the *syntax* of the next 
"level".    I do acknowledge that Glen has made some arguments (and references) that are 
against the very abstraction of "levels" and that may well be the hole in everything I'm 
unrolling here, but for the moment, I feel I have a clear picture of a POSET of syntax/semantics, 
if not a full Hierarchy...

This also backs me into the Platonic ideations with all the charms and criticisms already dancing 
as virtual (ideational) particles around that.    I will go back to reading A Theory of Nothing