Re: [FRIAM] experience monism

2023-02-08 Thread Nicholas Thompson
to friam

Dear David and other helpful persons,

Thanks again for your help here.  Man! Do I look forward to your definitive
work on experience!  All this cogitation is exhausting me.

Your comment that I might dismiss your questions has an edge that I didn’t
see when you first made it.  There is, perhaps, a sense in which I *should*
dismiss them.   The questions you ask have the feel of metaphysics.  You
know, How many angels can dance on the head of a pin?  Pragmatists try to
dissolve metaphysical questions either into non-questions or empirical
questions. “After all, if the answer to the question isn’t to find some
angels and measure their feet, then what *are* we talking about, eh?”  Perhaps
we might devote our time to a more productive discussion?  Notice that the
whole notion of a “productive” discussion itself reeks of pragmatism with
its convergentist aspirations.



The only thing that can be positively asserted about metaphysics – by which
I mean that vast spongy fetid cloud of supposition that surrounds and
infects everything we explicitly believe -- is that it is inevitable.  Thus,
though debating metaphysics is useless, failing to own up to it is
dishonest.   Metaphysics is not something we propose; it’s something we
confess to.

So, I feel obligated to go on and answer these questions, even though their
answers may indeed be unrelated to the proper thrust of “experience monism”.
Whatever metaphysics might be offered to support my experience monism, its
value will always be in its capacity to root important concepts such as
truth and reality, not in relations between our experiences and some
notional world-beyond-experience, but in relations among experiences
themselves.

*The eloquence and perspicacity of Professor Thompson have convinced me to
become an Experience monist. In my naive sophomoric enthusiasm, I have set
about writing THE definitive work on Experience. But I have a few
questions:*



*   1A) If an Experience is a composite, there must be
'atomic' Experiences from which it is composed. Is it possible to Experience
an "atomic Experience" in isolation?*

Any whole with different properties can be analyzed into parts.  If your
first experience of apple pie is of one your gramma took from her oven and
sliced, then all of that is apple pie in the first instance. As cinnamon is
experienced in other contexts and apple pie is eaten in other contexts, the
experience of apple pie can be analyzed into parts, meaning that one can
begin to experience cinnamon as something apart from the experience of
apple pie. The analysis of any experience into component experiences is as
much a cognitive achievement as its unification.



*2) Does an Experience have duration, or is each Experience akin to a frame
of a film and continuity simply an artifact of being presented at some
rate; e.g., 30 frames per second?*

I like, for the moment, to think of experiences as successive
lightning-like illuminations of a landscape of associations.  I would call
these associations “signs” if my grasp of semeiotics were not so protean.

You did not quite ask me, but I must answer the question of time, or order
of experiences.  Peirce at one point offers the quasi-neural notion of the
fading of nodes in the network of associations since each was last
illuminated.  So parts of this landscape of associations get harder to
illuminate as they are illuminated less often.
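
To make that image concrete, here is a toy sketch of the fading idea.  It
is my own construction, not Peirce's; the decay rule and node names are
made up for illustration (Python):

# Toy "fading landscape": each node in a network of associations gets
# harder to illuminate the longer it has gone unlit.  The linear decay
# rule below is an assumption, chosen only for simplicity.

class Landscape:
    def __init__(self, nodes, fade_rate=0.5):
        self.fade_rate = fade_rate
        self.last_lit = {n: 0 for n in nodes}  # tick each node was last lit
        self.clock = 0

    def effort_to_illuminate(self, node):
        # Effort grows with time since the node was last illuminated.
        return 1.0 + self.fade_rate * (self.clock - self.last_lit[node])

    def illuminate(self, node):
        effort = self.effort_to_illuminate(node)
        self.last_lit[node] = self.clock       # lighting a node refreshes it
        self.clock += 1
        return effort

land = Landscape(["cinnamon", "apple pie", "gramma's oven"])
land.illuminate("cinnamon")
land.illuminate("cinnamon")
print(land.effort_to_illuminate("gramma's oven"))  # 2.0: unlit for two ticks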

But these questions seem like candidates for empirical investigation using
tachistoscopes, and that sort of thing.

*3) Can Experiences be differentiated as "potential" and "actual?" To
illustrate: I turn on the camera on my phone and images pass through the
lens and appear on the screen, but a photograph does not come into
existence until I press the shutter button. Does something similar happen
with experiences? They are potential until I "press the conscious awareness
button" at which point they become actual?*

Potentiality and actuality are themselves cognitive achievements and
experiences in their own right.

*4) Can Experiences be categorized? To borrow vocabulary (somewhat
tortured) from Peter Sjostedt-Hughes' pentad of perception:*

Peter’s pentad doesn’t make a whole lot of sense to me, laced as it is
with apriorist dualist appeals to physiology and an external world.  I
think a disrupted experience is one that doesn’t fit well with existing
networks of association.

   - *Experience grounded in/originating from the spatio-temporal
   environment (Sensed Experience)*
   - *Experience of an atemporal quality, e.g., color or scent (Perceived
   Experience)*
   - *An Experience partly caused by an external physicality—e.g., motion
   of molecules partly causative of the Experience of heat (Ecto-Physical
   Experience)*
   - *An Experience that is partly caused by an internal physicality—e.g.,
   synapses firing in the brain (Endo-Physical Experience)*
   - *Experiences not grounded in/originating from the spatio-temporal
   environment, e.g., 

Re: [FRIAM] Datasets as Experience

2023-02-08 Thread Marcus Daniels
I don't know what Bing uses for storage, or how it relates to OpenAI's added 
codebase.   There are older references to use of TensorFlow at OpenAI, but 
things may have changed.   The "file" would likely be the serialization of the 
(distributed) tensors of the ChatGPT model.   One possible way storage works is 
with Azure Blobs but I see they also use more conventional POSIX-like 
filesystems like Lustre.   I'm not sure what the point of your remark is.   It 
seems obvious to me a well-placed data engineer at Microsoft could identify the 
exact scope of the secondary storage of Bing's neural networks.   My point is 
that passable cultural knowledge will probably fit in a rack of disk drives (or 
maybe even a few?).   I find that humorous, but I'm told unusual things amuse 
me.  
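
For scale, a back-of-the-envelope sketch.  The parameter count, byte width,
and drive size below are assumptions for illustration, not anything known
about Bing's actual deployment:

# How many disk drives would a serialized model need?  All figures assumed.
params = 175e9             # a GPT-3-scale model, ~175 billion parameters
bytes_per_param = 2        # fp16/bf16 serialization
drive_tb = 18              # one large commodity disk drive, in TB

model_tb = params * bytes_per_param / 1e12
print(f"model: ~{model_tb:.2f} TB, drives: ~{model_tb / drive_tb:.2f}")
# -> ~0.35 TB, a fraction of one drive.  Even with training data and
#    checkpoints on top, a rack of drives looks generous.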

-Original Message-
From: Friam  On Behalf Of glen
Sent: Wednesday, February 8, 2023 12:54 PM
To: friam@redfish.com
Subject: Re: [FRIAM] Datasets as Experience

Ha! It's cool you used the phrase "a set of files". What does "file" mean? I 
mean, hell, we can't really even well-define "set". The other day, some 
internet person was complaining that they wanted to leave Twitter now that 
Elno's taken it to hell. But they needed a Delete. I took the liberty of 
explaining that you can only delete things if there's only 1 of that thing. Any 
distributed implementation, at best, makes deletion difficult or, at worst, 
impossible. My example was a journaling file system, a boon for irresponsible 
"rm -rf *" people and data forensics, but a bane for Elno-types. We're so lost 
in metaphor, we can't even see our hand in front of our face.

On 2/8/23 11:47, Marcus Daniels wrote:
> I read that Bing won’t write a cover letter if asked.  I love the idea of a 
> set of files sitting on a filesystem at Microsoft that represent human 
> ethics.  It reminds me of people that complain about being characterized by 
> their skill sets.  I think we are going to learn just how little we are.

-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] Datasets as Experience

2023-02-08 Thread Steve Smith




https://quoteinvestigator.com/2014/08/31/illusion/

Is that irony? Degeneracy? Define "semantics"! >8^D


   degenerate, recursive irony  up the down rabbithole through the
   looking glass all the way down...

My first memory of appreciating this was when I was introduced to 
Bertrand Russell's considerations around what was "really real"!


It may have come from this work: "The Ultimate Constituents of Matter" 
(1915)?




Re: [FRIAM] Datasets as Experience

2023-02-08 Thread glen


https://quoteinvestigator.com/2014/08/31/illusion/

Is that irony? Degeneracy? Define "semantics"! >8^D

On 2/8/23 12:09, Steve Smith wrote:



...
Rather than posit that these models don't have semantics, I'd posit *we* don't 
have semantics.

The problem with communication is the illusion that it exists.


I don't think I know what you mean by these statements?





--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] Datasets as Experience

2023-02-08 Thread glen

Ha! It's cool you used the phrase "a set of files". What does "file" mean? I mean, hell, we can't 
really even well-define "set". The other day, some internet person was complaining that they wanted to leave 
Twitter now that Elno's taken it to hell. But they needed a Delete. I took the liberty of explaining that you can only 
delete things if there's only 1 of that thing. Any distributed implementation, at best, makes deletion difficult or, at 
worst, impossible. My example was a journaling file system, a boon for irresponsible "rm -rf *" people and 
data forensics, but a bane for Elno-types. We're so lost in metaphor, we can't even see our hand in front of our face.
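
To make the point concrete, here is a toy append-only journal; the record
format is invented for illustration, but the moral is the journaling one:
"delete" only appends a tombstone, so the old bytes stay in the log.

journal = []  # each entry: (op, key, value)

def put(key, value):
    journal.append(("put", key, value))

def delete(key):
    journal.append(("del", key, None))   # a tombstone, not an erasure

def current_view():
    # Replay the log to get the "live" state readers see.
    state = {}
    for op, key, value in journal:
        if op == "put":
            state[key] = value
        else:
            state.pop(key, None)
    return state

put("tweet:1", "regrettable opinion")
delete("tweet:1")
print(current_view())   # {} -- "deleted"
print(journal)          # ...but the opinion is still right there in the log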

On 2/8/23 11:47, Marcus Daniels wrote:

I read that Bing won’t write a cover letter if asked.  I love the idea of a set 
of files sitting on a filesystem at Microsoft that represent human ethics.  It 
reminds me of people that complain about being characterized by their skill 
sets.  I think we are going to learn just how little we are.


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] Datasets as Experience

2023-02-08 Thread Steve Smith




...
Rather than posit that these models don't have semantics, I'd posit 
*we* don't have semantics.


The problem with communication is the illusion that it exists.


I don't think I know what you mean by these statements?







Re: [FRIAM] Datasets as Experience

2023-02-08 Thread Marcus Daniels
I read that Bing won’t write a cover letter if asked.  I love the idea of a set 
of files sitting on a filesystem at Microsoft that represent human ethics.  It 
reminds me of people that complain about being characterized by their skill 
sets.  I think we are going to learn just how little we are.
> On Feb 8, 2023, at 10:14 AM, glen  wrote:
> 
> I've recently developed a taste for judging people by the content of their 
> character, something I used to and kindasorta still do denigrate as hubris 
> (because our models of others' character are models, always wrong, rarely 
> useful). And one of the best measures of character is how someone responds 
> when presented with a "learning opportunity". ChatGPT is an extraordinary 
> mansplainer. And even though, when you show it facts that contradict its 
> prior opinion, it gives lip-service with words like "sorry", it will continue 
> to *confidently* spout half-truths and rhetorical bullsh¡t (even if you ask 
> it to, say, write an LCG pRNG in C). Just like the tendency of apps like 
> Stable Diffusion to "pornify" women, ChatGPT encodes the culture of its 
> input. So if ChatGPT is a mansplainer or evil, in any sense, it's *because* 
> the culture from which it draws its input is that. I.e. It's a mirror. We are 
> evil. We are dreadworthy.
> 
> BTW, I did a back of the envelope calculation and the cost of operating one 
> (small) 777 per day seems to be about the same as operating the one ChatGPT 
> per day (~$100k). Presumably, if/when OAI begins distributing the model, such 
> that there are several of them out there (like the fleets of 777s), the costs 
> will be lower. At that point, the semantic content of one 777 might exceed 
> that of one ChatGPT instance.
> 
>> On 2/8/23 09:08, Santafe wrote:
>> It’s funny.  I was reading some commentary on this last week (can’t even 
>> remember where now; that was _last week_!), and I remember thinking that the 
>> description reminded me of Williams Syndrome in people.  They have a 
>> grammatical sense that is at the stronger end of the human range, but their 
>> train of meaning has come to be characterized (again, a now-tropish 
>> short-hand) as “word salad”.
>> That there should be several somewhat-autonomous processes running in 
>> parallel in people, and coupled by some kind of message-passing, as Ray 
>> Jackendoff proposes, seems quite reasonable and in keeping with brain 
>> biology, and if there is, it would be a compact way to account for the 
>> seeming independence in refinement of grammatical sense and whatever other 
>> part of sentence-coherence we have come to term “semantics”.
>> Last year, too, someone (I think my boss at the time, which would make it 
>> two years ago) told me about some nature paper saying that a comparative 
>> genome analysis of domestic dogs and wolves had shown a mutation in the dogs 
>> at the cognate locus to the one that results in Williams Syndrome in people. 
>>  That would be an easy indulgent interpretation: the greater 
>> affectionateness preserved into adulthood, and the increased verbal-or-other 
>> communicativeness.  Though Barry Lopez, I think it was, argues that wolves 
>> have higher social intelligence, which I guess would be making some claim 
>> about a “semantics”.
>> The chatbot has, however, a kind of pure authentic evil that Philip K. Dick 
>> tried to mimic (the argument with the door), and came close enough to be 
>> laughing-through-tears, but could not truly simulate as it shines through in 
>> the Ginsparg exchange.  Or dealing with the maddening, horrifying computer 
>> interfaces that every company puts up to its customers, after they have 
>> fired all the human problem-solvers.  Few things put me in a real dread, 
>> because I am now fairly old, and getting older as fast as I can.  But the 
>> prospect of still being alive in a world where that interface is all that is 
>> left to any of us, is dreadworthy.
>> Eric
 On Feb 8, 2023, at 11:51 AM, glen  wrote:
>>> 
>>> I wrote and deleted a much longer response. But all I really want to say is 
>>> that these *models* are heavily engineered. TANSTAAFL. They are as 
>>> engineered, to intentional purpose, as a Boeing 777. We have this tendency 
>>> to think that because these boxes are opaque (more so to some than others), 
>>> they're magical or "semantic-less". They simulate a human language user 
>>> pretty well. So even if there's little structural analogy, there's good 
>>> behavioral analogy. Rather than posit that these models don't have 
>>> semantics, I'd posit *we* don't have semantics.
>>> 
>>> The problem with communication is the illusion that it exists.
>>> 
>>> On 2/7/23 14:16, Steve Smith wrote:
 DaveW -
 I really don't know much of/if anything really about these modern AIs, 
 beyond what pops up on the myriad popular science/tech feeds that are part 
 of *my* training set/source.   I studied some AI in the 70s/80s and then 
 "Learning Classifier 

Re: [FRIAM] Datasets as Experience

2023-02-08 Thread glen

I've recently developed a taste for judging people by the content of their character, something I used to and 
kindasorta still do denigrate as hubris (because our models of others' character are models, always wrong, 
rarely useful). And one of the best measures of character is how someone responds when presented with a 
"learning opportunity". ChatGPT is an extraordinary mansplainer. And even though, when you show it 
facts that contradict its prior opinion, it gives lip-service with words like "sorry", it will 
continue to *confidently* spout half-truths and rhetorical bullsh¡t (even if you ask it to, say, write an LCG 
pRNG in C). Just like the tendency of apps like Stable Diffusion to "pornify" women, ChatGPT 
encodes the culture of its input. So if ChatGPT is a mansplainer or evil, in any sense, it's *because* the 
culture from which it draws its input is that. I.e. It's a mirror. We are evil. We are dreadworthy.
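
(For reference, an LCG is only a few lines; here is a sketch in Python
rather than the C the prompt asks for, using the Numerical Recipes
constants x_{n+1} = (a*x_n + c) mod 2^32:)

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # Minimal linear congruential generator; yields successive states.
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42)
print([next(gen) for _ in range(3)])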

BTW, I did a back of the envelope calculation and the cost of operating one 
(small) 777 per day seems to be about the same as operating the one ChatGPT per 
day (~$100k). Presumably, if/when OAI begins distributing the model, such that 
there are several of them out there (like the fleets of 777s), the costs will 
be lower. At that point, the semantic content of one 777 might exceed that of 
one ChatGPT instance.
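
Spelling that envelope out, with every figure an assumption chosen only to
land near the stated ~$100k/day:

flight_hours_per_day = 12
cost_777_per_hour = 8_000   # assumed all-in operating cost, USD/hour
cost_777 = flight_hours_per_day * cost_777_per_hour

gpus = 3_500                # assumed inference fleet size
cost_gpu_hour = 1.20        # assumed cloud GPU price, USD/hour
cost_chatgpt = gpus * cost_gpu_hour * 24

print(f"777/day: ~${cost_777:,}, ChatGPT/day: ~${cost_chatgpt:,.0f}")
# -> both in the ~$100k/day ballpark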

On 2/8/23 09:08, Santafe wrote:

It’s funny.  I was reading some commentary on this last week (can’t even 
remember where now; that was _last week_!), and I remember thinking that the 
description reminded me of Williams Syndrome in people.  They have a 
grammatical sense that is at the stronger end of the human range, but their 
train of meaning has come to be characterized (again, a now-tropish short-hand) 
as “word salad”.

That there should be several somewhat-autonomous processes running in parallel 
in people, and coupled by some kind of message-passing, as Ray Jackendoff 
proposes, seems quite reasonable and in keeping with brain biology, and if 
there is, it would be a compact way to account for the seeming independence in 
refinement of grammatical sense and whatever other part of sentence-coherence 
we have come to term “semantics”.

Last year, too, someone (I think my boss at the time, which would make it two 
years ago) told me about some nature paper saying that a comparative genome 
analysis of domestic dogs and wolves had shown a mutation in the dogs at the 
cognate locus to the one that results in Williams Syndrome in people.  That 
would be an easy indulgent interpretation: the greater affectionateness 
preserved into adulthood, and the increased verbal-or-other communicativeness.  
Though Barry Lopez, I think it was, argues that wolves have higher social 
intelligence, which I guess would be making some claim about a “semantics”.

The chatbot has, however, a kind of pure authentic evil that Philip K. Dick 
tried to mimic (the argument with the door), and came close enough to be 
laughing-through-tears, but could not truly simulate as it shines through in 
the Ginsparg exchange.  Or dealing with the maddening, horrifying computer 
interfaces that every company puts up to its customers, after they have fired 
all the human problem-solvers.  Few things put me in a real dread, because I am 
now fairly old, and getting older as fast as I can.  But the prospect of still 
being alive in a world where that interface is all that is left to any of us, 
is dreadworthy.

Eric




On Feb 8, 2023, at 11:51 AM, glen  wrote:

I wrote and deleted a much longer response. But all I really want to say is that these 
*models* are heavily engineered. TANSTAAFL. They are as engineered, to intentional 
purpose, as a Boeing 777. We have this tendency to think that because these boxes are 
opaque (more so to some than others), they're magical or "semantic-less". They 
simulate a human language user pretty well. So even if there's little structural analogy, 
there's good behavioral analogy. Rather than posit that these models don't have 
semantics, I'd posit *we* don't have semantics.

The problem with communication is the illusion that it exists.

On 2/7/23 14:16, Steve Smith wrote:

DaveW -
I really don't know much of/if anything really about these modern AIs, beyond what pops 
up on the myriad popular science/tech feeds that are part of *my* training set/source.   
I studied some AI in the 70s/80s and then "Learning Classifier Systems" and 
(other) Machine Learning techniques in the late 90s, and then worked with folks who did 
Neural Nets during the early 00s, including trying to help them find patterns *in* the NN 
structures to correlate with the function of their NNs and training sets, etc.
The one thing I would say about what I hear you saying here is that I don't think these modern 
learning models, by definition, have either syntax *or* semantics built into them..   they are 
what I colloquially (because I'm sure there is a very precise term of art by the same name) think 
of 

Re: [FRIAM] Datasets as Experience

2023-02-08 Thread Santafe
It’s funny.  I was reading some commentary on this last week (can’t even 
remember where now; that was _last week_!), and I remember thinking that the 
description reminded me of Williams Syndrome in people.  They have a 
grammatical sense that is at the stronger end of the human range, but their 
train of meaning has come to be characterized (again, a now-tropish short-hand) 
as “word salad”.  

That there should be several somewhat-autonomous processes running in parallel 
in people, and coupled by some kind of message-passing, as Ray Jackendoff 
proposes, seems quite reasonable and in keeping with brain biology, and if 
there is, it would be a compact way to account for the seeming independence in 
refinement of grammatical sense and whatever other part of sentence-coherence 
we have come to term “semantics”.
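
A cartoon of that picture: two semi-autonomous processes coupled only by
message-passing.  Purely illustrative; no claim about actual brain wiring,
and the "semantics" test is a deliberate toy:

import queue, threading

channel = queue.Queue()

def grammar():
    # Proposes well-formed strings regardless of sense.
    for s in ["colorless green ideas sleep furiously", "the pie is warm"]:
        channel.put(s)
    channel.put(None)                      # done

def semantics():
    # Judges each proposal independently of how it was generated.
    while (s := channel.get()) is not None:
        verdict = "sensible" if "pie" in s else "word salad"
        print(f"{s!r} -> {verdict}")

t = threading.Thread(target=grammar)
t.start()
semantics()
t.join()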

Last year, too, someone (I think my boss at the time, which would make it two 
years ago) told me about some nature paper saying that a comparative genome 
analysis of domestic dogs and wolves had shown a mutation in the dogs at the 
cognate locus to the one that results in Williams Syndrome in people.  That 
would be an easy indulgent interpretation: the greater affectionateness 
preserved into adulthood, and the increased verbal-or-other communicativeness.  
Though Barry Lopez, I think it was, argues that wolves have higher social 
intelligence, which I guess would be making some claim about a “semantics”.

The chatbot has, however, a kind of pure authentic evil that Philip K. Dick 
tried to mimic (the argument with the door), and came close enough to be 
laughing-through-tears, but could not truly simulate as it shines through in 
the Ginsparg exchange.  Or dealing with the maddening, horrifying computer 
interfaces that every company puts up to its customers, after they have fired 
all the human problem-solvers.  Few things put me in a real dread, because I am 
now fairly old, and getting older as fast as I can.  But the prospect of still 
being alive in a world where that interface is all that is left to any of us, 
is dreadworthy.

Eric



> On Feb 8, 2023, at 11:51 AM, glen  wrote:
> 
> I wrote and deleted a much longer response. But all I really want to say is 
> that these *models* are heavily engineered. TANSTAAFL. They are as 
> engineered, to intentional purpose, as a Boeing 777. We have this tendency to 
> think that because these boxes are opaque (more so to some than others), 
> they're magical or "semantic-less". They simulate a human language user 
> pretty well. So even if there's little structural analogy, there's good 
> behavioral analogy. Rather than posit that these models don't have semantics, 
> I'd posit *we* don't have semantics.
> 
> The problem with communication is the illusion that it exists.
> 
> On 2/7/23 14:16, Steve Smith wrote:
>> DaveW -
>> I really don't know much of/if anything really about these modern AIs, 
>> beyond what pops up on the myriad popular science/tech feeds that are part 
>> of *my* training set/source.   I studied some AI in the 70s/80s and then 
>> "Learning Classifier Systems" and (other) Machine Learning techniques in the 
>> late 90s, and then worked with folks who did Neural Nets during the early 
>> 00s, including trying to help them find patterns *in* the NN structures to 
>> correlate with the function of their NNs and training sets, etc.
>> The one thing I would say about what I hear you saying here is that I don't 
>> think these modern learning models, by definition, have either syntax *or* 
>> semantics built into them..   they are what I colloquially (because I'm sure 
>> there is a very precise term of art by the same name) think of or call 
>> "model-less" models. At most I think the only models of language they have 
>> explicit in them might be the Alphabet and conventions about white-space and 
>> perhaps punctuation?   And very likely they span *many* languages, not just 
>> English or maybe even "Indo European".
>> I wonder what others know about these things or if there are known good 
>> references?
>> Perhaps we should just feed these maunderings into ChatGPT and it will sort 
>> us out forthwith?!
>> - SteveS
>> On 2/7/23 2:57 PM, Prof David West wrote:
>>> I am curious, but not enough to do some hard research to confirm or deny, 
>>> but ...
>>> 
>>> Surface appearances suggest, to me, that the large language model AIs seem 
>>> to focus on syntax and statistical word usage derived from those large 
>>> datasets.
>>> 
>>> I do not see any evidence in same of semantics (probably because I am but a 
>>> "bear of little brain.")
>>> 
>>> In contrast, the Cyc project (Douglas Lenat, 1984 - and still out there as 
>>> an expensive AI) was all about semantics. The last time I was, briefly, at 
>>> MCC, they were just switching from teaching Cyc how to read newspapers and 
>>> engage in meaningful conversation about the news of the day, to teaching it 
>>> how to read the National Enquirer, etc. and differentiate between 

Re: [FRIAM] Datasets as Experience

2023-02-08 Thread glen

I wrote and deleted a much longer response. But all I really want to say is that these 
*models* are heavily engineered. TANSTAAFL. They are as engineered, to intentional 
purpose, as a Boeing 777. We have this tendency to think that because these boxes are 
opaque (more so to some than others), they're magical or "semantic-less". They 
simulate a human language user pretty well. So even if there's little structural analogy, 
there's good behavioral analogy. Rather than posit that these models don't have 
semantics, I'd posit *we* don't have semantics.

The problem with communication is the illusion that it exists.

On 2/7/23 14:16, Steve Smith wrote:

DaveW -

I really don't know much of/if anything really about these modern AIs, beyond what pops 
up on the myriad popular science/tech feeds that are part of *my* training set/source.   
I studied some AI in the 70s/80s and then "Learning Classifier Systems" and 
(other) Machine Learning techniques in the late 90s, and then worked with folks who did 
Neural Nets during the early 00s, including trying to help them find patterns *in* the NN 
structures to correlate with the function of their NNs and training sets, etc.

The one thing I would say about what I hear you saying here is that I don't think these modern 
learning models, by definition, have either syntax *or* semantics built into them..   they are 
what I colloquially (because I'm sure there is a very precise term of art by the same name) think 
of or call "model-less" models. At most I think the only models of language they have 
explicit in them might be the Alphabet and conventions about white-space and perhaps punctuation?   
And very likely they span *many* languages, not just English or maybe even "Indo 
European".

I wonder what others know about these things or if there are known good 
references?

Perhaps we should just feed these maunderings into ChatGPT and it will sort us 
out forthwith?!

- SteveS


On 2/7/23 2:57 PM, Prof David West wrote:

I am curious, but not enough to do some hard research to confirm or deny, but 
...

Surface appearances suggest, to me, that the large language model AIs seem to 
focus on syntax and statistical word usage derived from those large datasets.

I do not see any evidence in same of semantics (probably because I am but a "bear of 
little brain.")

In contrast, the Cyc project (Douglas Lenat, 1984 - and still out there as an 
expensive AI) was all about semantics. The last time I was, briefly, at MCC, 
they were just switching from teaching Cyc how to read newspapers and engage in 
meaningful conversation about the news of the day, to teaching it how to read 
the National Enquirer, etc. and differentiate between syntactically and 
literally 'true' news and the false semantics behind same.

davew


On Tue, Feb 7, 2023, at 11:35 AM, Jochen Fromm wrote:

I was just wondering if our prefrontal cortex areas in the brain contain a 
large language model too - but each of them trained on slightly different 
datasets. Similar enough to understand each other, but different enough so that 
everyone has a unique experience and point of view o_O

-J.


 Original message 
From: Marcus Daniels 
Date: 2/6/23 9:39 PM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Datasets as Experience

It depends on whether it is given boundaries between the datasets.   Is it learning one 
distribution or two?
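
The question in miniature, with toy numbers: pooled, two datasets blur into
one broad distribution; with boundaries, each keeps its own tight
statistics.

from statistics import mean, stdev

spanish = [1.0, 1.2, 0.9, 1.1]   # stand-in features from dataset A
italian = [3.0, 3.1, 2.9, 3.2]   # stand-in features from dataset B

pooled = spanish + italian
print("one distribution :", round(mean(pooled), 2), round(stdev(pooled), 2))
print("two distributions:",
      [(round(mean(d), 2), round(stdev(d), 2)) for d in (spanish, italian)])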


*From:* Friam  *On Behalf Of *Jochen Fromm
*Sent:* Sunday, February 5, 2023 4:38 AM
*To:* The Friday Morning Applied Complexity Coffee Group 
*Subject:* [FRIAM] Datasets as Experience


Would a CV of a large language model contain all the datasets it has seen? As 
adaptive agents of our selfish genes we are all trained on slightly different 
datasets. A Spanish speaker is a person trained on a Spanish dataset. An 
Italian speaker is trained on an Italian dataset, etc. Speakers of different 
languages are trained on different datasets, therefore the same sentence is 
easy for a native speaker but impossible to understand for those who do not 
know the language.


Do all large language models need to be trained on the same datasets? Or could many large 
language models be combined into a society of mind, as Marvin Minsky describes in his 
book "The Society of Mind"? Now that they are able to understand language, it 
seems to be possible that one large language model replies to the questions from another. 
And we would even be able to understand the conversations.
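
A sketch of that turn-taking loop.  Here `generate` is a hypothetical
stand-in for whatever API a real model exposes, and its canned replies are
placeholders, not model output:

def generate(model_name, prompt):
    # Hypothetical: a real version would call the model named `model_name`.
    return f"[{model_name}'s reply to: {prompt!r}]"

def converse(model_a, model_b, opener, turns=4):
    # Two models take turns answering each other, Minsky-society style.
    message, transcript = opener, []
    speakers = [model_a, model_b]
    for turn in range(turns):
        speaker = speakers[turn % 2]
        message = generate(speaker, message)
        transcript.append((speaker, message))
    return transcript

for speaker, msg in converse("model-A", "model-B", "What is experience?"):
    print(speaker, ":", msg)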




--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/