Re: [FRIAM] ChatGPT is not very smart...

2023-02-10 Thread Russ Abbott
Thanks for the pointer to this article. I found the new Bing chat engine
quite impressive. To a surprising extent it felt like the author was
interacting with an actual human being. The following brief exchange
doesn't sound or feel like it was generated by stringing together words
found during training. This was a part of the conversation in which the
author accused the Bing chat engine of defaming him. He is claiming that
the misinformation he was given (about an article he had written) was
"published" and hence subject to libel laws.

[image: image.png]

-- Russ

On Fri, Feb 10, 2023 at 11:31 AM Marcus Daniels wrote:

> I don't think it is necessarily the case that a very large neural net would
> need to backtrack.  It could devote more and more resources to different
> hypotheses so long as it was not energy constrained.

[FRIAM] Whisper, a speech-to-text program based on GPT-3

2023-02-10 Thread Barry MacKichan
I downloaded Whisper and tried it out. I fed it a 20-minute screencast I 
did about 10 years ago. There are about four model levels you can choose from, 
which trade accuracy for speed. I tried the recommended level, one down from 
the highest. After a substantial wait, I got the results.


First, the input did not contain any spoken hints about punctuation and 
sentence breaks, such as the ones you have to dictate with most other 
speech-to-text programs. Whisper decided where to end sentences and where to 
place commas, and it put them where I thought they should be.


I was quite surprised that it correctly understood “LaTeX”. I was also 
surprised when it rendered “Skim” (a PDF reader on the Mac) as “SCIM”. This is 
evidence of nerd hands in the selection of the training set. But it did not 
correctly get “pdfTeX”; it came out as “PDF Tech”. Unsurprisingly, it did not 
capitalize the names of our products that I mentioned in the screencast, such 
as “Scientific WorkPlace”.


It allows as an input option a string of text that is consumed by the 
program before it attempts to transcribe the input file. For the next 
run, I used the option


	--initial_prompt "pdfTeX, LaTeX, skim, Scientific WorkPlace, Scientific 
Word"
and the result was flawless but for one error: it still failed to 
recognize “pdfTeX”.
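
In case anyone wants to reproduce the run, here is roughly what it looks like 
through the Python API of the openai-whisper package (the command-line form 
with --model and --initial_prompt is equivalent). This is only a sketch: the 
file name is a stand-in, and I am guessing "medium" is the level one down from 
the highest.

    # Sketch of the run described above, using the openai-whisper Python package.
    # Assumes: pip install openai-whisper, with ffmpeg available on the PATH.
    import whisper

    # "medium" is one step down from the largest model ("large"),
    # trading some accuracy for speed.
    model = whisper.load_model("medium")

    # The initial prompt is consumed before transcription starts and nudges
    # the decoder toward these spellings.
    result = model.transcribe(
        "screencast.mp4",  # stand-in name for the 20-minute screencast
        initial_prompt="pdfTeX, LaTeX, Skim, Scientific WorkPlace, Scientific Word",
    )
    print(result["text"])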


It appears that it does not try to break the text into paragraphs, but I 
may not have given it enough text to test that.


They claim to support 99 languages and to translate speech in any of these 
languages into English text.
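
If I test the translation claim, I expect it to be the same call with a task 
option; again just a sketch, with a made-up file name, reusing the model 
loaded in the sketch above.

    # Sketch (not tested here): translate non-English speech into English text.
    result = model.transcribe("talk_in_german.mp4", task="translate")
    print(result["text"])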


I ran a shorter test of it after disconnecting from all networks, and it 
succeeded. This seems to say that the model data is on my computer.


—Barry
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] A very good piece about ChatGPT.

2023-02-10 Thread glen

I'm curious why it's unanticipated. Is this a (very mild) form of the Nobel 
Effect: https://en.wikipedia.org/wiki/Nobel_Prize_effect? The idea that one's 
been successful in some persnickety sub-domain triggers some dopamine and makes 
them think they'll achieve easier success in some other (equally persnickety) 
sub-domain?

I think this highlights a subtle error in the Sutton article Jochen linked, 
related (sorry for the repetition) to a wise choice of use case(s). Sutton 
writes:

http://www.incompleteideas.net/IncIdeas/BitterLesson.html
"1) AI researchers have often tried to build knowledge into their agents, 2) this 
always helps in the short term, and is personally satisfying to the researcher, but 3) in 
the long run it plateaus and even inhibits further progress, and 4) breakthrough progress 
eventually arrives by an opposing approach based on scaling computation by search and 
learning."

This passage and the whole article *assume* a subset of use cases. At first I thought it 
was simply a lack of distinction between special AI vs. AGI. But it's not. It's the 
assumption of a tiny part of special AI. It's the assumption that AI is equivalent to 
*opaque* AI ... "free lunch" algorithms ... brute force. It might be useful to 
contrast something like ChatGPT with Alpha Zero (or, better yet, Alpha Tensor).

The knowledge is in the scaffolding to the well-formed problem. The knowledge 
isn't in the brute force computation that occurs inside the space constrained 
by the scaffolding. To learn something *is* to build that scaffolding.

On 2/10/23 10:45, Russ Abbott wrote:

The amount of work required to build the intellectual scaffolding almost always 
seems to be an unanticipated problem when learning something new. In many cases 
the scaffolding is not just intellectual. Think of what it takes to learn to 
play a musical instrument!


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] ChatGPT is not very smart...

2023-02-10 Thread Marcus Daniels
I don't think it is necessarily the case that a very large neural net would need 
to backtrack.  It could devote more and more resources to different hypotheses 
so long as it was not energy constrained.


Re: [FRIAM] ChatGPT is not very smart...

2023-02-10 Thread glen

This was laugh out loud funny for me. YMMV.

Arguing with AI: My first dispute with Microsoft’s brilliant and boneheaded 
Bing search engine
https://www.geekwire.com/2023/arguing-with-ai-my-first-dispute-with-microsofts-brilliant-and-boneheaded-new-bing-search-engine/?mc_cid=e4cf83ee41_eid=d5c14b4c73

I quoted my comment about choosing your use case wisely. In this article, I 
think it's clear that mixing the 2 use cases doesn't work out that well ... 
yet, anyway.

On 2/10/23 10:14, glen wrote:

[⛧] I wanted to write something about how the use case of ChatGPT severely 
biases these perspectives on what GPT3.5 is or can do. But the post was too 
long already. It's sufficient to say there are many many *other* use cases for 
GPT3.5. ChatGPT ≠ GPT3.5.


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] A very good piece about ChatGPT.

2023-02-10 Thread Russ Abbott
Very nice piece by Rich Sutton!

Re nanoGPT: even though I'm retired, I wanted to learn more about how LLMs
work. So I started a project in which a number of students and I work
through Karpathy's nanoGPT. We've just started, but I think it will be
worth the effort. Karpathy uses non-trivial Python, and many students don't
know Python at all. So learning Python along with its non-trivial features,
as well as becoming familiar with PyTorch, is something of a challenge for
the project. Students (and I, in fact) were not prepared for the amount of
scaffolding background knowledge one has to build for oneself.
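
To give a flavor of the kind of Python and PyTorch involved, here is a
stripped-down, single-head causal self-attention module in the spirit of
nanoGPT. It is a sketch I put together for illustration, not Karpathy's code;
the class and parameter names are mine.

    # A minimal single-head causal self-attention block, nanoGPT-flavored.
    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalSelfAttention(nn.Module):
        def __init__(self, n_embd: int, block_size: int):
            super().__init__()
            # one linear layer produces queries, keys, and values at once
            self.qkv = nn.Linear(n_embd, 3 * n_embd)
            self.proj = nn.Linear(n_embd, n_embd)
            # lower-triangular mask: token t may only attend to tokens <= t
            self.register_buffer("mask", torch.tril(torch.ones(block_size, block_size)))

        def forward(self, x):                       # x: (batch, time, n_embd)
            B, T, C = x.shape
            q, k, v = self.qkv(x).split(C, dim=2)   # each (B, T, C)
            att = (q @ k.transpose(1, 2)) / math.sqrt(C)   # (B, T, T) scores
            att = att.masked_fill(self.mask[:T, :T] == 0, float("-inf"))
            att = F.softmax(att, dim=-1)            # rows sum to 1 over allowed positions
            return self.proj(att @ v)               # weighted sum of values, re-projected

    # toy usage: 2 sequences of 8 tokens with 16-dimensional embeddings
    y = CausalSelfAttention(n_embd=16, block_size=32)(torch.randn(2, 8, 16))
    print(y.shape)                                  # torch.Size([2, 8, 16])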

The amount of work required to build the intellectual scaffolding almost
always seems to be an unanticipated problem when learning something new. In
many cases the scaffolding is not just intellectual. Think of what it takes
to learn to play a musical instrument!

-- Russ


On Fri, Feb 10, 2023 at 10:09 AM Jochen Fromm  wrote:

> Interesting article. I have found this link to "GPT in 60 Lines of NumPy"
> on Hacker News
> https://jaykmody.com/blog/gpt-from-scratch/
>
> It is similar to nanoGPT and minGPT from Andrej Karpathy, who has joined
> OpenAI again recently
> https://github.com/karpathy/nanoGPT
>
> The secret is apparently not to use a complicated algorithm but just a
> huge amount of data, as Rich Sutton writes here
> http://www.incompleteideas.net/IncIdeas/BitterLesson.html
>
> -J.
>
>
>  Original message 
> From: Russ Abbott 
> Date: 2/10/23 5:53 PM (GMT+01:00)
> To: The Friday Morning Applied Complexity Coffee Group 
>
> Subject: [FRIAM] A very good piece about ChatGPT.
>
> It's in the New Yorker
> 
> and intended for a non-scientific audience.
>
>


Re: [FRIAM] ChatGPT is not very smart...

2023-02-10 Thread glen

From the New Yorker article Russ posted:
"If a compression program knows that force equals mass times acceleration, it can discard a 
lot of words when compressing the pages about physics because it will be able to reconstruct 
them." ... "Perhaps arithmetic is a special case, one for which large language models are 
poorly suited. Is it possible that, in areas outside addition and subtraction, statistical 
regularities in text actually do correspond to genuine knowledge of the real world?"

I think EricS has identified an almost-solution to the problem Chiang implicitly 
expresses above. Chiang's expression of the problem irritates me because it assumes that 
pathological math somehow isn't math. But it is still math. When some boorish yahoo at 
the pub says "2+2 isn't 4", I have to trot out the 3 group (addition mod 3) to show that 
2+2 really isn't always 4: there, 2+2 = 1. Even further, I feel fine arguing that Plato 
was as important as he was not because of his philosophy(ies), but because he wrote in 
*dialogue* format. When a moron like me asks an expert to explain some thing in their 
field, my *approach* to their facts is not somehow false ... or indicative that I don't 
understand something like basic arithmetic.  It's more akin to EricS' suggestion that the 
dialogue is helping me re-slice/re-classify [⛧] the world into the slicing used by the expert.
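
Here is the pub move spelled out as a minimal sketch, assuming by "the 3 group" 
I mean the integers under addition mod 3:

    # Addition in the cyclic group of order 3 (integers mod 3): 2 + 2 = 1, not 4.
    def add_mod3(a, b):
        return (a + b) % 3

    print(add_mod3(2, 2))                              # 1
    # the full addition table over {0, 1, 2}
    for a in range(3):
        print([add_mod3(a, b) for b in range(3)])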

Of course, the extent to which I can make my slicing as crisp as that of the 
expert will vary. My thinking process requires that I fold the slicings I learn 
into some kind of semi-coherent gestalt, integrating all the slicings of every 
domain I can keep in context at the same time. So my ability to re-construct a 
slicing I learned from an expert will be lossy, not because I'm a lossy 
compressor, but because I'm trying to do both 1) snag-testing and 2) reduction 
to a grand unified model. Despite a STEMmy type's tendency to think those are 
the same thing, they're not. Inconsistency robustness (1) and reduction (2) are 
complementary. An absolute reduction to a single logic (as Chiang implies for 
arithmetic) is, as yet, impossible. You have to relinquish either consistency 
or completeness.

Anyway, this comment doesn't change either EricS' or Chiang's conclusions. It's 
just an attempt to show appreciation for both. Thanks.


[⛧] I wanted to write something about how the use case of ChatGPT severely 
biases these perspectives on what GPT3.5 is or can do. But the post was too 
long already. It's sufficient to say there are many many *other* use cases for 
GPT3.5. ChatGPT ≠ GPT3.5.

On 2/10/23 03:11, Santafe wrote:


Re: [FRIAM] A very good piece about ChatGPT.

2023-02-10 Thread Jochen Fromm
Interesting article. I have found this link to "GPT in 60 Lines of NumPy" on 
Hacker News: https://jaykmody.com/blog/gpt-from-scratch/

It is similar to nanoGPT and minGPT from Andrej Karpathy, who has joined 
OpenAI again recently: https://github.com/karpathy/nanoGPT

The secret is apparently not to use a complicated algorithm but just a huge 
amount of data, as Rich Sutton writes here: 
http://www.incompleteideas.net/IncIdeas/BitterLesson.html

-J.

 Original message 
From: Russ Abbott
Date: 2/10/23 5:53 PM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group
Subject: [FRIAM] A very good piece about ChatGPT.

It's in the New Yorker and intended for a non-scientific audience.




Re: [FRIAM] A very good piece about ChatGPT.

2023-02-10 Thread Sarbajit Roy
Thanks for the share. This article explains things very well, from angles I had 
never considered.

On Fri, Feb 10, 2023 at 10:23 PM Russ Abbott  wrote:

> It's in the New Yorker
> 
> and intended for a non-scientific audience.
>


[FRIAM] A very good piece about ChatGPT.

2023-02-10 Thread Russ Abbott
It's in the New Yorker

and intended for a non-scientific audience.


Re: [FRIAM] ChatGPT is not very smart...

2023-02-10 Thread Santafe
So probably this has all been analyzed to death already, by people who, unlike 
me, know what they are talking about.

But in re-reading it, I feel like the structure of the problem is 
characterizable.  

It is as if “facts” that are constraints on the sentences to be composed are 
ordered in a way that is somewhat like lexicographic ordering.  

Lexicographically, azzz is before zaaa; no subsequent letters even get 
consideration (much less the set of them all in aggregate), until the first 
letter has dictated as much of the sorting as its values can index.
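
A trivial way to see the same thing in code, since Python's string comparison 
is lexicographic:

    # Comparison stops at the first differing character; 'a' < 'z', so the
    # remaining letters of "azzz" and "zaaa" are never consulted.
    print("azzz" < "zaaa")               # True
    print(sorted(["zaaa", "azzz"]))      # ['azzz', 'zaaa']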

Likewise, somewhere chatGPT was stuck in a local valley, where there was a 
“fact” that a particular list of two states is “the core of the answer” to this 
set of questions, and membership in the list is the first condition in a 
selection-tree for replies.  However tortured, the following sentences have to 
operate within a reality that has already been truncated by the “first fact”.

(Again, since this is obvious, my even belaboring it is kind of dull once I see 
it on the page.)

Come to think of it, it reminds me too of Vygotsky’s claim that children first 
assign category terms by what he called “family resemblances”, and only later 
at the age where some developmental threshold is crossed, do they form 
“predicates” in the sense we would use the term in language semantics.  “Family 
resemblances” can be, effectively, anything, are very idiosyncratic to the 
experiential history of any particular child-in-context, and in that sense are 
fragile categories.  The distinction in predicates, once they emerge, is that 
they seem to be much less fragile, which amounts to being exchangeable across 
thinkers and across situations for a single thinker.

As predicates, membership-in-a-list, letter-count, being-a-state-name, are of 
quite unlike types.  As there is a kind of Venn-diagram intersection logic that 
can be applied to letter-count and being-a-state-name, which does not exist in 
general form for membership in arbitrary lists, we learn that the latter 
predicates should get priority in navigating streams of thoughts and of 
sentences.  But if, to chatGPT, those are not yet predicates, but rather 
various modes of family resemblance, all comparably eligible, one can scan how 
a dialog of the below form would be a derivable outcome.

If that is the right analysis, the step of getting stuck in a pre-emptive 
membership-in-a-list commitment is not the revealing, or by that criterion the 
interesting, part of the dialog below.  The interesting part becomes the 
suddenly much-higher need for “cleverness”, to find flowing sentences that can 
be forced to the far side of an unresolvable contradiction.  It’s like my 
laptop when some internet-check-in step becomes unsatisfiable: it will dump 
more and more CPU into trying to do something that got mis-configured, the fan 
ramps up, and this will spiral indefinitely until I stop what I am doing and 
kill the offending process.  One can imagine that “learning” what predicates 
“are”, that they “have” logics, and what those logics “are”, is a feedback loop 
of trying to avoid the mental costs of getting into blind alleys.  The deeper 
the backtrack to find the unbinding choice, the harder the learning program is, 
until we get up to scientific categories, where it becomes quite difficult to 
recognize that we had invalid meanings for “time”, “space”, and “state”.  But it 
is a kind of continuous process of rewarding laziness across all those levels of 
organization of thought.

I also, of course, cannot keep out of mind the George Santos characters or 
similar pathological liars, who live full-time in this navigating-snags mode.  
Police interrogators think they can wear even pathological liars down — catch 
them out — because the effort level is high enough that eventually nobody can 
sustain it.  But at the same time, humans have a wide talent range for 
different tasks, and I could believe that for the pathological liar, this 
snag-navigating can be done much more fluently and with lower effort, than the 
same degree of performance would demand of a “normie”.

All the above feels about as satisfying as listening to somebody explain a 
joke.  Oh well.

Eric


> On Feb 2, 2023, at 1:49 PM, Barry MacKichan  
> wrote:
> 
> or is obsessed with four-letter words. From an email from another group. 
> Paragraphs with an initial capital are ChatGPT’s responses. The others are 
> input:
> 
> Chat GPT dialog, courtesy of Paul Ginsparg:
> 
> 
> “how many states in the united states have four letter names?
> 
> “There are actually two states in the United States that have four-letter 
> names: Iowa and Ohio. Both states were named after Native American words, 
> with Iowa derived from the word "ayuxba," which means "this is the place," 
> and Ohio coming from the Iroquois word "ohi-yo," meaning "great river.
> 
> “what about utah?
> 
> “Utah is not a four-letter word. It is a state in the western United