From the New Yorker article Russ posted:
"If a compression program knows that force equals mass times acceleration, it can discard a 
lot of words when compressing the pages about physics because it will be able to reconstruct 
them." ... "Perhaps arithmetic is a special case, one for which large language models are 
poorly suited. Is it possible that, in areas outside addition and subtraction, statistical 
regularities in text actually do correspond to genuine knowledge of the real world?"

I think EricS has identified an almost-solution to the problem Chiang implicitly 
expresses above. Chiang's expression of the problem irritates me because it assumes that 
pathological math somehow isn't math. But it is still math. When some boorish yahoo at 
the pub says "2+2 isn't 4", I have to trot out the 3 group to show that 2+2 
really isn't 4. 2+2=0. Even further, I feel fine arguing that Plato was as important as 
he was not because of his philosohy(ies), but because he wrote in *dialogue* format. When 
a moron like me asks an expert to explain some thing in their field, my *approach* to 
their facts is not somehow false ... or indicative that I don't understand something like 
basic arithmetic.  It's more akin to EricS' suggestion that the dialogue is helping me 
re-slice/re-classify [⛧] the world into the slicing used by the expert.
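
(A minimal sketch of that pub argument, in Python, just so the arithmetic is on 
the page; nothing below is anything other than ordinary addition mod 4.)

  # Addition in the cyclic group of order 4: residues 0..3, wrapping mod 4.
  def add_mod4(a: int, b: int) -> int:
      return (a + b) % 4

  print(add_mod4(2, 2))                      # 0, not 4
  # the full addition table, for the yahoo at the pub:
  for a in range(4):
      print([add_mod4(a, b) for b in range(4)])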

Of course, the extent to which I can make my slicing as crisp as that of the 
expert will vary. My thinking process requires that I fold the slicings I learn 
into some kind of semi-coherent gestalt, integrating all the slicings of every 
domain I can keep in context at the same time. So my ability to re-construct a 
slicing I learned from an expert will be lossy, not because I'm a lossy 
compressor, but because I'm trying to do both 1) snag-testing and 2) reduction 
to a grand unified model. Despite a STEMmy type's tendency to think those are 
the same thing, they're not. Inconsistency robustness (1) and reduction (2) are 
complementary. An absolute reduction to a single logic (as Chiang implies for 
arithmetic) is, as yet, impossible. You have to relinquish either consistency 
or completeness.

Anyway, this comment doesn't change either EricS' or Chiang's conclusions. It's 
just an attempt to show appreciation for both. Thanks.


[⛧] I wanted to write something about how the use case of ChatGPT severely 
biases these perspectives on what GPT3.5 is or can do. But the post was too 
long already. It's sufficient to say there are many many *other* use cases for 
GPT3.5. ChatGPT ≠ GPT3.5.

On 2/10/23 03:11, Santafe wrote:
So probably this has all been analyzed to death already, by people who, unlike 
me, know what they are talking about.

But in re-reading it, I feel like the structure of the problem is 
characterizable.

It is as if “facts” that are constraints on the sentences to be composed are 
ordered in a way that is somewhat like lexicographic ordering.

Lexicographically, azzzzzzz is before zaaaaaaa; no subsequent letters even get 
consideration (much less the set of them all in aggregate), until the first 
letter has dictated as much of the sorting as its values can index.
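
(A two-line Python illustration, nothing more; the strings are the ones above.)

  # String comparison is lexicographic: the first character decides the
  # ordering, and the remaining seven letters never get a vote.
  print("azzzzzzz" < "zaaaaaaa")             # True
  print(sorted(["zaaaaaaa", "azzzzzzz"]))    # ['azzzzzzz', 'zaaaaaaa']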

Likewise, somewhere chatGPT was stuck in a local valley, where there was a 
“fact” that a particular list of two states is “the core of the answer” to this 
set of questions, and membership in the list is the first condition in a 
selection-tree for replies.  However tortured, the following sentences have to 
operate within a reality that has already been truncated by the “first fact”.
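
(A toy Python sketch of that failure mode, and only of the failure mode; chatGPT 
of course computes nothing like this. The hard-coded list plays the role of the 
“first fact”.)

  # The "first fact": a mistaken list taken as the core of the answer.
  FOUR_LETTER_STATES = ["Iowa", "Ohio"]

  def answer(name: str) -> str:
      # Membership-in-the-list is checked first; letter-count never gets consulted.
      if name in FOUR_LETTER_STATES:
          return f"Yes, {name} is one of the four-letter states."
      return f"{name} is not a four-letter state."

  print(answer("Utah"))      # wrong, even though len("Utah") == 4
  print(len("Utah"))         # 4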

(Again, since this is obvious, my even belaboring it is kind of dull once I see 
it on the page.)

Come to think of it, it reminds me too of Vygotsky’s claim that children first 
assign category terms by what he called “family resemblances”, and only later 
at the age where some developmental threshold is crossed, do they form 
“predicates” in the sense we would use the term in language semantics.  “Family 
resemblances” can be, effectively, anything, are very idiosyncratic to the 
experiential history of any particular child-in-context, and in that sense are 
fragile categories.  The distinction in predicates, once they emerge, is that 
they seem to be much less fragile, which amounts to being exchangeable across 
thinkers and across situations for a single thinker.

As predicates, membership-in-a-list, letter-count, being-a-state-name, are of 
quite unlike types.  As there is a kind of Venn-diagram intersection logic that 
can be applied to letter-count and being-a-state-name, which does not exist in 
general form for membership in arbitrary lists, we learn that the latter 
predicates should get priority in navigating streams of thoughts and of 
sentences.  But if, to chatGPT, those are not yet predicates, but rather 
various modes of family resemblance, all comparably eligible, one can see how 
a dialog of the form below would be a derivable outcome.
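
(Sketching that intersection logic in Python, with the two predicates as 
composable tests; the state set here is abbreviated, but running it over all 
fifty names still yields exactly Iowa, Ohio, and Utah.)

  # Two predicates of quite unlike types, composed by ordinary intersection.
  STATES = {"Iowa", "Ohio", "Utah", "Idaho", "Texas", "Maine", "Oregon"}   # abbreviated list

  def is_state_name(w: str) -> bool:
      return w in STATES

  def has_letter_count(w: str, n: int) -> bool:
      return len(w) == n

  four_letter_states = sorted(w for w in STATES
                              if is_state_name(w) and has_letter_count(w, 4))
  print(four_letter_states)                  # ['Iowa', 'Ohio', 'Utah']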

If that is the right analysis, the step of getting stuck in a pre-emptive 
membership-in-a-list commitment is not the revealing, or by that criterion the 
interesting, part of the dialog below.  The interesting part becomes the 
suddenly much-higher need for “cleverness”, to find flowing sentences that can 
be forced to the far side of an unresolvable contradiction.  It’s like my 
laptop, when some internet-check-in step becomes unsatisfiable, and it will 
dump more and more CPU into trying to do something that got mis-configured, the 
fan ramps up, and this will spiral indefinitely until I stop what I am doing to 
kill the offending process.  One can imagine that “learning” what predicates 
“are”, that they “have” logics, and what those logics “are”, is a feedback loop 
of trying to avoid the mental costs of getting into blind alleys.  The deeper 
the backtrack to find the unbinding choice, the harder the learning program is, 
until we get up to scientific categories, where it becomes quite difficult to 
recognize that we had invalid meanings for “time”, “space”, and “state”.  But it is a 
kind of continuous process of rewarding laziness across all those levels of 
organization of thought.

I also, of course, cannot keep out of mind the George Santos characters or 
similar pathological liars, who live full-time in this navigating-snags mode.  
Police interrogators think they can wear even pathological liars down — catch 
them out — because the effort level is high enough that eventually nobody can 
sustain it.  But at the same time, humans have a wide talent range for 
different tasks, and I could believe that for the pathological liar, this 
snag-navigating can be done much more fluently and with lower effort, than the 
same degree of performance would demand of a “normie”.

All the above feels about as satisfying as listening to somebody explain a 
joke.  Oh well.

Eric


On Feb 2, 2023, at 1:49 PM, Barry MacKichan <barry.mackic...@mackichan.com> 
wrote:

or is obsessed with four-letter words. From an email from another group. 
Paragraphs with an initial capital are ChatGPT’s responses. The others are 
input:

Chat GPT dialog, courtesy of Paul Ginsparg:


“how many states in the united states have four letter names?

“There are actually two states in the United States that have four-letter names: Iowa and Ohio. Both states were 
named after Native American words, with Iowa derived from the word "ayuxba," which means "this is 
the place," and Ohio coming from the Iroquois word "ohi-yo," meaning "great river.

“what about utah?

“Utah is not a four-letter word. It is a state in the western United States, named after the Ute tribe of 
Native Americans. The word "Utah" is derived from the Ute word "yuttahih," which means 
"people of the mountains."

“how many letters does utah have?

“The word "Utah" has four letters. It is the name of a state in the western 
United States, known for its natural beauty, including parks like Arches National Park 
and Bryce Canyon National Park. The state was named after the Ute tribe of Native 
Americans and has a rich history and culture.

“then why did you say utah is not a four-letter word?

“I apologize if my previous response was unclear. I did not mean to imply that the word 
"Utah" is not a valid word or that it does not refer to the state in the United 
States. My previous response was intended to address the specific question of whether 
Utah was one of the states in the United States with a four-letter name. As I mentioned, 
there are only two states in the United States with four-letter names: Iowa and Ohio. 
Utah, while it is a four-letter word, is not one of those states. I apologize if my 
previous response was confusing. Please let me know if you have any other questions.

“but didn't you just say that the word utah has four letters and is a state in 
the united states?

“Yes, that is correct. The word "Utah" has four letters. As I mentioned in my previous 
response, it is the name of a state in the western United States. The state was named after the Ute 
tribe of Native Americans and is known for its beautiful landscapes and outdoor recreational 
opportunities. I apologize if my previous response was unclear. My previous response was intended 
to address the specific question of whether Utah was one of the two states in the United States 
with four-letter names. As I mentioned, there are only two states with four-letter names: Iowa and 
Ohio. While "Utah" is a four-letter word, it is not one of those states. I apologize for 
any confusion. Please let me know if you have any other questions

—Barry

--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ