Tom, Dan, and Helmut,

We must distinguish three different systems: (1) the Large Language Models 
(LLMs), which are derived from large volumes of texts; (2) ChatGPT and other 
systems that use the LLMs for various purposes; and (3) the human brain + body 
+ all the human experience of interacting with the world.

1. The LLMs are derived by tensor calculus to establish a huge collection of 
sentence patterns.  That technology was originally designed by Google for 
machine translation of natural languages.  The LLMs are also useful for 
translating artificial languages, such as many versions of logic and other 
kinds of notations used by various computer systems.

2. Many computational systems process the LLMs to serve various purposes.  
The original versions of GPT 1, 2, 3, and 3.5 did very little processing beyond 
creating and using the collection of LLMs.  But many people around the world 
did a huge amount of work in developing an open-ended variety of applications 
with those LLMs.

3. Psychologists, neuroscientists, philosophers, and linguists have 
collaborated for centuries on trying to understand the underlying principles of 
the human use of language.  A century ago, Peirce discovered and formulated 
some fundamental principles and guidelines for analyzing, relating, and 
understanding all these issues.

Tom> ChatGPT did not evolve naturally, but was developed by humans who 
certainly do understand how language works. Those humans fed ChatGPT vast 
amounts of carefully curated (not random) examples of human language and images.

Google had a large number of people with various backgrounds, including 
linguistics.  But the development of LLMs was primarily designed for machine 
translation.  It does not depend in any way on any linguistic theory or logical 
theory.  It just computes the probability of the next symbol (word, morpheme, 
affix, or whatever may be used in any kind of notation).
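
To make that point concrete, here is a toy Python sketch of what "computing 
the probability of the next symbol" means.  It uses simple bigram counts over 
a tiny word list; a real LLM computes this kind of distribution with a trained 
neural network over subword tokens, so everything below is illustrative only.

    # Toy sketch: estimate the probability of the next symbol from
    # bigram counts.  Real LLMs use a trained neural network over
    # subword tokens; this only illustrates the basic idea.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count how often each token follows each preceding token.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_token_probs(prev):
        """Return {token: probability} for what may follow prev."""
        counts = follows[prev]
        total = sum(counts.values())
        return {tok: n / total for tok, n in counts.items()}

    print(next_token_probs("the"))
    # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}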

There was very little curation, other than an attempt to get a representative 
sample of the many kinds of documents.  Copyright and other legal issues have 
no influence on the accuracy of how LLM technology works.

Tom> To the extent ChatGPT "learns" language, its success depends upon the *a 
priori element provided by humans. This a priori element is the equivalent of 
an "innate" potential or quality.

No.  There is no correspondence whatever between the way children learn and 
the way GPT develops.  At every step, from the earliest days of GPT-1, every 
sentence generated was grammatical.  But children do not use any grammatical 
features that they don't yet understand.  The psycholinguists have much deeper 
insights into the nature of language than Chomsky.  The LLMs provide zero 
insight into the nature of language.

Tom> It appears that ChatGPT infers from the uses of signs in a multitude of 
settings -- many of which represent unsuccessful, failed, or irrelevant 
efforts.  It seems that Peircean inferences about language would revolve around 
pragmatic meanings.

The derivation of LLMs and their step-by-step generation of text do not depend 
on anything related to logic, meaning, or reasoning.  It just generates one 
token after another.  For machine translation, Google linguists determined 
what significant prefixes, infixes, or suffixes should be distinguished in any 
word form.  After that, the LLMs are just based on patterns of those items.
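
As a rough illustration of "one token after another," here is a minimal 
generation loop built on the next_token_probs sketch above.  It uses greedy 
decoding (always taking the most probable continuation); real systems sample 
from a neural model's distribution, so this shows only the control flow, not 
any actual implementation.

    # Minimal sketch of token-by-token generation, reusing the
    # next_token_probs function from the earlier sketch.  Greedy
    # decoding: always append the most probable next token.
    def generate(start, steps=6):
        out = [start]
        for _ in range(steps):
            probs = next_token_probs(out[-1])
            if not probs:          # no known continuation; stop
                break
            out.append(max(probs, key=probs.get))
        return " ".join(out)

    print(generate("the"))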

But ChatGPT does some significant processing that uses various methods of 
reasoning.  It may use any methods that any programmer may invent.  It cannot 
be used to support or refute any theory of linguistics, psychology, or 
philosophy of any kind.

John

From: "Thomas903" <ozzie...@gmail.com>
Sent: 7/18/23 2:45 PM

Dan,

I wanted to comment briefly on a sentence from your earlier posting:
"ChatGPT simply and conclusively shows that there is no need for any innate 
learning module in the brain to learn language."

1- ChatGPT did not evolve naturally, but was developed by humans who certainly 
do understand how language works. Those humans fed ChatGPT vast amounts of 
carefully curated (not random) examples of human language and images.   
Evidence that digital computers and software can learn language on their own is 
therefore absent.  To the extent ChatGPT "learns" language, its success depends 
upon the *a priori element provided by humans. This a priori element is the 
equivalent of an "innate" potential or quality.

2- ChatGPT is a tool.  Tools do not act on their own, or learn on their own.  
They have no intentions, no interests, no responsibilities.  They are directed 
by their users/operators.  Without direction, they learn nothing.

3- It is well known that ChatGPT frequently commits gross/obvious errors, and 
those gross errors are pragmatic evidence that it has failed at learning the 
language. Pattern recognition & matching may be a better description of what it 
does.  (Does ChatGPT ever invent new words?)

4- According to press reports, ChatGPT depends upon the use (scanning) of 
*stolen articles, books, etc.  So the developers of ChatGPT do not have a 
morality/ethics algorithm, and neither does ChatGPT.  This correspondence is 
direct evidence that the potentials/qualities of ChatGPT are the *same as the 
potentials/qualities provided by its developers/users. That correspondence 
principle applies to ChatGPT's language potentials, too (I believe).

I agree with your closing sentence that ChatGPT is inferring from signs, which 
you refer to as Peircean, but do not perceive that it is inferring from the 
*meaning of signs, which reflect pragmatic objectives.  It appears that ChatGPT 
infers from the uses of signs in a multitude of settings -- many of which 
represent unsuccessful, failed, or irrelevant efforts.  It seems that Peircean 
inferences about language would revolve around pragmatic meanings.

Thanks
Tom Wyrick

On Wed, Apr 19, 2023 at 12:37 PM Dan Everett <danleveret...@gmail.com> wrote:
ChatGPT simply and conclusively shows that there is no need for any innate 
learning module in the brain to learn language. Here is the paper on it that 
states this best. https://ling.auf.net/lingbuzz/007180

From a Peircean perspective, it is important to realize that this works by 
inference over signs.

Dan

On Apr 19, 2023, at 12:58 PM, Helmut Raulien <h.raul...@gmx.de> wrote:
Dan, list,

OK, so it is as I wrote: "or it is so, that ChatGPT is somehow referred to 
universal logic as well, builds its linguistic competence up from there, and so 
can skip the human grammar-module".  But that is neither witchcraft, nor does 
it say that there is no human-genetic grammar module.  And I too hope, with the 
Linguist, that we don't have to fear ChatGPT any more than we have to fear a 
refrigerator.

Best
Helmut