On Wed, Nov 12, 2025 at 02:42 AM, Tom Walker wrote:

> 
> Odd that an allegedly "Large" Language Model would stop at the first
> instance it found of an attribution and not follow up with a scan of
> Peterson's text itself.

But that's not how LLMs work. An LLM isn't "trying" to tell you what you want to
know; it's constructing a response based on linguistic probability,
independently of whether that response is "correct" or not. It isn't looking
for an "answer" in any meaningful sense. Sometimes you'll get an answer that
is correct, and sometimes you won't, and there's no way to tell which you got
other than investigating independently yourself. Insofar as LLMs mimic
human-style responses, people are led to mistake plausibility for reliability,
which makes them not merely useless for research but actively damaging.
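To make the mechanism concrete, here is a deliberately crude toy sketch (not how any real model is implemented, and the probability table is entirely invented): at each step the "model" just samples the next token from a probability distribution conditioned on the preceding context, with no reference to whether the result is true.

```python
import random

# Toy illustration only: a "language model" reduced to a hand-written
# table of conditional next-token probabilities. The numbers are
# invented; a real model learns billions of parameters instead.
next_token_probs = {
    "the capital of": {"France": 0.6, "Freedonia": 0.4},
    "France": {"is": 1.0},
    "Freedonia": {"is": 1.0},
    "is": {"Paris.": 0.7, "Lyon.": 0.3},
}

def generate(prompt, steps=3, seed=None):
    """Sample tokens one at a time, weighted purely by probability."""
    rng = random.Random(seed)
    context = prompt
    out = []
    for _ in range(steps):
        dist = next_token_probs.get(context)
        if dist is None:
            break
        tokens, weights = zip(*dist.items())
        tok = rng.choices(tokens, weights=weights)[0]
        out.append(tok)
        context = tok  # condition the next step on what was just emitted
    return " ".join(out)
```

Note that "Freedonia is Paris." is a perfectly probable output of this procedure; nothing in the mechanism ever checks the sentence against the world, which is the point.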


-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#39232): https://groups.io/g/marxmail/message/39232

