OpenAI just published a paper on hallucinations
<https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4aaa5/why-language-models-hallucinate.pdf>
as well as a post summarizing it
<https://openai.com/index/why-language-models-hallucinate/>. Both seem
wrong-headed in such a simple and obvious way that I'm surprised the issue
they discuss is still alive.

The paper and post point out that LLMs are trained to generate fluent
language--which they do extraordinarily well. They also point out that LLMs
are not trained to distinguish valid statements from invalid ones. Given
those facts, it's not clear why one should expect LLMs to be able to tell
true statements from false ones--and hence why one should expect to be able
to prevent them from hallucinating.

In other words, LLMs are built to generate text; they are not built to
understand the texts they generate, and certainly not to determine whether
those texts make factually correct or incorrect statements.
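
To make that concrete, here is a minimal sketch (my own illustration, not
something from the paper or post, and assuming PyTorch) of the standard
next-token training objective. The loss rewards assigning high probability
to whatever token actually came next in the training text; no term anywhere
asks whether the resulting statement is true.

    # Minimal sketch of the usual next-token objective (illustrative only).
    import torch
    import torch.nn.functional as F

    vocab_size = 10
    logits = torch.randn(1, vocab_size)   # model's scores for the next token
    observed_next = torch.tensor([3])     # whatever token the corpus contained

    # Cross-entropy depends only on the probability assigned to the observed
    # token. Nothing here measures factual correctness.
    loss = F.cross_entropy(logits, observed_next)
    print(loss.item())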

Please see my post
<https://russabbott.substack.com/p/why-language-models-hallucinate-according>
elaborating on this.

Why is this not obvious, and why is OpenAI still talking about it?

-- Russ Abbott <https://russabbott.substack.com/>  (Click for my Substack)
Professor Emeritus, Computer Science
California State University, Los Angeles
