There is much news about the accomplishments of AI, its adoption
by leading companies, and the vast sums invested in LLM research
and the construction of data centers.  Against that backdrop it is
not easy to convince people that AI has limitations and should be
used with great care.

An effective way to get people interested in the problem is to show
glaring examples of hallucinations.  If you ever encounter one,
record it and show it around.

Thinking machines have defeated the world's best human chess and Go
players.  This makes people think that AI is highly intelligent.  But
I believe the process computers use to analyze chess positions is
very different from the way large language models generate text
responses to questions.  Unfortunately both are loosely referred to
as "AI" and few people bother to look into the difference.

---

"You Can't Lick a Badger Twice": Google's AI Is Making Up
Explanations for Nonexistent Folksy Sayings - Futurism
https://futurism.com/google-ai-overviews-fake-idioms

   This is getting ridiculous.

   Apr 23, 2:17 PM EDT by Victor Tangermann

   Have you heard of the idiom "You Can't Lick a Badger Twice?"

   We haven't, either, because it doesn't exist - but Google's AI
   seemingly has.  As netizens discovered this week, adding the word
   "meaning" to nonexistent folksy sayings is causing the AI to cook
   up invented explanations for them.

   "The idiom 'you can't lick a badger twice' means you can't trick or
   deceive someone a second time after they've been tricked once,"
   Google's AI Overviews feature happily suggests. "It's a warning
   that if someone has already been deceived, they are unlikely to
   fall for the same trick again."

   Author Meaghan Wilson-Anastasios, who first noticed the bizarre bug
   in a Threads post over the weekend, found that when she asked for
   the "meaning" of the phrase "peanut butter platform heels," the AI
   feature suggested it was a "reference to a scientific experiment"
   in which "peanut butter was used to demonstrate the creation of
   diamonds under high pressure."

   There are countless other examples. ...

---

A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse
- New York Times
https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html
   

   A new wave of "reasoning" systems from companies like OpenAI is
   producing incorrect information more often. Even the companies
   don't know why.

   By Cade Metz and Karen Weise

   Cade Metz reported from San Francisco, and Karen Weise from
   Seattle.
   Published May 5, 2025; updated May 6, 2025, 4:26 p.m. ET

   Last month, an A.I. bot that handles tech support for Cursor, an
   up-and-coming tool for computer programmers, alerted several
   customers about a change in company policy. It said they were no
   longer allowed to use Cursor on more than just one computer.

   In angry posts to internet message boards, the customers
   complained. Some canceled their Cursor accounts. And some got even
   angrier when they realized what had happened: The A.I. bot had
   announced a policy change that did not exist.

   "We have no such policy. You're of course free to use Cursor on
   multiple machines," the company's chief executive and co-founder,
   Michael Truell, wrote in a Reddit post.  "Unfortunately, this is an
   incorrect response from a front-line A.I. support bot."

---

"When you know something say 'I know this.'  When you don't know say
'I don't know.'  Such is 'to know'." - "The Analects of Confucius" 

