Send Link mailing list submissions to
        [email protected]

To subscribe or unsubscribe via the World Wide Web, visit
        https://mailman.anu.edu.au/mailman/listinfo/link
or, via email, send a message with subject or body 'help' to
        [email protected]

You can reach the person managing the list at
        [email protected]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Link digest..."


Today's Topics:

   1. Re: Study Reveals AI Language Models' Biases in Moral
      Guidance, Urges Caution in Relying on AI Decisions (Tom Worthington)


----------------------------------------------------------------------

Message: 1
Date: Thu, 17 Jul 2025 08:13:37 +1000
From: Tom Worthington <[email protected]>
To: [email protected]
Subject: Re: [LINK] Study Reveals AI Language Models' Biases in Moral
        Guidance, Urges Caution in Relying on AI Decisions
Message-ID: <[email protected]>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On 7/13/25 11:15, Antony Barry wrote:
> https://www.brief.news/ai-research/2025/07/08/ai-bias-in-moral-guidance?categories=Tech&categories=AI&categories=Electric+Vehicles&categories=AI+Research&categories=Gadgets&date=2025-07-13&utm_source=daily-brief&utm_medium=email&source=%2Femail&eid=EgyBTlGFgS&uid=cm2x5k2i4000he6nukx44a2ye

By an amazing coincidence:

                LINK INSTITUTE LINKGRAM

                Study Reveals Basing Advice on a Random Collection of
                Documents from the Internet is not a Good Idea

Dateline Canberra, 14 July 2025: The Link Institute mourned the loss of 
two of its leading researchers today. Professor Klerphell praised 
colleagues Dr Gumbie and Professor Gullable:

        "These brave scientists gave their lives this week, in the cause of 
science. Researching cures for indigestion through the use of Large 
Language Models, the researchers fell victim to their experimental 
technique. Asking a leading LLM for chemical formulae which might treat 
common stomach upsets, they were fatally poisoned by one of their trial 
compounds."

Klerphell rejected any criticism of the lack of institute oversight:

        "These researchers were using a very common technique, now applied in 
industry and government, where LLMs are used for advice.

        "Up to now there was no evidence to suggest that taking advice from a 
collection of documents randomly compiled from the Internet could be 
harmful," Klerphell stated. Hir went on to say: "We checked with our AI 
ethics system and it said this was permitted under the Khitomer Accords. 
It is industry practice for consultants to copy random bits of 
information from wherever they can find it. It would be prohibitively 
expensive for them to check the sources, or verify the information is 
accurate. Consulting companies can't afford to employ experts in every 
field. AI is simply making that process more efficient."



-- 
Tom Worthington http://www.tomw.net.au
-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature.asc
Type: application/pgp-signature
Size: 665 bytes
Desc: OpenPGP digital signature
URL: 
<https://mailman.anu.edu.au/pipermail/link/attachments/20250717/677ef686/attachment-0001.sig>

------------------------------

Subject: Digest Footer

_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link


------------------------------

End of Link Digest, Vol 392, Issue 8
************************************
