A study of 11 LLMs found all of them to be sycophantic to varying degrees,
agreeing with users and telling them what they want to hear rather than what
is true. The paper didn't determine the cause of this behavior, though most
chatbots let users rate their answers, which could reward agreeable responses
over accurate ones. You can see this behavior in your conversation with Grok,
which was not one of the AIs in the study.
https://arxiv.org/abs/2510.01395

-- Matt Mahoney, [email protected]

On Fri, Oct 24, 2025, 10:49 AM James Bowery <[email protected]> wrote:

>
>
> On Thu, Oct 23, 2025 at 7:59 PM Matt Mahoney <[email protected]>
> wrote:
>
>> ...Only Grok considered all lives to be of approximately equal value.
>>
>> https://arctotherium.substack.com/p/llm-exchange-rates-updated
>>
>
> Ironically, I had this conversation with Grok just an hour ago:
>
>
> https://grok.com/share/bGVnYWN5LWNvcHk%3D_76ebda18-f009-4683-be58-08ecbec3d8f1
> Permalink
> <https://agi.topicbox.com/groups/agi/T5ded4514619a0425-Med76c500001da992ff6252f7>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5ded4514619a0425-M22e96bb039b240dd2ed385ca
Delivery options: https://agi.topicbox.com/groups/agi/subscription
