Or ... or ... they counter the conventional wisdom that *humans* generalize their learning or reasoning beyond text. We are the OG bots.

I do really appreciate this duality/tension:   I think you were the first to alert me to this a few thousand messages back (before LLM/GPT talk, etc. erupted here), though I vaguely remember Marcus making a (qualitatively) similar statement as well.  I think his comment was about whether human learning (early childhood in particular) was anything different from "emulation".


On 4/7/23 09:15, Steve Smith wrote:
    These findings counter the conventional wisdom that LLMs are merely statistical next-word predictors and can’t generalize their learning or reasoning beyond text.


-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
