Thank you Roger.
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
Interesting that I downloaded this paper this morning as well. I haven't yet
looked to see how much of it I can understand, but from the little I know,
it appears to be related to the self-attention mechanism in transformers.
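
In case it helps others on the list, here is a minimal numpy sketch of the
standard scaled dot-product self-attention that transformers use. The
projection matrices Wq, Wk, Wv and the shapes are the textbook setup, not
anything taken from this particular paper:

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention.
    # X: (n_tokens, d_model); Wq/Wk/Wv: (d_model, d_head).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise query-key relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)        # softmax over key positions
    return w @ V                              # each token becomes a weighted mix of values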
-- Russ Abbott
Professor Emeritus, Computer Science
California
You know what endlessly fascinates me? The way large language models are
like those magic growth pills you see in cartoons. Just add some extra
data, give it a stir, and voila! Emergent abilities appear out of thin air.
It's like watching a kid turn into a superhero overnight.
I quote verbatim
https://arxiv.org/pdf/2304.14767.pdf
I am pretty much in over my head in this literature, but I continue to be
fascinated as I watch people who are not try to untangle some explanatory
power from their models...
The details of this analysis or framing this as /information flow/
rather than /static
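
If I'm reading it right, the /information flow/ framing amounts to knocking
out attention edges between token positions and watching how the output
changes. Here is a toy numpy sketch of that idea, my own illustration under
that assumption, not the authors' code:

import numpy as np

rng = np.random.default_rng(0)

def attention(X, Wq, Wk, Wv, blocked=()):
    # Toy single-head attention; `blocked` is a list of (query_pos, key_pos)
    # edges to knock out, i.e. places where information flow is forbidden.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    for q, k in blocked:
        scores[q, k] = -1e9                   # softmax sends this weight to ~0
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

n, d = 5, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

base = attention(X, Wq, Wk, Wv)
cut = attention(X, Wq, Wk, Wv, blocked=[(n - 1, 2)])  # last token can't read position 2
print(np.linalg.norm(base[-1] - cut[-1]))  # big change => that edge carried information

In a real model you would presumably do this per layer and per head and
measure the drop in the correct token's probability, but the toy version
shows the mechanics.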