Eric writes:

“A colleague asked me last week (not a high level of domain familiarity, but a 
good mind overall), what will science become in the age of ML, where there can 
be claims about everything, but transparency about little of it.”

What transparency does one have about a new hire’s mind? An employer may have a credential from a university, a confirmed work history, a green wall of GitHub commits on a credible open-source project, or a highly cited Google Scholar profile. None of this tells us anything about the mechanism of the person’s reasoning; it remains a black box. And yet some hold LLM-based AI to a higher standard. Why?

 

Marcus


.- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... 
--- -- . / .- .-. . / ..- ... . ..-. ..- .-..
FRIAM Applied Complexity Group listserv
Fridays 9a-12p St. Johns Cafe   /   Thursdays 9a-12p Zoom
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
          1/2003 thru 6/2021 http://friam.383.s1.nabble.com/
