List,

In a brief article, "How Does A.I. Think? Here’s One Theory" in the New
York Times today, Peter Coy, after noting that "Computer scientists are
continually surprised by the creativity displayed by new generations of
A.I.," comments  on one hypothesis that might help explain that
'creativity', namely, that AI is using abduction in its machine reasoning.
He writes:

One hypothesis for how large language models such as o1 think is that they
use what logicians call abduction, or abductive reasoning. Deduction is
reasoning from general laws to specific conclusions. Induction is the
opposite, reasoning from the specific to the general.


Abduction isn’t as well known, but it’s common in daily life, not to
mention possibly inside A.I. It’s inferring the most likely explanation for
a given observation. Unlike deduction, which is a straightforward
procedure, and induction, which can be purely statistical, abduction
requires creativity.


The planet Neptune was discovered through abductive reasoning, when two
astronomers independently hypothesized that its existence was the most
likely explanation for perturbations in the orbit of its inner neighbor,
Uranus. Abduction is also the thought process jurors often use when they
decide if a defendant is guilty beyond a reasonable doubt.


Indeed, Peirce argues in the 1903 Lectures on Pragmatism that only abduction
"introduces any new idea" into a scientific inquiry:

" Abduction is the process of forming an explanatory hypothesis. It is the
only logical operation which introduces any new idea; for induction does
nothing but determine a value, and deduction merely evolves the necessary
consequences of a pure hypothesis."


I had always thought of abduction as the unique domain of the individual
scientist, the creative genius (say, Newton or Einstein) who, fully versed
in the most important relevant findings in his field, retroductively
connects those pieces of scientific information to posit a testable
hypothesis concerning an unresolved question in science.

But it makes sense that an AI program employing large databases might
indeed be able to 'scan' those vast stores of information, connect the
salient pieces, and posit a hypothesis (or some other abductive idea).
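
To make the idea concrete, here is a minimal sketch of my own (nothing from
Coy's article, and only a crude stand-in for Peirce's account): abduction
modeled as picking the candidate hypothesis with the highest unnormalized
posterior, P(H) * P(observation | H). The hypotheses, priors, and
likelihoods below are all invented for the Neptune example:

# Abduction as inference to the best explanation, modeled (crudely)
# as maximum-a-posteriori hypothesis selection. All numbers are
# illustrative, not historical.

# Candidate explanations for the observed perturbations in Uranus's
# orbit, with invented prior probabilities P(H).
priors = {
    "unknown outer planet": 0.30,
    "error in Newtonian gravity": 0.10,
    "observational error": 0.60,
}

# Invented likelihoods P(observation | H): how well each hypothesis
# would account for the systematic perturbations actually recorded.
likelihoods = {
    "unknown outer planet": 0.90,
    "error in Newtonian gravity": 0.40,
    "observational error": 0.05,
}

def best_explanation(priors, likelihoods):
    """Return the hypothesis maximizing P(H) * P(obs | H) --
    one way to read 'inferring the most likely explanation'."""
    return max(priors, key=lambda h: priors[h] * likelihoods[h])

print(best_explanation(priors, likelihoods))
# -> unknown outer planet  (scores: 0.27 vs. 0.04 vs. 0.03)

Of course, on Peirce's view the creative step lies in generating the
candidate hypotheses in the first place, not merely in scoring them; the
sketch captures only the selection half of abduction.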

Any thoughts on this? For example: Is it potentially a valuable feature and
power of AI and, thus, a benefit to us (the use of AI in medical research
would tend to support this view)? Is it a potential danger to us (some AI programs
have been seen to lie, to 'hide' some findings, etc.; might this get out
of control)? If AI can create testable hypotheses, is the role of the
'creative' scientist jeopardized?

Best,

Gary R
_ _ _ _ _ _ _ _ _ _
ARISBE: THE PEIRCE GATEWAY is now at 
https://cspeirce.com  and, just as well, at 
https://www.cspeirce.com .  It'll take a while to repair / update all the links!
