If hypothesis generation is one form of abduction, which I think it is, then
it’s clear enough that AI has been doing that for some time. But it doesn’t do
it the same way that humans do, i.e. according to the “light of nature,” as
Peirce affirmed (EP2:54-5, CP 5.589 and elsewhere), because AI intelligence is
inorganic.
This is one of several points that Yuval Harari makes in Nexus, which traces
the development of information networks through history. They have been used in
the service of either truth or social order, with the emphasis shifting back
and forth at various times, and AI represents a radical shift in the nature of
these networks. This suggests some plausible answers to the questions
originally posed by Gary at the beginning of this thread.
Love, gary f.
Coming from the ancestral lands of the Anishinaabeg
} What a thing means is simply what habits it involves. [Peirce] {
https://gnusystems.ca/wp/ }{
<https://gnusystems.ca/TS/> Turning Signs
From: [email protected] <[email protected]> On
Behalf Of Gary Richmond
Sent: 18-Dec-24 15:40
To: Frederik Stjernfelt <[email protected]>
Cc: Tuezuen Alican <[email protected]>; Mike Bergman
<[email protected]>; Peirce-L <[email protected]>
Subject: Re: [PEIRCE-L] AI and abduction
Frederik, Mike, Tuezuen, Daniel, List,
For the last couple of years I have been dabbling in various AI programs
including those which generate visual images including diagrams. I have mainly
used ChatGPT for any number of purposes but principally for information
gathering (it hasn't completely replaced search engines and Wikipedia, but I
use it frequently when I'm looking for specific information and not, say, the
kind of overview which Wikipedia offers).
Frederik, your observation that LLMs frequently 'hallucinate' since "abduction
is neither necessary (like deduction) nor probable (like induction)" strikes me
as apt. On the other hand, abduction viewed as retroduction (reasoning from
effect to cause) seems nearly 'tailor-made' for AI, which can search myriad
databases for connections which might prompt plausible hypotheses. Does that
seem correct?
In my first post in this thread I noted how AI has proven useful in generating
hypotheses in the medical field. (I've probably read more about this field of
AI hypothesis generation than any other because of some tech-friendly,
AI-enthusiastic physicians I know in the NYU Langone health system.)
I'm still quite interested in List members' thoughts about the 3 questions I
concluded my original post with, namely: 1. How potentially valuable do you
think AI will be in various disciplines, especially those fields in which one
has some expertise? 2. What are the potential dangers of AI? To which I'd add:
Can they be circumvented? How? 3. Is the role of the scientist (the creative
'hypothesizer') jeopardized?
The rest of this post is a ChatGPT outline of some of the fields in which AI
has successfully generated hypotheses (including examples) and, in some cases,
even tested these hypotheses.
*****
AI has shown considerable success in generating hypotheses across a variety of
fields, disciplines, and sciences. Here are some of the most notable areas:
1. Healthcare and Medicine
* Drug Discovery: AI has been instrumental in identifying new drug
candidates and repurposing existing drugs. Examples include identifying
potential treatments for diseases like COVID-19 and rare genetic disorders.
* Diagnostics: AI models have proposed novel diagnostic criteria, such as
identifying biomarkers for diseases like cancer or Alzheimer's from genomic or
imaging data.
* Genomics: AI is used to hypothesize relationships between genes and
diseases, and predict the functional impact of genetic mutations.
2. Biology and Biotechnology
* Protein Folding: AI systems like AlphaFold have effectively solved the
protein structure prediction problem, paving the way for advancements in
molecular biology.
* Ecology: AI has helped hypothesize the effects of climate change on
ecosystems and species interactions.
* Synthetic Biology: AI-generated hypotheses guide the design of
engineered organisms for biofuels, medicine, or agriculture.
3. Physics and Astronomy
* Astrophysics: AI has helped hypothesize about the distribution of dark
matter, the structure of the universe, and the identification of exoplanets.
* Quantum Physics: AI models have been used to generate hypotheses about
material properties and quantum states.
* Particle Physics: AI helps analyze data from particle accelerators to
hypothesize about fundamental particles.
4. Chemistry and Materials Science
* Material Design: AI hypothesizes the properties of novel materials for
use in batteries, solar cells, or superconductors.
* Reaction Mechanisms: AI can propose mechanisms for complex chemical
reactions, speeding up the discovery of catalysts.
5. Social Sciences
* Behavioral Patterns: AI generates hypotheses about human behavior by
analyzing large datasets, such as social media interactions or economic
activity.
* Policy Impact: AI models help hypothesize the effects of policy changes
on societal outcomes like education, public health, or economic growth.
6. Environmental Science
* Climate Modeling: AI hypothesizes the potential effects of greenhouse
gases, deforestation, and other factors on climate change.
* Sustainability: AI generates hypotheses about renewable energy
efficiency, waste management, and conservation efforts.
7. Economics and Finance
* Market Predictions: AI hypothesizes about market trends and economic
conditions by analyzing large-scale financial data.
* Economic Modeling: AI helps explore hypotheses about income inequality,
employment trends, and consumer behavior.
8. Engineering and Technology
* Robotics: AI hypothesizes how to optimize robot design and
functionality in various environments.
* Optimization Problems: In fields like logistics, AI hypothesizes ways
to improve efficiency and reduce costs.
9. Neuroscience and Cognitive Science
* Brain Function: AI helps generate hypotheses about how neural networks
in the brain relate to behavior and cognition.
* Mental Health: AI models propose new treatments for mental illnesses
based on patterns in psychological and neurological data.
10. Education
* Personalized Learning: AI generates hypotheses about which teaching
methods or materials work best for individual learning styles.
* Curriculum Design: AI analyzes data to hypothesize about effective
curriculum structures.
11. Agriculture
* Crop Yields: AI hypothesizes how weather patterns, soil types, and
farming techniques affect yields.
* Pest Control: AI models propose sustainable methods for pest management.
12. Linguistics and Natural Language Processing
* Language Evolution: AI helps hypothesize about how languages evolve
over time.
* Semantic Analysis: AI generates hypotheses about the relationships
between linguistic structures and meanings.
By leveraging vast amounts of data, AI not only generates hypotheses but also
tests them through simulations or by guiding experimental design. Its
versatility and data-driven approach make it an invaluable tool in advancing
knowledge across disciplines.
Best,
Gary R
On Wed, Dec 18, 2024 at 5:02 AM Frederik Stjernfelt <[email protected]> wrote:
Dear Mike, Gary, Tuezuen, list –
This is a great idea. This would also explain why LLMs "hallucinate" as much as
they do, as abduction is neither necessary (like deduction) nor probable (like
induction). Peirce, of course, stresses that abduction is indeed the source of
new ideas but that it offers no assurance of their truth, which has to be
established by ensuing investigation using de- and inductions.
I have only experimented with the free versions of ChatGPT and they are,
indeed, highly error-prone.
I tend to prefer the program Perplexity, which is connected to a search engine
that it utilizes to provide references to where it scraped its information.
Best
Frederik
Frederik Stjernfelt: Sheets, Diagrams, and Realism in Peirce – De Gruyter 2022
* “Peirce as a Philosopher of AI”, in Olteanu et
al.: Philosophy of AI, forthcoming
Fra: <[email protected] <mailto:[email protected]>
> på vegne af Tuezuen Alican <[email protected]
<mailto:[email protected]> >
Svar til: Tuezuen Alican <[email protected]
<mailto:[email protected]> >
Dato: onsdag den 18. december 2024 kl. 08.43
Til: Mike Bergman <[email protected] <mailto:[email protected]> >, Gary
Richmond <[email protected] <mailto:[email protected]> >, Peirce-L
<[email protected] <mailto:[email protected]> >
Emne: RE: [PEIRCE-L] AI and abduction
Dear Mike and Gary,
If I’m not mistaken, John Sowa already utilizes LLMs this way. He argues that
LLMs are great for abductive conclusions, and later, with an Ontology, he
checks whether that “hypothesis” is true or not. At least, that’s my
interpretation of his work.
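The generate-then-verify pattern attributed to Sowa here can be sketched in
miniature. The following is a hypothetical toy, not anyone's actual system: no
real LLM or ontology framework is invoked, and the triples, class names, and
function names are invented for illustration. A mock "LLM" proposes abductive
hypotheses as subject-relation-object triples, and a tiny ontology of known
class memberships and disjointness constraints rejects those it contradicts:

```python
from typing import NamedTuple

class Triple(NamedTuple):
    subject: str
    relation: str
    obj: str

# Toy "ontology": known class memberships plus pairs of disjoint classes.
IS_A = {"whale": "Mammal", "salmon": "Fish"}
DISJOINT = {("Mammal", "Fish"), ("Fish", "Mammal")}

def consistent(h: Triple) -> bool:
    """Accept a hypothesis unless it places a subject in a class known
    to be disjoint from the class the ontology already assigns it."""
    if h.relation != "is_a":
        return True          # this toy checker only knows class membership
    known = IS_A.get(h.subject)
    if known is None or known == h.obj:
        return True          # nothing known, or already entailed
    return (known, h.obj) not in DISJOINT

def mock_llm_hypotheses() -> list:
    # Stand-in for an LLM's abductive guesses; a real system would
    # generate these from a prompt rather than hard-code them.
    return [
        Triple("whale", "is_a", "Fish"),    # plausible-sounding, but wrong
        Triple("whale", "is_a", "Mammal"),  # survives the ontology check
    ]

accepted = [h for h in mock_llm_hypotheses() if consistent(h)]
print(accepted)  # only the whale-is-a-Mammal hypothesis remains
```

The point of the sketch is only the division of labor: the generator is free to
be fallibly creative (Peircean abduction offers no assurance of truth), while a
separate deductive check against stated constraints weeds out hypotheses that
cannot be right.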
<mailto:[email protected]> @Mike Bergman, sorry for the duplication; I
pressed reply instead of replying to everyone.
Best Regards,
Dipl.-Ing. Alican Tüzün, BSc
PhD Candidate
University of Applied Sciences Upper Austria
Josef Ressel Centre for Data-Driven Business Model Innovation
Wehrgrabengasse 1-3
4400 Steyr/Austria
LinkedIn: https://www.linkedin.com/in/t%C3%BCz%C3%BCnalican/
Phone: +43 5 0804 33813
Mobil: +43 681 20775431
E-Mail: <mailto:[email protected]> [email protected]
Web: <http://www.fh-ooe.at/imm> www.fh-ooe.at
Web: <https://coe-sp.fh-ooe.at/> https://coe-sp.fh-ooe.at/
From: [email protected] <mailto:[email protected]>
<[email protected] <mailto:[email protected]> > On
Behalf Of Mike Bergman
Sent: Wednesday, 18 December 2024 02:52
To: Gary Richmond <[email protected] <mailto:[email protected]> >;
Peirce-L <[email protected] <mailto:[email protected]> >
Subject: Re: [PEIRCE-L] AI and abduction
Hi Gary,
This is a topic near and dear to me, and one I am very actively investigating
(and using) personally (mostly with ChatGPT 4-o1, but also the latest version
of Grok). My first observation, granted based on my sample of one, is that
abductive reasoning in a Peircean sense is lacking with current LLMs (large
language models), as is true for all general ML or AI approaches. Machine
learning and deep learning have been mostly an inductive process IMO. A major
gap I have seen for quite some time has been the lack of abductive reasoning in
most ML and AI activities of recent vintage.
This assertion is most evident in the lack of "new" hypothesis generation by
these systems, the critical discriminator that you correctly point out from
Peirce. One can prompt these new chat AIs with new hypotheses, and in that
form, they are very helpful and useful. It is for these reasons that I tend to
treat current chat AIs as dedicated research assistants: able to provide very
useful background legwork, including some answers that stimulate further
questions and thoughts, often in a rapid fire give-and-take manner, but ones
that are not creative in and of themselves aside from making some non-evident
connections.
I believe that better matching of current chat AIs with Peirce's thinking (esp.
abductive reasoning as he defined it) is a particularly rich vein for next
generation stuff. Lastly, my own personal view is that the current state of the
art is not "dangerous", but we are also seeing very rapid increases of what
Ilya Sutskever <https://en.wikipedia.org/wiki/Ilya_Sutskever> calls
"superintelligence", the speed of which is pretty breathtaking. We may be close
to tapping out on this current phase with most Internet content already
captured for training, but like with LLMs, there are certainly new innovations
not yet foreseen that may continue to maintain this Moore's-law-like pace of
improvements (https://en.wikipedia.org/wiki/Moore%27s_law).
Best, Mike
On 12/17/2024 6:00 PM, Gary Richmond wrote:
List,
In a brief article, "How Does A.I. Think? Here’s One Theory" in the New York
Times today, Peter Coy, after noting that "Computer scientists are continually
surprised by the creativity displayed by new generations of A.I.," comments on
one hypothesis that might help explain that 'creativity', namely, that AI is
using abduction in its machine reasoning. He writes:
One hypothesis for how large language models such as o1 think is that they use
what logicians call abduction, or abductive reasoning. Deduction is reasoning
from general laws to specific conclusions. Induction is the opposite, reasoning
from the specific to the general.
Abduction isn’t as well known, but it’s common in daily life, not to mention
possibly inside A.I. It’s inferring the most likely explanation for a given
observation. Unlike deduction, which is a straightforward procedure, and
induction, which can be purely statistical, abduction requires creativity.
The planet Neptune was discovered through abductive reasoning, when two
astronomers independently hypothesized that its existence was the most likely
explanation for perturbations in the orbit of its inner neighbor, Uranus.
Abduction is also the thought process jurors often use when they decide if a
defendant is guilty beyond a reasonable doubt.
Yet Peirce argues in the 1903 Lectures on Pragmatism that only abduction
"introduces any new idea" into a scientific inquiry:
"Abduction is the process of forming an explanatory hypothesis. It is the only
logical operation which introduces any new idea; for induction does nothing but
determine a value, and deduction merely evolves the necessary consequences of a
pure hypothesis."
I had always thought of abduction as the unique domain of the individual
scientist, the creative genius (say, Newton or Einstein) who, fully versed in
the most important relevant findings in his field, retroductively connects
those pieces of scientific information to posit a testable hypothesis
concerning an unresolved question in science.
But it makes sense that an AI program employing large databases might indeed
be able to 'scan' those huge, multitudinous bases, connect the salient
information, and posit a hypothesis (or some other abductive idea).
Any thoughts on this? For example: Is it potentially a valuable feature and
power of AI and, thus, for us (the use of AI in medical research would tend to
support this view)? Is it a potential danger to us (some AI programs have been
seen to lie, to 'hide' some findings, etc.; might this get out of control)? If
AI can create testable hypotheses, is the role of the 'creative' scientist
jeopardized?
Best,
Gary R
_ _ _ _ _ _ _ _ _ _
ARISBE: THE PEIRCE GATEWAY is now at
https://cspeirce.com and, just as well, at
https://www.cspeirce.com . It'll take a while to repair / update all the links!
► PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON
PEIRCE-L to this message. PEIRCE-L posts should go to [email protected] .
► To UNSUBSCRIBE, send a message NOT to PEIRCE-L but to [email protected]
with UNSUBSCRIBE PEIRCE-L in the SUBJECT LINE of the message and nothing in the
body. More at https://list.iupui.edu/sympa/help/user-signoff.html .
► PEIRCE-L is owned by THE PEIRCE GROUP; moderated by Gary Richmond; and
co-managed by him and Ben Udell.
--
__________________________________________
Michael K. Bergman
319.621.5225
http://mkbergman.com
http://www.linkedin.com/in/mkbergman
__________________________________________