List, Gary:
A note of caution is due with respect to AI and the chemical sciences, if
I may offer a few reasonable conjectures.
The logic of modern chemical graph theory is rather remote from CSP’s
notions of Graphs. More specifically, the identity of all molecules is
specified by a mathematically precise object that connects all the parts of
the whole into an exact spatial pattern.
If the sentence describing a semio-chemical object contains a large number
of predicates, then simple permutations of the sequences in the predicate
generate new “molecules”. Of course, these are only potentially existent
objects, flights of the imagination of a "scientific artist / armchair
chemist." The real work requires going into the laboratory and producing
tangible objects.
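The combinatorial point above can be made concrete with a toy sketch. The scaffold, site names, and substituent groups below are purely illustrative stand-ins, not real chemistry: permuting a handful of substituents over a fixed scaffold already names many distinct candidate "molecules", all merely possible objects until someone synthesizes them.

```python
from itertools import permutations

# Toy illustration: a fixed three-site scaffold and three distinct
# substituent groups. Each permutation of substituents over sites
# names a different candidate "molecule".
scaffold_sites = ["site1", "site2", "site3"]
substituents = ["OH", "NH2", "CH3"]

candidates = [
    dict(zip(scaffold_sites, order))
    for order in permutations(substituents)
]

for c in candidates:
    print(c)

# Three sites already yield 3! = 6 candidates; n sites yield n!,
# so the space of imaginable structures grows factorially.
print(len(candidates))  # 6
```

Nothing here guarantees that any candidate is chemically stable or synthesizable; that is exactly the gap between the armchair and the laboratory.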
Nevertheless, it is already clear that AI will continue to make major
contributions to the design phase of chemical graphs. And, presumably, this
will evolve into automated laboratory synthetic methods.
One easy prediction is that within a decade or two, drug-producing
machines will be commercially available. Affordable, DIY versions of
narcotics are on our long-range horizon! “Meth-labs” could move into the
middle/upper classes?
Looking more broadly, hallucinations in many areas of application pose
all manner of risks… how does the individual citizen sort out the
reasonableness / riskiness of a response to an innocent query?
And, who will be held liable in our courts of law?
Cheers
Jerry
On Dec 30, 2024, at 1:44 PM, Gary Richmond <[email protected]>
wrote:
Frederik, List,
Lately I've been reading articles, both scientific and popular, about
A.I., which appear in my inbox nearly daily. A couple of days ago a long
article by William Broad appeared in the New York Times which discussed
A.I. 'hallucinations' in a way that helped clarify the meaning of that
term, its value in certain areas of research, and why some researchers find
the term itself problematic. I thought a few brief excerpts from it might
prove helpful in the current discussion.
Best,
Gary R
Excerpts from "How Hallucinatory A.I. Helps Science Dream Up Big
Breakthroughs"
by William J. Broad
https://www.nytimes.com/2024/12/23/science/ai-hallucinations-science.html?campaign_id=2&emc=edit_th_20241224&instance_id=143081&nl=today%27s-headlines&regi_id=68716072&segment_id=186526&user_id=b1422b225dd9c2c469ac06c116c9fb08
A.I. hallucinations are reinvigorating the creative side of science. They
speed the process by which scientists and inventors dream up new ideas and
test them to see if reality concurs. It’s the scientific method — only
supercharged. What once took years can now be done in days, hours and
minutes. In some cases, the accelerated cycles of inquiry help scientists
open new frontiers.
“We’re exploring,” said James J. Collins, an M.I.T. professor who recently
praised hallucinations for speeding his research into novel antibiotics.
“We’re asking the models to come up with completely new molecules.”
The A.I. hallucinations arise when scientists teach generative computer
models about a particular subject and then let the machines rework that
information. The results can range from subtle and wrongheaded to surreal.
At times, they lead to major discoveries.
In October, David Baker of the University of Washington shared the Nobel
Prize in Chemistry for his pioneering research on proteins — the knotty
molecules that empower life. The Nobel committee praised him for
discovering how to rapidly build completely new kinds of proteins not found
in nature, calling his feat “almost impossible.”
In an interview before the prize announcement, Dr. Baker cited bursts of
A.I. imaginings as central to “making proteins from scratch.” The new
technology, he added, has helped his lab obtain roughly 100 patents, many
for medical care. One is for a new way to treat cancer. Another seeks to
aid the global war on viral infections. Dr. Baker has also founded or
helped start more than 20 biotech companies.
“Things are moving fast,” he said. “Even scientists who do proteins for a
living don’t know how far things have come.” How many proteins has his lab
designed? “Ten million — all brand-new,” he replied. “They don’t occur in
nature.”
The word [hallucinations] also gets frowned on because it can evoke the
bad old days of hallucinations from LSD and other psychedelic drugs, which
scared off reputable scientists for decades. A final downside is that
scientific and medical communications generated by A.I. can, like chatbot
replies, get clouded by false information.
*****
Researchers at the University of Texas at Austin have also embraced the
term. “Learning from Hallucination,” read the title of their paper on
improving robot navigation.
And the head of the science division at DeepMind, a Google company in
London that develops A.I. applications, praised hallucinations as promoting
discovery, doing so shortly after two of his colleagues shared this year’s
Nobel Prize in Chemistry with Dr. Baker.
“We have this amazing tool which can exhibit creativity,” the DeepMind
official, Pushmeet Kohli, said in an interview.
Despite the allure of A.I. hallucinations for discovery, some scientists
find the word itself misleading. They see the imaginings of generative A.I.
models not as illusory but prospective — as having some chance of coming
true, not unlike the conjectures made in the early stages of the scientific
method. They see the term hallucination as inaccurate, and thus avoid using
it.
On Wed, Dec 18, 2024 at 5:12 AM Tuezuen Alican <[email protected]>
wrote:
Dear Frederik,
I was also confused earlier. However, it looks like it's working.
Best,
Alican
*From:* Frederik Stjernfelt <[email protected]>
*Sent:* Wednesday, 18 December 2024 11:11
*To:* Tuezuen Alican <[email protected]>; Mike Bergman <
[email protected]>; Gary Richmond <[email protected]>; Peirce-L <
[email protected]>
*Subject:* Re: [PEIRCE-L] AI and abduction
Dear Gary R –
Did you receive my message below? I was informed that delivery to
peirce-l was delayed and my posting was returned by
[email protected]
Best
F
Dear Mike, Gary, Tuezuen, list –
This is a great idea. This would also explain why LLMs “hallucinate” so
much as they do, as abduction is neither necessary (like deduction) nor
probable (like induction). Peirce, of course, stresses that abduction is
indeed the source of new ideas but that it offers no assurance of their
truth which has to be established by ensuing investigation using de- and
inductions.
I have only experimented with the free versions of ChatGPT and they are,
indeed, highly error-prone.
I tend to prefer the program Perplexity which is connected to a search
engine which it utilizes to provide references to where it scraped its
information.
Best
Frederik
Frederik Stjernfelt: *Sheets, Diagrams, and Realism in Peirce* – De
Gruyter 2022
- “Peirce as a Philosopher of AI”, in Olteanu
et al.: *Philosophy of AI*, forthcoming
*From: *<[email protected]> on behalf of Tuezuen Alican <
[email protected]>
*Reply to: *Tuezuen Alican <[email protected]>
*Date: *Wednesday, 18 December 2024, 08:43
*To: *Mike Bergman <[email protected]>, Gary Richmond <
[email protected]>, Peirce-L <[email protected]>
*Subject: *RE: [PEIRCE-L] AI and abduction
Dear Mike and Gary,
If I’m not mistaken, John Sowa already utilizes LLMs this way. He argues
that LLMs are great for abductive conclusions, and later, with an Ontology,
he checks whether that “hypothesis” is true or not. At least, that’s my
interpretation of his work.
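The generate-then-check workflow described here can be sketched in a few lines. This is a hypothetical toy, not Sowa's actual tooling: propose() stands in for an LLM that abduces candidate hypotheses, and a small fact table stands in for the ontology used to filter them.

```python
# Minimal generate-then-check loop: an LLM-like step proposes
# hypotheses (abduction); an ontology-style lookup filters them.
# All names and facts here are illustrative stand-ins.

ONTOLOGY = {
    ("penguin", "can"): "swim",
    ("sparrow", "can"): "fly",
}

def propose(observation):
    """Stand-in for an LLM: return candidate (subject, relation, value)
    explanations for the observation."""
    return [
        ("penguin", "can", "fly"),
        ("penguin", "can", "swim"),
    ]

def consistent(hypothesis):
    """Accept a hypothesis only if the ontology records it directly."""
    subject, relation, value = hypothesis
    return ONTOLOGY.get((subject, relation)) == value

observation = "the animal at the pole moves through water"
accepted = [h for h in propose(observation) if consistent(h)]
print(accepted)  # [('penguin', 'can', 'swim')]
```

The division of labor mirrors the Peircean point in this thread: generation is cheap and unreliable; the check against an independent knowledge structure is what confers any assurance.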
@Mike Bergman <[email protected]>, sorry for the duplication; I pressed
reply instead of replying to everyone.
Best Regards,
*Dipl.-Ing. Alican Tüzün, BSc*
PhD Candidate
*University of Applied Sciences Upper Austria*
*Josef Ressel Centre for Data-Driven Business Model Innovation*
Wehrgrabengasse 1-3
4400 Steyr/Austria
LinkedIn: https://www.linkedin.com/in/t%C3%BCz%C3%BCnalican/
Phone: +43 5 0804 33813
Mobil: +43 681 20775431
E-Mail: [email protected]
Web: www.fh-ooe.at <http://www.fh-ooe.at/imm>
Web: https://coe-sp.fh-ooe.at/
*From:* [email protected] <[email protected]>
*On Behalf Of *Mike Bergman
*Sent:* Wednesday, 18 December 2024 02:52
*To:* Gary Richmond <[email protected]>; Peirce-L <
[email protected]>
*Subject:* Re: [PEIRCE-L] AI and abduction
Hi Gary,
This is a topic near and dear to me, and one I am very actively
investigating (and using) personally (mostly with ChatGPT 4-o1, but also
the latest version of Grok). My first observation, granted based on my
sample of one, is that abductive reasoning in a Peircean sense is lacking
with current LLMs (large language models), as is true for all general ML or
AI approaches. Machine learning and deep learning have been mostly an
inductive process IMO. A major gap I have seen for quite some time has been
the lack of abductive reasoning in most ML and AI activities of recent
vintage.
This assertion is most evident in the lack of "new" hypothesis generation
by these systems, the critical discriminator that you correctly point out
from Peirce. One can prompt these new chat AIs with new hypotheses, and in
that form, they are very helpful and useful. It is for these reasons that I
tend to treat current chat AIs as dedicated research assistants: able to
provide very useful background legwork, including some answers that
stimulate further questions and thoughts, often in a rapid fire
give-and-take manner, but ones that are not creative in and of themselves
aside from making some non-evident connections.
I believe that better matching of current chat AIs with Peirce's thinking
(esp abductive reasoning as he defined) is a particularly rich vein for
next generation stuff. Lastly, my own personal view is that the current
state of the art is not "dangerous", but we are also seeing very rapid
increases of what Ilya Sutskever
<https://en.wikipedia.org/wiki/Ilya_Sutskever> calls
"superintelligence", the speed of which is pretty breathtaking. We may be
close to tapping out on this current phase with most Internet content
already captured for training, but like with LLMs, there are certainly new
innovations not yet foreseen that may continue to maintain this Moore's
law <https://en.wikipedia.org/wiki/Moore%27s_law>-like pace of
improvements.
Best, Mike
On 12/17/2024 6:00 PM, Gary Richmond wrote:
List,
In a brief article, "How Does A.I. Think? Here’s One Theory" in the New
York Times today, Peter Coy, after noting that "Computer scientists are
continually surprised by the creativity displayed by new generations of
A.I.," comments on one hypothesis that might help explain that
'creativity', namely, that AI is using abduction in its machine reasoning.
He writes:
One hypothesis for how large language models such as o1 think is that
they use what logicians call abduction, or abductive reasoning. Deduction
is reasoning from general laws to specific conclusions. Induction is the
opposite, reasoning from the specific to the general.
Abduction isn’t as well known, but it’s common in daily life, not to
mention possibly inside A.I. It’s inferring the most likely explanation for
a given observation. Unlike deduction, which is a straightforward
procedure, and induction, which can be purely statistical, abduction
requires creativity.
The planet Neptune was discovered through abductive reasoning, when two
astronomers independently hypothesized that its existence was the most
likely explanation for perturbations in the orbit of its inner neighbor,
Uranus. Abduction is also the thought process jurors often use when they
decide if a defendant is guilty beyond a reasonable doubt.
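Coy's gloss of abduction as "inferring the most likely explanation" can be rendered as a tiny scoring sketch. The candidate hypotheses echo the Neptune episode, but the prior and likelihood numbers are invented for illustration only.

```python
# Toy "inference to the most likely explanation": score each candidate
# hypothesis by prior * likelihood of the observed orbital perturbation
# and pick the best. The numbers are illustrative, not historical.

hypotheses = {
    # hypothesis: (prior, likelihood of the observation)
    "measurement error": (0.50, 0.05),
    "unknown planet beyond Uranus": (0.10, 0.90),
    "breakdown of Newtonian gravity": (0.01, 0.50),
}

def best_explanation(hyps):
    return max(hyps, key=lambda h: hyps[h][0] * hyps[h][1])

print(best_explanation(hypotheses))  # unknown planet beyond Uranus
```

As the surrounding discussion notes, the creative step is not this arithmetic but supplying the candidate hypotheses in the first place; the scoring only adjudicates among explanations someone has already imagined.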
Yet Peirce argues in the 1903 Lectures on Pragmatism that only abduction
"introduces any new idea" into a scientific inquiry:
" Abduction is the process of forming an explanatory hypothesis. It is
the only logical operation which introduces any new idea; for induction
does nothing but determine a value, and deduction merely evolves the
necessary consequences of a pure hypothesis."
I had always thought of abduction as the unique domain of the individual
scientist, the creative genius (say, Newton or Einstein) who, fully versed
in the most important relevant findings in his field, retroductively
connects those pieces of scientific information to posit a testable
hypothesis concerning an unresolved question in science.
But it makes sense that an AI program employing large data bases might
indeed be able to 'scan' those huge, multitudinous bases, connect the
salient information, and posit an hypothesis (or some other abductive idea).
Any thoughts on this? For example: Is it potentially a valuable feature
and power of AI and, thus, for us (the use of AI in medical research would
tend to support this view)? Is it a potential danger to us (some AI
programs have been seen to lie, to 'hide' some findings, etc.; might this
get out of control)? If AI can create testable hypotheses, is the role of
the 'creative' scientist jeopardized?
Best,
Gary R
_ _ _ _ _ _ _ _ _ _
ARISBE: THE PEIRCE GATEWAY is now at
https://cspeirce.com and, just as well, at
https://www.cspeirce.com . It'll take a while to repair / update all the links!
► PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON
PEIRCE-L to this message. PEIRCE-L posts should go to [email protected] .
► To UNSUBSCRIBE, send a message NOT to PEIRCE-L but to [email protected]
with UNSUBSCRIBE PEIRCE-L in the SUBJECT LINE of the message and nothing in the
body. More at https://list.iupui.edu/sympa/help/user-signoff.html .
► PEIRCE-L is owned by THE PEIRCE GROUP; moderated by Gary Richmond; and
co-managed by him and Ben Udell.
--
__________________________________________
Michael K. Bergman
319.621.5225
http://mkbergman.com
http://www.linkedin.com/in/mkbergman
__________________________________________