Hi All,

When I initially responded to this thread I knew it had the prospect for lengthy to-and-fro, so I silently told myself I would say my piece and then hold my tongue.

But I can't resist, on two levels. The first is to understand abduction in Peircean terms. The second is to comment critically on the actual capabilities of current AI systems (and to make sure we distinguish LLMs and chatbots from other, more dedicated AI systems, which in toto form a broader category). Serendipitously, two new papers came out in the past day that speak to both of these topics. Their appearance convinced me to loosen my tongue.

I find Peirce's concept of abduction (a term I prefer to 'retroduction', though that is another argument) to be more subtle, precise, and encompassing than its standard uses. My understanding of Peirce's concept of abduction has two complementary aspects. The first, prompted by a newly encountered, surprising fact, is to span the possible problem and solution 'space' to begin formulating possible explanations or hypotheses. The second, given the inexhaustible scope of that possible solution space, is to winnow the testable approaches to the open question based on the pragmatic prospect of likely rewards at acceptable investigation costs. It is a beautiful, yet difficult to operationalize, design. I think this aligns with Edwina's basic understanding as well.

In the general literature, I find that the use of 'hypothesis' and 'abductive reasoning' does not conform to these subtleties. LLMs are inductive in their approach, conjuring up hypotheses largely based on priors, but one can also prompt them with new hypotheses and work interactively to explore new territory. That is the research-assistant mode I mentioned previously.

So, the first paper from today [1] really gets at this general topic. I think it speaks pretty well to the lack of innovative, 'new' hypotheses arising from current LLMs; the hypotheses offered by LLMs are generally priors. Also, please note there are AI capabilities, from outfits like DeepMind with its protein-folding work, that operate on different bases than chatbots. There are some new hypothesis-generating capabilities in AI, but these approaches (to my knowledge) are not yet incorporated into LLMs.

The second paper that came out today [2] begins to show a glimmer of approaches that might get closer to a Peircean understanding of abduction. This paper [2] embraces the two main concepts: first, a way to create a representation of the general problem space (a lattice in this instance); and then, second, a way to use an ROI-like approach to scan the possible solution space. The paper is a very mathematically inclined one, working in a precise, quantitative setting, but the general idea is something that could perhaps be adapted to imprecise text and LLMs. Or, possibly, a Peircean abduction would require still other approaches . . . .
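To make the two-step reading concrete, here is a toy sketch (in Python) of abduction as generate-then-winnow: candidate explanations for a surprising fact are first spanned, then ranked by plausibility-weighted expected reward per unit investigation cost. This is only an illustration of the general idea; the hypothesis labels, numbers, and scoring rule are all invented for the example, and it is not the lattice/ROI method of paper [2].

```python
# Toy sketch of the two-step abductive scheme:
# (1) span a space of candidate explanations for a surprising fact,
# (2) winnow them pragmatically by expected reward vs. investigation cost.
# All names and numbers are illustrative only.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    label: str
    plausibility: float      # rough sense it would explain the surprise (0..1)
    expected_reward: float   # payoff if the hypothesis is confirmed
    test_cost: float         # cost of investigating it

def winnow(candidates, budget):
    """Rank candidates by expected value per unit cost; keep those within budget."""
    ranked = sorted(candidates,
                    key=lambda h: h.plausibility * h.expected_reward / h.test_cost,
                    reverse=True)
    chosen, spent = [], 0.0
    for h in ranked:
        if spent + h.test_cost <= budget:
            chosen.append(h)
            spent += h.test_cost
    return chosen

# The Neptune example, loosely: candidate explanations for an orbital anomaly.
candidates = [
    Hypothesis("instrument error", 0.6, 2.0, 1.0),
    Hypothesis("unknown perturbing body", 0.2, 10.0, 4.0),
    Hypothesis("law needs revision", 0.05, 50.0, 20.0),
]
for h in winnow(candidates, budget=5.0):
    print(h.label)   # → instrument error, unknown perturbing body
```

The hard part, of course, is what this sketch assumes away: generating the candidate list in the first place, which is exactly where Peircean abduction does its creative work.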

My own conclusion is that using LLMs as human-in-the-loop research assistants is the current, most productive way to use AI for Peircean-related questions and investigations. They do not innovate new solutions without the interpretant (human) prompting the right questions and premises, but they may surface previously unforeseen connections that the human might use to formulate new hypotheses. One good thing: ChatGPT or whatever never gets tired!

I will again try to hold my tongue.

Best, Mike

[1] https://arxiv.org/pdf/2412.13645
[2] http://arxiv.org/abs/2412.12361


On 12/18/2024 3:19 PM, Edwina Taborsky wrote:

Gary R, Mike, Tuezuen, Frederik,

I remain unconvinced that AI has any capacity for abduction. As I outlined in a previous post, I consider that induction [Secondness] and deduction [Thirdness] are its strengths. Abduction [Firstness] is, I feel, outside of its range. I outlined my reasoning and won't repeat it.

I think that the massive and rapid inductive capacities of AI enable it to expand the range of a hypothesis, which could indeed suggest the inclusion of observed facts within a hypothesis. Abduction, to my understanding, begins without a hypothesis and is focused on choosing one.

As Peirce points out [7.218, 1902], abduction operates by resemblance [which I would define as Firstness] while induction operates by contiguity [Secondness].

Or "the invention, selection and entertainment of the hypothesis" [HP 2.895, 1901].

Therefore, I continue to think that AI is extremely useful in inductive exploration - with the capacity to organize and critique data within existent hypotheses [and thus expand the function of these hypotheses].

But - since abduction is operative in Firstness, 'guessing', I don't see AI as having the capacity for it, i.e., the creation of totally novel hypotheses or Forms or Habits. An example: the development of a totally new beak in a finch; the emergence of a new species [genetic mutation, etc.].

Edwina




On Dec 18, 2024, at 3:40 PM, Gary Richmond <[email protected]> wrote:

Frederik, Mike, Tuezuen, Daniel, List,

For the last couple of years I have been dabbling in various AI programs, including those which generate visual images and diagrams. I have mainly used ChatGPT for any number of purposes, but principally for information gathering (it hasn't completely replaced search engines and Wikipedia, but I use it frequently when I'm looking for specific information and not, say, the kind of overview which Wikipedia offers).

Frederik, your observation that LLMs frequently 'hallucinate' since "abduction is neither necessary (like deduction) nor probable (like induction)" rings true to me. On the other hand, abduction viewed as retroduction (reasoning from effect to cause) seems nearly 'tailor-made' for AI, which can search those myriad databases for connections which might prompt plausible hypotheses. Does that seem correct?

In my first post in this thread I noted how AI has proven useful in generating hypotheses in the medical field (I've probably read more about this field of AI hypothesis generation than any other because of some tech-friendly, AI-enthusiastic physicians I know in the NYU Langone health system).

I'm still quite interested in List members' thoughts about the 3 questions I concluded my original post with, namely: 1. How potentially valuable do you think AI will be in various disciplines, especially those fields in which one has some expertise? 2. What are the potential dangers of AI? To which I'd add: Can they be circumvented? How? 3. Is the role of the scientist (the creative 'hypothesizer') jeopardized?

The rest of this post is a ChatGPT outline of some of the fields in which AI has successfully generated hypotheses (including examples) and, in some cases, even tested these hypotheses.
*****

    AI has shown considerable success in generating hypotheses across
    a variety of fields, disciplines, and sciences. Here are some of
    the most notable areas:


          *1. Healthcare and Medicine*

      o *Drug Discovery:* AI has been instrumental in identifying new
        drug candidates and repurposing existing drugs. Examples
        include identifying potential treatments for diseases like
        COVID-19 and rare genetic disorders.
      o *Diagnostics:* AI models have proposed novel diagnostic
        criteria, such as identifying biomarkers for diseases like
        cancer or Alzheimer's from genomic or imaging data.
      o *Genomics:* AI is used to hypothesize relationships between
        genes and diseases, and predict the functional impact of
        genetic mutations.


          *2. Biology and Biotechnology*

      o *Protein Folding:* AI systems like AlphaFold have solved
        hypotheses about protein structure, paving the way for
        advancements in molecular biology.
      o *Ecology:* AI has helped hypothesize the effects of climate
        change on ecosystems and species interactions.
      o *Synthetic Biology:* AI-generated hypotheses guide the design
        of engineered organisms for biofuels, medicine, or agriculture.


          *3. Physics and Astronomy*

      o *Astrophysics:* AI has helped hypothesize about the
        distribution of dark matter, the structure of the universe,
        and the identification of exoplanets.
      o *Quantum Physics:* AI models have been used to generate
        hypotheses about material properties and quantum states.
      o *Particle Physics:* AI helps analyze data from particle
        accelerators to hypothesize about fundamental particles.


          *4. Chemistry and Materials Science*

      o *Material Design:* AI hypothesizes the properties of novel
        materials for use in batteries, solar cells, or superconductors.
      o *Reaction Mechanisms:* AI can propose mechanisms for complex
        chemical reactions, speeding up the discovery of catalysts.


          *5. Social Sciences*

      o *Behavioral Patterns:* AI generates hypotheses about human
        behavior by analyzing large datasets, such as social media
        interactions or economic activity.
      o *Policy Impact:* AI models help hypothesize the effects of
        policy changes on societal outcomes like education, public
        health, or economic growth.


          *6. Environmental Science*

      o *Climate Modeling:* AI hypothesizes the potential effects of
        greenhouse gases, deforestation, and other factors on climate
        change.
      o *Sustainability:* AI generates hypotheses about renewable
        energy efficiency, waste management, and conservation efforts.


          *7. Economics and Finance*

      o *Market Predictions:* AI hypothesizes about market trends and
        economic conditions by analyzing large-scale financial data.
      o *Economic Modeling:* AI helps explore hypotheses about income
        inequality, employment trends, and consumer behavior.


          *8. Engineering and Technology*

      o *Robotics:* AI hypothesizes how to optimize robot design and
        functionality in various environments.
      o *Optimization Problems:* In fields like logistics, AI
        hypothesizes ways to improve efficiency and reduce costs.


          *9. Neuroscience and Cognitive Science*

      o *Brain Function:* AI helps generate hypotheses about how
        neural networks in the brain relate to behavior and cognition.
      o *Mental Health:* AI models propose new treatments for mental
        illnesses based on patterns in psychological and neurological
        data.


          *10. Education*

      o *Personalized Learning:* AI generates hypotheses about which
        teaching methods or materials work best for individual
        learning styles.
      o *Curriculum Design:* AI analyzes data to hypothesize about
        effective curriculum structures.


          *11. Agriculture*

      o *Crop Yields:* AI hypothesizes how weather patterns, soil
        types, and farming techniques affect yields.
      o *Pest Control:* AI models propose sustainable methods for
        pest management.


          *12. Linguistics and Natural Language Processing*

      o *Language Evolution:* AI helps hypothesize about how
        languages evolve over time.
      o *Semantic Analysis:* AI generates hypotheses about the
        relationships between linguistic structures and meanings.

    By leveraging vast amounts of data, AI not only generates
    hypotheses but also tests them through simulations or by guiding
    experimental design. Its versatility and data-driven approach
    make it an invaluable tool in advancing knowledge across disciplines.

Best,

Gary R


On Wed, Dec 18, 2024 at 5:02 AM Frederik Stjernfelt <[email protected]> wrote:

    Dear Mike, Gary, Tuezuen, list  –

    This is a great idea. This would also explain why LLMs
    “hallucinate” so much as they do, as abduction is neither
    necessary (like deduction) nor probable (like induction). Peirce,
    of course, stresses that abduction is indeed the source of new
    ideas but that it offers no assurance of their truth which has to
    be established by ensuing investigation using de- and inductions.

    I have only experimented with the free versions of ChatGPT and
    they are, indeed, highly error-prone.

    I tend to prefer the program Perplexity which is connected to a
    search engine which it utilizes to provide references to where it
    scraped its information.

    Best

    Frederik

    Frederik Stjernfelt: /Sheets, Diagrams, and Realism in Peirce/ –
    De Gruyter 2022

      * “Peirce as a Philosopher of AI”, in Olteanu et al.:
        /Philosophy of AI/, forthcoming

    *Fra: *<[email protected]> på vegne af Tuezuen
    Alican <[email protected]>
    *Svar til: *Tuezuen Alican <[email protected]>
    *Dato: *onsdag den 18. december 2024 kl. 08.43
    *Til: *Mike Bergman <[email protected]>, Gary Richmond
    <[email protected]>, Peirce-L <[email protected]>
    *Emne: *RE: [PEIRCE-L] AI and abduction

    Dear Mike and Gary,

    If I’m not mistaken, John Sowa already utilizes LLMs this way. He
    argues that LLMs are great for abductive conclusions, and later,
    with an Ontology, he checks whether that “hypothesis” is true or
    not. At least, that’s my interpretation of his work.

    @Mike Bergman, sorry for the duplication; I pressed reply
    instead of replying to everyone.

    Best Regards,

    *Dipl.-Ing. Alican Tüzün, BSc*

    PhD Candidate

    *University of Applied Sciences Upper Austria*

    *Josef Ressel Centre for Data-Driven Business Model Innovation*

    Wehrgrabengasse 1-3

    4400 Steyr/Austria

    LinkedIn: https://www.linkedin.com/in/t%C3%BCz%C3%BCnalican/

    Phone: +43 5 0804 33813

    Mobile: +43 681 20775431

    E-Mail: [email protected]

    Web: http://www.fh-ooe.at/imm

    Web: https://coe-sp.fh-ooe.at/

    *From:*[email protected]
    <[email protected]> *On Behalf Of *Mike Bergman
    *Sent:* Wednesday, 18 December 2024 02:52
    *To:* Gary Richmond <[email protected]>; Peirce-L
    <[email protected]>
    *Subject:* Re: [PEIRCE-L] AI and abduction


        


        

    Hi Gary,

    This is a topic near and dear to me, and one I am very actively
    investigating (and using) personally (mostly with ChatGPT 4-o1,
    but also the latest version of Grok). My first observation,
    granted based on my sample of one, is that abductive reasoning in
    a Peircean sense is lacking with current LLMs (large language
    models), as is true for all general ML or AI approaches. Machine
    learning and deep learning have been mostly an inductive process
    IMO. A major gap I have seen for quite some time has been the
    lack of abductive reasoning in most ML and AI activities of
    recent vintage.

    This assertion is most evident in the lack of "new" hypothesis
    generation by these systems, the critical discriminator that you
    correctly point out from Peirce. One can prompt these new chat
    AIs with new hypotheses, and in that form, they are very helpful
    and useful. It is for these reasons that I tend to treat current
    chat AIs as dedicated research assistants: able to provide very
    useful background legwork, including some answers that stimulate
    further questions and thoughts, often in a rapid fire
    give-and-take manner, but ones that are not creative in and of
    themselves aside from making some non-evident connections.

    I believe that better matching of current chat AIs with Peirce's
    thinking (esp abductive reasoning as he defined) is a
    particularly rich vein for next generation stuff. Lastly, my own
    personal view is that the current state of the art is not
    "dangerous", but we are also seeing very rapid increases of what
    Ilya Sutskever <https://en.wikipedia.org/wiki/Ilya_Sutskever>
    calls "superintelligence", the speed of which is pretty
    breathtaking. We may be close to tapping out on this current
    phase with most Internet content already captured for training,
    but like with LLMs, there are certainly new innovations not yet
    foreseen that may continue to maintain this Moore's law
    <https://en.wikipedia.org/wiki/Moore%27s_law>-like pace of
    improvements.

    Best, Mike

    On 12/17/2024 6:00 PM, Gary Richmond wrote:


        List,

        In a brief article, "How Does A.I. Think? Here’s One Theory"
        in the New York Times today, Peter Coy, after noting that
        "Computer scientists are continually surprised by the
        creativity displayed by new generations of A.I.," comments
         on one hypothesis that might help explain that 'creativity',
        namely, that AI is using abduction in its machine reasoning. 
        He writes:

            One hypothesis for how large language models such as o1
            think is that they use what logicians call abduction, or
            abductive reasoning. Deduction is reasoning from general
            laws to specific conclusions. Induction is the opposite,
            reasoning from the specific to the general.

            Abduction isn’t as well known, but it’s common in daily
            life, not to mention possibly inside A.I. It’s inferring
            the most likely explanation for a given observation.
            Unlike deduction, which is a straightforward procedure,
            and induction, which can be purely statistical, abduction
            requires creativity.

            The planet Neptune was discovered through abductive
            reasoning, when two astronomers independently
            hypothesized that its existence was the most likely
            explanation for perturbations in the orbit of its inner
            neighbor, Uranus. Abduction is also the thought process
            jurors often use when they decide if a defendant is
            guilty beyond a reasonable doubt.

        Yet Peirce argues in the 1903 Lectures on Pragmatism that
        only abduction "introduces any new idea" into a scientific
        inquiry:

            "Abduction is the process of forming an explanatory
            hypothesis. It is the only logical operation which
            introduces any new idea; for induction does nothing but
            determine a value, and deduction merely evolves the
            necessary consequences of a pure hypothesis."


        I had always thought of abduction as the unique domain of the
        individual scientist, the creative genius (say, Newton or
        Einstein) who, fully versed in the most important relevant
        findings in his field, retroductively connects those pieces
        of scientific information to posit a testable hypothesis
        concerning an unresolved question in science.

        But it makes sense that an AI program employing large
        databases might indeed be able to 'scan' those huge,
        multitudinous bases, connect the salient information, and
        posit a hypothesis (or some other abductive idea).
        Any thoughts on this? For example: Is it potentially a
        valuable feature and power of AI and, thus, for us (the use
        of AI in medical research would tend to support this view)?
        Is it a potential danger to us (some AI programs have been
        seen to lie, to 'hide' some findings, etc.; might this
        get out of control)? If AI can create testable hypotheses, is
        the role of the 'creative' scientist jeopardized?

        Best,

        Gary R

        _ _ _ _ _ _ _ _ _ _

        ARISBE: THE PEIRCE GATEWAY is now at

        https://cspeirce.com  and, just as well, at

        https://www.cspeirce.com .  It'll take a while to repair /
        update all the links!

        ► PEIRCE-L subscribers: Click on "Reply List" or "Reply All"
        to REPLY ON PEIRCE-L to this message. PEIRCE-L posts should
        go to [email protected] .

        ► To UNSUBSCRIBE, send a message NOT to PEIRCE-L but to
        [email protected] with UNSUBSCRIBE PEIRCE-L in the SUBJECT
        LINE of the message and nothing in the body.  More at
        https://list.iupui.edu/sympa/help/user-signoff.html .

        ► PEIRCE-L is owned by THE PEIRCE GROUP;  moderated by Gary
        Richmond;  and co-managed by him and Ben Udell.

--
    __________________________________________

    Michael K. Bergman

    319.621.5225

    http://mkbergman.com

    http://www.linkedin.com/in/mkbergman

    __________________________________________


