List, Jeff: Jeff's post uncovered some aspects of LLMs that need deeper inspection.
In the following sense: What is the nature of human reasoning such that a consequence is drawn from antecedents? Internal processing of feelings, as intentional cognitive processes, proceeds from experience to action, without any necessity of “computation” or formal models containing subjects that contain predicates. So, what is the putative “parallel” or similarity with ChatGPT? What desires push or attract the assertions that inorganic processes are the same as organic processes? I would suggest and even assert that these language games are being manufactured and driven primarily by perceived economic opportunities, in crude attempts to make reality fit profit-generating algorithms. In other words, the re-making of the meanings of semio-logic terminology, through various forms of rationalization, is in full swing in order to justify these bizarre calculations. At what price? Cheers Jerry > On Dec 26, 2024, at 12:57 PM, Jeffrey Brian Downard <[email protected]> > wrote: > > Hello, > > Happy holidays. > > In the hopes of focusing on a particular question about the intelligence of a > system such as ChatGPT, let me draw a comparison between three relatively > recent computing machines. > > 1. 1978 Commodore PET desktop computer > 2. 2022 MacBook Pro laptop > 3. ChatGPT 3, 3.5 and 4 > > Which of the assertions being made in the discussion about AI and ChatGPT > would also apply to (1) and (2)? Which of the assertions apply only to > ChatGPT? That is, in what respects is ChatGPT more intelligent as a computing > system than an early-generation desktop or a current laptop computer? > > In an important sense, (1-3) are all computing machines. None possess the > typical hallmarks of a living system, such as a bacterium or a frog, which > have a high degree of self-sufficiency in terms of the growth, homeostasis, > reproduction and ongoing evolution of the system. > > All three computing machines run various sorts of programs, and each program > has its own logic. 
At the level of the user, the Commodore PET ran a native > form of BASIC as its high-level language. The logic of BASIC is a system of > Boolean algebra. > > When I needed to write a paper, I used a program called WordPro. We used > similar machines (Apple II) to download data about the stock and commodities > markets via modem and process the data using statistical analyses. I would > say that the early Commodore and Apple computers of the late 1970s and early > 1980s were performing deductive operations, and the statistical analyses > involved inductive operations. > > In what sense is a current MacBook Pro or ChatGPT doing something different? > > I’d like to frame the questions in terms of the arguments Turing presents in > his “Computing Machinery and Intelligence.” Which of the machines listed > above in (1-3) are discrete state machines? Which are not? > > --Jeff > > > > > > > > From: [email protected] <[email protected]> on > behalf of Frederik Stjernfelt <[email protected]> > Sent: Thursday, December 26, 2024 1:46 PM > To: Gary Richmond <[email protected]>; Frederik Stjernfelt > <[email protected]> > Subject: Re: [PEIRCE-L] AI and abduction > > PS Just to avoid misunderstandings: I do not claim LLMs use abductions only. > As the models are trained with vast material scraped from the internet, it > goes without saying that all that you find there will also be in the > models – including inductions and deductions – for they are there, > widespread in the text material they are fed. > > From: <[email protected]> on behalf of Frederik Stjernfelt > <[email protected]> > Reply to: Frederik Stjernfelt <[email protected]> > Date: Thursday, 26 December 2024 at 
13.59 > To: Gary Richmond <[email protected]>, Frederik Stjernfelt > <[email protected]> > Subject: Re: [PEIRCE-L] AI and abduction > > Dear Peircers – > > To me, there is no doubt that Deep Machine Learning uses abductions and that > this, like in ordinary life, is rife with guesswork, errors, new ideas > and creativity – the main aspects of abduction. Like in ordinary life, most > abductions are ordinary, trivial and error-prone. When my wife is late for > our appointment, I guess: maybe she was held up at work, maybe she met a > friend in the street, maybe she saw something in a shop window, etc. etc. > etc. – so many abductions. > > This is why abductions are in need of subsequent testing, and the problems > with the Large Language Models stem from the fact that they are not able to > state which of their claims are guesses, nor to perform any testing of them > to establish which of them are “hallucinations”. > > But if you are able to make such tests, AI hallucinations may be both > creative and useful: > > https://www.nytimes.com/2024/12/23/science/ai-hallucinations-science.html?unlocked_article_code=1.kU4.ncF2.oinNep2SaHa8&smid=url-share > > Happy new year! > Frederik > > From: <[email protected]> on behalf of Gary Richmond > <[email protected]> > Reply to: Gary Richmond <[email protected]> > Date: Tuesday, 24 December 2024 at 00.49 > To: Gary Fuhrman <[email protected]> > Cc: Peirce-L <[email protected]> > Subject: Re: [PEIRCE-L] AI and abduction > > Gary f, List, > > > Thanks for referencing Yuval Harari’s book Nexus. Although I haven't read it > (and must postpone doing so as I currently have some major vision issues > which need to be addressed), I have read several reviews online. 
It would > appear that Harari offers not only examples of what he sees as AI creativity, > but that he -- at least implicitly, as you noted, Gary -- offers answers to > the 3 questions I posted in my first message in this thread: > > 1) Is AI's abductive and creative capability potentially a valuable feature > and, if so, a powerful research tool? Yes. It's already proven itself to be > so in several fields. There is, indeed, a growing literature on this. > > 2) Is it also potentially dangerous; that is, does it have the potential to > 'get out of human control'? Yes. This seems to be a concern for several > scientists involved in the creation and development of AI. I think their > concerns need to be taken seriously. > > 3) Is the creative role of the scientist in abducing testable hypotheses in > any way jeopardized? Possibly, even likely. Not that natural, organic, human > moments of scientific "Aha!" won't continue to happen and prove to be of > inestimable value, but that as the power of AI as a research tool continues > to develop, there may be a reduction of the need for it in certain inquiries > (again, AI's use in medicine -- including medical technology -- suggests > this). > > > From the reviews I've read, that historical context -- which Harari > apparently covers in some considerable detail -- appears as a strength to > some reviewers, a weakness to others. I guess I'll have to see for myself in > the new year. However, the second part of his book would appear to address > contemporary questions regarding machine 'intelligence'. > > As both of us have suggested, to the extent that abduction in AI involves (is > essentially?) pattern recognition, as retroduction (from effect to cause) it > has the advantage over human agents of observing very large numbers of > patterns found in huge databases. As you commented off List, "AI is much > better at [this] than humans when it has virtually infinite access to the > data in which the patterns appear. 
And it already has far more data about > humans than humans have about it." > > I think that for now -- and especially given the relatively recent addition > of AI to our stockpile of research tools (which is essentially how I've been > viewing it) -- the jury must be out and many of our conclusions seen as > tentative. But given the rapid growth of AI (and not just in chatbots to be > sure!), thinking together about its use and potential abuse is important work > for scientific (and not only scientific) communities to be undertaking. > > > This discussion brought to mind the first International Conference on > Conceptual Structures I attended in 2001 at Stanford University. One of my > fondest memories of that conference was meeting the late Doug Engelbart, a > visionary thinker regarding the human future of technology (also, the > inventor of the computer mouse among other things). As early as the late > 1960s he saw machine intelligence as having the potential for Augmenting > Human Intelligence > (https://csis.pace.edu/~marchese/CS835/Lec3/DougEnglebart.pdf). > > > His idea eventually became known as IA (Intelligence Augmentation) which > seemed to me to balance AI with a humane counterpart. At the time there was a > great deal of intellectual activity around the idea of the Semantic Web. Yet > under Engelbart's influence, Aldo de Moor, Mary Keeler, and I collaborated on > "Towards a Pragmatic Web," what we believe to be the first paper to take up > the possibility of bridging the gap between AI and IA in the direction of > Peircean pragmaticism. > > > Best, > > Gary R > > > On Mon, Dec 23, 2024 at 8:12 AM <[email protected] > <mailto:[email protected]>> wrote: > If hypothesis generation is one form of abduction, which I think it is, then > it’s clear enough that AI has been doing that for some time. But it doesn’t > do it the same way that humans do, i.e. 
according to the “light of nature,” > as Peirce affirmed (EP2:54-5, CP 5.589 and elsewhere), because AI > intelligence is inorganic. > > This is one of several points that Yuval Harari makes in Nexus, which traces > the development of information networks through history. They have been used > in the service of either truth or social order, with the emphasis shifting > back and forth at various times, and AI represents a radical shift in the > nature of these networks. This suggests some plausible answers to the > questions originally posed by Gary at the beginning of this thread. > > Love, gary f. > > Coming from the ancestral lands of the Anishinaabeg > > } What a thing means is simply what habits it involves. [Peirce] { > https://gnusystems.ca/wp/ }{ Turning Signs <https://gnusystems.ca/TS/> > > From: [email protected] > <mailto:[email protected]> <[email protected] > <mailto:[email protected]>> On Behalf Of Gary Richmond > Sent: 18-Dec-24 15:40 > To: Frederik Stjernfelt <[email protected] <mailto:[email protected]>> > Cc: Tuezuen Alican <[email protected] > <mailto:[email protected]>>; Mike Bergman <[email protected] > <mailto:[email protected]>>; Peirce-L <[email protected] > <mailto:[email protected]>> > Subject: Re: [PEIRCE-L] AI and abduction > > Frederik, Mike, Tuezuen, Daniel, List, > > > For the last couple of years I have been dabbling in various AI programs > including those which generate visual images including diagrams. I have > mainly used ChatGPT for any number of purposes but principally for > information gathering (it hasn't completely replaced search engines and > Wikipedia, but I use it frequently when I'm looking for specific information > and not, say, the kind of overview which Wikipedia offers). > > > Frederik, your observation that LLMs frequently 'hallucinate' since > "abduction is neither necessary (like deduction) nor probable (like > induction)" is well taken. 
On the other hand, abduction viewed as retroduction (reasoning > from effect to cause) seems nearly 'tailor-made' for AI which can search > those myriad databases for connections which might prompt plausible > hypotheses. Does that seem correct? > > > In my first post in this thread I noted how AI has proven useful in > generating hypotheses in the medical field (I've probably read more about > this field of AI hypothesis generation than any other because of some > tech-friendly, AI-enthusiastic physicians I know in the NYU Langone health system). > > > I'm still quite interested in List members' thoughts about the 3 questions I > concluded my original post with, namely: 1. How potentially valuable do you > think AI will be in various disciplines, especially those fields in which one > has some expertise? 2. What are the potential dangers of AI? To which I'd > add: Can they be circumvented? How? 3. Is the role of the scientist (the > creative 'hypothesizer') jeopardized? > > > The rest of this post is a ChatGPT outline of some of the fields in which AI > has successfully generated hypotheses (including examples) and, in some > cases, even tested these hypotheses. > > ***** > > ***** > > AI has shown considerable success in generating hypotheses across a variety > of fields, disciplines, and sciences. Here are some of the most notable areas: > 1. Healthcare and Medicine > > Drug Discovery: AI has been instrumental in identifying new drug candidates > and repurposing existing drugs. Examples include identifying potential > treatments for diseases like COVID-19 and rare genetic disorders. > Diagnostics: AI models have proposed novel diagnostic criteria, such as > identifying biomarkers for diseases like cancer or Alzheimer's from genomic > or imaging data. > Genomics: AI is used to hypothesize relationships between genes and diseases, > and predict the functional impact of genetic mutations. > 2. 
Biology and Biotechnology > > Protein Folding: AI systems like AlphaFold have effectively solved the problem of predicting > protein structure, paving the way for advancements in molecular biology. > Ecology: AI has helped hypothesize the effects of climate change on > ecosystems and species interactions. > Synthetic Biology: AI-generated hypotheses guide the design of engineered > organisms for biofuels, medicine, or agriculture. > 3. Physics and Astronomy > > Astrophysics: AI has helped hypothesize about the distribution of dark > matter, the structure of the universe, and the identification of exoplanets. > Quantum Physics: AI models have been used to generate hypotheses about > material properties and quantum states. > Particle Physics: AI helps analyze data from particle accelerators to > hypothesize about fundamental particles. > 4. Chemistry and Materials Science > > Material Design: AI hypothesizes the properties of novel materials for use in > batteries, solar cells, or superconductors. > Reaction Mechanisms: AI can propose mechanisms for complex chemical > reactions, speeding up the discovery of catalysts. > 5. Social Sciences > > Behavioral Patterns: AI generates hypotheses about human behavior by > analyzing large datasets, such as social media interactions or economic > activity. > Policy Impact: AI models help hypothesize the effects of policy changes on > societal outcomes like education, public health, or economic growth. > 6. Environmental Science > > Climate Modeling: AI hypothesizes the potential effects of greenhouse gases, > deforestation, and other factors on climate change. > Sustainability: AI generates hypotheses about renewable energy efficiency, > waste management, and conservation efforts. > 7. Economics and Finance > > Market Predictions: AI hypothesizes about market trends and economic > conditions by analyzing large-scale financial data. > Economic Modeling: AI helps explore hypotheses about income inequality, > employment trends, and consumer behavior. 
> 8. Engineering and Technology > > Robotics: AI hypothesizes how to optimize robot design and functionality in > various environments. > Optimization Problems: In fields like logistics, AI hypothesizes ways to > improve efficiency and reduce costs. > 9. Neuroscience and Cognitive Science > > Brain Function: AI helps generate hypotheses about how neural networks in the > brain relate to behavior and cognition. > Mental Health: AI models propose new treatments for mental illnesses based on > patterns in psychological and neurological data. > 10. Education > > Personalized Learning: AI generates hypotheses about which teaching methods > or materials work best for individual learning styles. > Curriculum Design: AI analyzes data to hypothesize about effective curriculum > structures. > 11. Agriculture > > Crop Yields: AI hypothesizes how weather patterns, soil types, and farming > techniques affect yields. > Pest Control: AI models propose sustainable methods for pest management. > 12. Linguistics and Natural Language Processing > > Language Evolution: AI helps hypothesize about how languages evolve over time. > Semantic Analysis: AI generates hypotheses about the relationships between > linguistic structures and meanings. > By leveraging vast amounts of data, AI not only generates hypotheses but also > tests them through simulations or by guiding experimental design. Its > versatility and data-driven approach make it an invaluable tool in advancing > knowledge across disciplines. > Best, > Gary R > > > On Wed, Dec 18, 2024 at 5:02 AM Frederik Stjernfelt <[email protected] > <mailto:[email protected]>> wrote: > Dear Mike, Gary, Tuezuen, list – > > This is a great idea. This would also explain why LLMs “hallucinate” so much > as they do, as abduction is neither necessary (like deduction) nor probable > (like induction). 
Peirce, of course, stresses that abduction is indeed the > source of new ideas but that it offers no assurance of their truth, which has > to be established by ensuing investigation using de- and inductions. > > I have only experimented with the free versions of ChatGPT and they are, > indeed, highly error-prone. > I tend to prefer the program Perplexity, which is connected to a search engine > that it uses to provide references to where it found its information. > > Best > Frederik > > Frederik Stjernfelt: Sheets, Diagrams, and Realism in Peirce – De Gruyter 2022 > “Peirce as a Philosopher of AI”, in Olteanu et al.: > Philosophy of AI, forthcoming > > > From: <[email protected] > <mailto:[email protected]>> on behalf of Tuezuen Alican > <[email protected] <mailto:[email protected]>> > Reply to: Tuezuen Alican <[email protected] > <mailto:[email protected]>> > Date: Wednesday, 18 December 2024 at 08.43 > To: Mike Bergman <[email protected] <mailto:[email protected]>>, Gary > Richmond <[email protected] <mailto:[email protected]>>, Peirce-L > <[email protected] <mailto:[email protected]>> > Subject: RE: [PEIRCE-L] AI and abduction > > Dear Mike and Gary, > > If I’m not mistaken, John Sowa already utilizes LLMs this way. He argues that > LLMs are great for abductive conclusions, and later, with an ontology, he > checks whether that “hypothesis” is true or not. At least, that’s my > interpretation of his work. > > @Mike Bergman <mailto:[email protected]>, sorry for the duplication; I > pressed reply instead of replying to everyone. > > Best Regards, > Dipl.-Ing. 
Alican Tüzün, BSc > PhD Candidate > > University of Applied Sciences Upper Austria > Josef Ressel Centre for Data-Driven Business Model Innovation > Wehrgrabengasse 1-3 > 4400 Steyr/Austria > LinkedIn: https://www.linkedin.com/in/t%C3%BCz%C3%BCnalican/ > Phone: +43 5 0804 33813 > Mobil: +43 681 20775431 > E-Mail: [email protected] <mailto:[email protected]> > Web: www.fh-ooe.at <http://www.fh-ooe.at/imm> > Web: https://coe-sp.fh-ooe.at/ > > > > > From: [email protected] > <mailto:[email protected]> <[email protected] > <mailto:[email protected]>> On Behalf Of Mike Bergman > Sent: Wednesday, 18 December 2024 02:52 > To: Gary Richmond <[email protected] <mailto:[email protected]>>; > Peirce-L <[email protected] <mailto:[email protected]>> > Subject: Re: [PEIRCE-L] AI and abduction > > Hi Gary, > > This is a topic near and dear to me, and one I am very actively investigating > (and using) personally (mostly with ChatGPT 4-o1, but also the latest version > of Grok). My first observation, granted based on my sample of one, is that > abductive reasoning in a Peircean sense is lacking with current LLMs (large > language models), as is true for all general ML or AI approaches. Machine > learning and deep learning have been mostly an inductive process IMO. A major > gap I have seen for quite some time has been the lack of abductive reasoning > in most ML and AI activities of recent vintage. > > This assertion is most evident in the lack of "new" hypothesis generation by > these systems, the critical discriminator that you correctly point out from > Peirce. One can prompt these new chat AIs with new hypotheses, and in that > form, they are very helpful and useful. 
It is for these reasons that I tend > to treat current chat AIs as dedicated research assistants: able to provide > very useful background legwork, including some answers that stimulate further > questions and thoughts, often in a rapid-fire give-and-take manner, but ones > that are not creative in and of themselves aside from making some non-evident > connections. > > I believe that better matching of current chat AIs with Peirce's thinking > (esp. abductive reasoning as he defined it) is a particularly rich vein for next > generation stuff. Lastly, my own personal view is that the current state of > the art is not "dangerous", but we are also seeing very rapid increases of > what Ilya Sutskever <https://en.wikipedia.org/wiki/Ilya_Sutskever> calls > "superintelligence", the speed of which is pretty breathtaking. We may be > close to tapping out on this current phase with most Internet content already > captured for training, but as with LLMs, there are certainly new > innovations not yet foreseen that may continue to maintain this Moore's law > <https://en.wikipedia.org/wiki/Moore%27s_law>-like pace of improvements. > > Best, Mike > On 12/17/2024 6:00 PM, Gary Richmond wrote: > > List, > > In a brief article, "How Does A.I. Think? Here’s One Theory" in the New York > Times today, Peter Coy, after noting that "Computer scientists are > continually surprised by the creativity displayed by new generations of > A.I.," comments on one hypothesis that might help explain that 'creativity', > namely, that AI is using abduction in its machine reasoning. He writes: > > One hypothesis for how large language models such as o1 think is that they > use what logicians call abduction, or abductive reasoning. Deduction is > reasoning from general laws to specific conclusions. Induction is the > opposite, reasoning from the specific to the general. > > Abduction isn’t as well known, but it’s common in daily life, not to mention > possibly inside A.I. 
It’s inferring the most likely explanation for a given > observation. Unlike deduction, which is a straightforward procedure, and > induction, which can be purely statistical, abduction requires creativity. > > The planet Neptune was discovered through abductive reasoning, when two > astronomers independently hypothesized that its existence was the most likely > explanation for perturbations in the orbit of its inner neighbor, Uranus. > Abduction is also the thought process jurors often use when they decide if a > defendant is guilty beyond a reasonable doubt. > > Yet Peirce argues in the 1903 Lectures on Pragmatism that only abduction > "introduces any new idea" into a scientific inquiry: > > "Abduction is the process of forming an explanatory hypothesis. It is the > only logical operation which introduces any new idea; for induction does > nothing but determine a value, and deduction merely evolves the necessary > consequences of a pure hypothesis." > > I had always thought of abduction as the unique domain of the individual > scientist, the creative genius (say, Newton or Einstein) who, fully versed in > the most important relevant findings in his field, retroductively connects > those pieces of scientific information to posit a testable hypothesis > concerning an unresolved question in science. > > But it makes sense that an AI program employing large databases might indeed > be able to 'scan' those huge, multitudinous databases, connect the salient > information, and posit an hypothesis (or some other abductive idea). > > Any thoughts on this? For example: Is it potentially a valuable feature and > power of AI and, thus, for us (the use of AI in medical research would tend > to support this view)? Is it a potential danger to us (some AI programs have > been seen to lie, to 'hide' some findings, etc.; might this get out of > control)? If AI can create testable hypotheses, is the role of the 'creative' > scientist jeopardized? 
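The guess-first, test-later pattern running through this thread can be caricatured in a few lines of code. The following toy sketch treats abduction as selecting the highest-scoring candidate explanation for an observation; the hypotheses and numeric plausibility scores are invented for illustration, and no claim is made that any LLM actually reasons this way:

```python
# Toy sketch: abduction as "inference to the best explanation".
# All hypotheses and scores below are invented for illustration only.

def abduce(observation, hypotheses):
    """Return the candidate explanation whose plausibility score,
    given the observation, is highest -- the abductive 'best guess'."""
    return max(hypotheses, key=lambda h: h["score"](observation))

observation = "my wife is late for our appointment"

# Frederik's everyday examples, with made-up plausibility scores.
hypotheses = [
    {"name": "held up at work",            "score": lambda obs: 0.5},
    {"name": "met a friend in the street", "score": lambda obs: 0.3},
    {"name": "saw something in a shop",    "score": lambda obs: 0.2},
]

best = abduce(observation, hypotheses)
print(best["name"])
```

The sketch ends exactly where abduction ends: with a ranked guess. Establishing its truth would still require the subsequent deductive and inductive testing that Peirce insists on.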
> > Best, > > Gary R > > > > _ _ _ _ _ _ _ _ _ _ > ARISBE: THE PEIRCE GATEWAY is now at > https://cspeirce.com <https://cspeirce.com/> and, just as well, at > https://www.cspeirce.com <https://www.cspeirce.com/> . It'll take a while to > repair / update all the links! > ► PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON > PEIRCE-L to this message. PEIRCE-L posts should go to [email protected] > <mailto:[email protected]> . > ► To UNSUBSCRIBE, send a message NOT to PEIRCE-L but to [email protected] > <mailto:[email protected]> with UNSUBSCRIBE PEIRCE-L in the SUBJECT LINE of > the message and nothing in the body. More at > https://list.iupui.edu/sympa/help/user-signoff.html . > ► PEIRCE-L is owned by THE PEIRCE GROUP; moderated by Gary Richmond; and > co-managed by him and Ben Udell. > -- > __________________________________________ > > Michael K. Bergman > 319.621.5225 > http://mkbergman.com <http://mkbergman.com/> > http://www.linkedin.com/in/mkbergman > __________________________________________
