Hi John,

Thanks for your kind words about the book [1]. I included it in the prior thread, whose subject I have changed, because Alex had inquired about my theoretical interest in ChatGPT prompt engineering for possible assistance in ontology mapping. I try not to use these fora for self-promotion, but I welcomed the question nonetheless. In that regard, I also 'hijacked' the thread a bit. Besides being fallible, an essential point repeatedly emphasized by Peirce, we are all hypocritical at times. Color me guilty.

I would welcome discussing Peirce and KR topics with you should you have criticisms or observations based on my book (or otherwise). As a Peirce scholar, I hope you agree with me that Peirce is perhaps the philosopher/logician/mathematician /par excellence/ on this very topic. I think many on this forum, and the Peirce forum that I have added to this list, would benefit from learning more about his cogent insights.

Best, Mike

[1] https://www.mkbergman.com/a-knowledge-representation-practionary/

On 12/4/2023 5:54 PM, John F Sowa wrote:
Mike,

I apologize for not seeing the original note about your book.  I downloaded a copy, which is very interesting indeed.   I have only had a chance to browse through it.  But from what I've read so far, it appears to be an excellent overview of Peirce's theories and strong evidence of their importance for knowledge representation theory and practice.  I would recommend it for Ontolog subscribers as a presentation of Peirce's theories of logic and ontology and their use as a foundation for knowledge systems and applications.

There are topics and comments that I would quibble about.  For example, Peirce's existential graphs have the full expressive power of the ISO standard for Common Logic, and they are much clearer, simpler, and more powerful than OWL2.    But anybody who read or adopted the methods in your book could extend them to Common Logic.

My remarks below "have nothing to do with the topics and discussion of this thread" -- as you wrote in the note I had not seen.  Please note that your book has nothing to do with quantum mechanics (the subject line).  I was responding to the following point by Mike Denny:  "But is comparing quantum mechanics to pointillism indeed a clever idea?  Perhaps the analogy is more misleading than helpful."

GPT had found an article with just one sentence about that topic, and it did not cite the original source.  In the original, the person who made that comparison was a physicist who knew perfectly well that it was just a one-line comment that had very little justification in a deeper theoretical sense.  But GPT made it sound like a summary of a serious theory.   Since GPT did not cite the source, there was no way of knowing (1) who said it first, (2) what was the context, (3) what was the scientific justification for it,  and most importantly (4) how could the reader find the original source and check those points?

Those are very serious flaws of GPT.  I believe that my response to Mike D (which you quoted below) was justified.   People who understand the limitations of GPT can use it effectively -- as you do.   But the great majority of people (of all ages and backgrounds) include a huge number who do not understand its limitations.  For them, it can be highly misleading -- even to the danger point, if taken seriously.

John

------------------------------------------------------------------------
*From*: "Mike Bergman" <m...@mkbergman.com>

Hi John,

My god, John, your lack of self-introspection on my response to you is astounding. You respond:

As I keep repeating, I am enthusiastic about the LLM technology for many valuable purposes, such as the ones you mention.  But I have been reading many articles by GPT users and developers who are making very strong claims about what LLMs do.   Many of them claim that GPT is passing the Turing test for a human-level of intelligence.  Others are claiming that GPT technology is getting better every day, and it will soon make all other AI technology obsolete.

Whenever I see notes that repeat those claims I cannot let them pass.   What I do is emphasize several points:  (1) the most reliable applications use LLMs as a basis for translating languages, natural and artificial; (2) for question answering, their answers do not have citations that can be checked, and there is no way of knowing where they got their data.

A very dangerous trend:   Google stopped putting Wikipedia at the top left of the first page of their responses.  Instead, they put their own automatically generated answers (most likely generated by LLMs).  Sometimes their list of responses is useful.  But they are never as good, as comprehensive, or as accurate as well-researched Wikipedia pages.

Bing is even worse than Google.  They generate summaries without citations.   I do everything I can to avoid Bing, but it keeps hijacking (to use your word) the search when I don't want it.  For simple searches, I use Duck Duck Go.  It's closer to Google in the good old days:  clean searches with Wikipedia at the top left.  But I admit that I have to use Google when I need to search for more obscure items.
ALL of these points have nothing to do with the topics and discussion of this thread. These are your justifications for your beliefs, repeated ad nauseam, that bear no relation to any assertions in this thread. This fact, and it is a fact, is the basis for my earlier response to you. If you cannot recognize that your constant refrains about these points are a 'hijack' of a thread, then draw your own conclusions.

I really do not want to continue this. Let's find a way to let this conversation slowly echo into the silent vastness, and engage on fruitful topics another day.

Truly, my best, Mike

On 12/3/2023 9:32 PM, John F Sowa wrote:
As I keep repeating, I am enthusiastic about the LLM technology for many valuable purposes, such as the ones you mention.  But I have been reading many articles by GPT users and developers who are making very strong claims about what LLMs do.   Many of them claim that GPT is passing the Turing test for a human-level of intelligence.  Others are claiming that GPT technology is getting better every day, and it will soon make all other AI technology obsolete.

Whenever I see notes that repeat those claims I cannot let them pass.   What I do is emphasize several points:  (1) the most reliable applications use LLMs as a basis for translating languages, natural and artificial; (2) for question answering, their answers do not have citations that can be checked, and there is no way of knowing where they got their data.

A very dangerous trend:   Google stopped putting Wikipedia at the top left of the first page of their responses.  Instead, they put their own automatically generated answers (most likely generated by LLMs).  Sometimes their list of responses is useful.  But they are never as good, as comprehensive, or as accurate as well-researched Wikipedia pages.

Bing is even worse than Google.  They generate summaries without citations.   I do everything I can to avoid Bing, but it keeps hijacking (to use your word) the search when I don't want it.  For simple searches, I use Duck Duck Go.  It's closer to Google in the good old days:  clean searches with Wikipedia at the top left.  But I admit that I have to use Google when I need to search for more obscure items.
--
_ _ _ _ _ _ _ _ _ _
ARISBE: THE PEIRCE GATEWAY is now at https://cspeirce.com and, just as well, at https://www.cspeirce.com . It'll take a while to repair / update all the links!