I probably should just stay silent, but...

I did a lot of 'testing' with ChatGPT. I believe it is just a very sophisticated data collection/summarization tool. For example, I asked it to compare and contrast book x and book y, which were from different authors on the same subject. Then book x and book z, book y and book z, book x and book w, etc. The result was a set of very similar essays (mostly the same introduction and conclusion, with generalized differences in between). Clearly ChatGPT did not 'read' the books; it just found reviews/abstracts/summaries that others had published and identified the various differences (no real details).

I also asked it a dozen times or so to create a set of bylaws for a nonprofit youth sports organization, specifying slight differences in the organization each time. It came up with some very detailed, and some very vague, bylaws. If I were to guess, it was drawing on various published examples (which can be found in many, many places), so the content was based on what those examples contained. All were valid, but some were simply a framework while others were mostly complete.

My final test was to ask it to create a set of practice plans for a youth sports team (baseball, basketball, football, and water polo). I am a water polo coach, and it turns out there is very little published regarding specific drills, skills, or even strategies, because it is a niche sport (lots of videos from college coaches, but not very many written materials). For the mainstream sports, it produced a fairly detailed set of plans. For water polo, it was all very basic and not very detailed at all, which matches what I've found in my own searches. The 'creative' part would have been to take what other sports do and try to apply it to water polo, but it did not do that, even though I asked it to make plans a number of times. So no, I don't believe there is any creativity, insight, understanding, or other term that implies 'thought' going on right now in the current AI implementations.

My son calls it a very sophisticated Google search engine, and I don't think that is surprising to most here.

On 4/10/2023 1:09 PM, Matt Hogstrom wrote:
On Apr 10, 2023, at 3:10 PM, Bob Bridges <robhbrid...@gmail.com> wrote:

Rex, the scary part about AI is that nobody programmed it to respond that
way; it learned how to do so on its own.
I don’t believe AI “learned” anything; rather, it adjusted its algorithms based
on a set of changes in the probabilities of what the algorithm has ingested and
processed for its LLM.

Matt Hogstrom

“The person who says it cannot be done, should not interrupt the person who is 
doing it”





----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
