On 3/22/24 17:39, Ben Koenig wrote:
On Friday, March 22nd, 2024 at 5:04 PM, American Citizen 
<website.read...@gmail.com> wrote:

A few years ago, I took my Linux OS, openSUSE Leap 15.3 or so, and ran a check
on the documentation, i.e. the man1 through man9 pages (run "man man" to pull
up an overview of the sections), against the actual executables on the system.

I was surprised to find < 15% of the command executables were
documented. Naturally I was hoping for something like 50% to 75%.
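
A check along these lines is easy to script. Below is a minimal Python sketch,
assuming a man-db style man(1) where "man -w" prints the page location and
exits 0 when a page exists; it is just one way to reproduce such a count, not
necessarily how the original check was done:

    #!/usr/bin/env python3
    # Count how many executables on PATH have a man page.
    # "man -w NAME" (man-db) prints the page location and exits 0 if a page exists.
    import os
    import subprocess

    def executables_on_path():
        seen = set()
        for d in os.environ.get("PATH", "").split(os.pathsep):
            if not os.path.isdir(d):
                continue
            for name in os.listdir(d):
                path = os.path.join(d, name)
                if name not in seen and os.path.isfile(path) and os.access(path, os.X_OK):
                    seen.add(name)
        return sorted(seen)

    def has_man_page(name):
        return subprocess.run(["man", "-w", name],
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL).returncode == 0

    cmds = executables_on_path()
    documented = [c for c in cmds if has_man_page(c)]
    print("%d of %d executables have a man page (%.1f%%)"
          % (len(documented), len(cmds), 100.0 * len(documented) / max(len(cmds), 1)))

The same loop can also print the names that have no page at all, which is the
list any documentation effort would need to start from.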

If I were going to talk to an AI program, such as ChatGPT or one of the newer
popular AI chatbots, and ask it to generate the documentation for the complete
OS, which AI chatbot would you choose?

My idea is to clue the AI program in on the actual OS, then ask it to finish
documenting 100% of the executables, or to report back every executable that
has no available documentation at all, period.

This means the AI program would scour the internet for any and all
documentation for each command, and there are tens of thousands of executables
to examine (which is why I believe this is a job for AI).

Your thoughts?

- Randall

It would be an interesting experiment to see what it comes up with. I would
question the results, though, simply due to the quality of current LLM implementations.

From anecdotal experience: I recently bought an expensive Logitech keyboard, and
it was behaving strangely, so I tried to look up how to perform a "factory
reset" for this model. The search results I found via DDG were interesting: there
were multiple duplicate hits for what appeared to be a tech blog with generic instruction
pages for my device. However, there were multiple iterations of this page for this
keyboard model, each with instructions referencing physical features that do not
exist on the actual keyboard. These appeared to be AI-generated help pages that were
clogging up the real search results. They were very well written; if I hadn't had the
actual device in front of me, I might have actually believed that there was a pinhole
reset button next to the USB port.

If you do this, you may need to find a way to define a "web of trust" that
allows the AI to differentiate between human-written articles and AI-written
summaries. As it is right now, you might find yourself telling an AI to
summarize help pages that are
AI-written summaries of (
   AI-written summaries of (
     AI-written summaries of (
       AI-written summaries of (actual manuals)
     )
   )
)

Recursion FTW! :)

It seems inevitable that the AI serpent will stupidly eat its own tail and devolve into even more of a stochastic septic tank than it is now. If I were an investor, I would be shorting the AI bubble hard. To me, the only open question is whether humans get stupider faster than the machines do.

--
Russell
