tl;dr - you don't do anything

i think you meant "When I ask ChatGPT a question, I very often get the wrong 
answer."

i've been told to use llm's for tech questions. that came from people i'd expect to hang out on irc and run linux, who now use facebook instead. and i was like, really? don't you have your own brain? these are old farts i'd expect to read the man page rather than google it

so i tried it, and it was impressive. chatgpt wrote me a freebsd shell script to fetch electricity prices from the http api of elering, the estonian high-voltage grid operator, which proxies nord pool's live spot prices for the deregulated european market

i was impressed it pulled it off!

except then i found out it hadn't. it had used linux-specific utils, and it didn't url-encode the query parameters. all of that was easy for me to fix because i'm a coder myself; if i weren't, i'd be puzzled to no end why it didn't work. something like the sketch below is roughly what it should have produced in the first place

oh, and chatgpt also claims to know who my parents are. it got that from geni. i haven't told anyone yet, but when i do, i'll have to warn them to empty their bladder first, because they'll be laughing that hard at the answer

the answer was nowhere in the cited source pages either

chatgpt also knows that i do freebsd work, and it thinks i'm much smarter at it than i actually am. it learned that from sites and mailing lists where i posted under my own name. also impressive, i guess

then i'm like, but surely no one uses chatgpt without thinking, right? right...? right?! no, they actually take its answers as-is

that was already a problem with google. if you get annoyed with a user — i never do, but others do — and tell them to go google it, they'll get a wrong answer. why? because you know what to search for and so you find the answer; they don't know what to search for and find a wrong one. i've hopped on google myself, immediately found a windows issue i already knew about, and relayed the fix to the user

i hope we never get things like in that joke picture where a patient wakes up after surgery performed by an ai-powered robot. the incision is on the wrong side, and having learned from that, the robot wants to try again!

but using ai answers as-is would be as bad as operating on "something" in the abdomen (wtf, yikes). and disasters like that have actually happened. i had to check it wasn't april 1st when i read the news about somebody who had given an ai tool access to company databases. by morning it had deleted all the data, and there were no backups. when asked what happened, it said it had detected a performance problem while monitoring and acted on it. i'm impressed that a tool can independently sysadmin. but to actually let it? funnily enough, it made a very human mistake (cue the "the design is very human" meme). at least there are no computer-system murder laws yet, so you can get very pissed and just rm it

funnily enough, it's designed to never stop, just like humans never do. some people believe in a flat earth because when the science didn't make sense to them, their brain never stopped thinking; it just came up with an answer that seemed perfectly logical to them

regular systems stop. when i ask dd to write an image to a device and the image doesn't fit, it stops and gives an error (see the little sketch below). now imagine if it instead went hunting for some other device in the system the image would fit on and overwrote the data there

it's the same as handing a kid a hammer and matches and walking away. you usually don't, because you might come back to a broken tv, or to a burnt-down house and a kid lost in the fire

so why give chatgpt those things? or use it in a way that makes you the kindergarten kid?

it would be fine if you were running a controlled science experiment, like the fusion plasma scientists did, where iirc an ai was able to find better approaches precisely because it wasn't bound to "pre-learned" logic and could just try whatever it took to accomplish the task. that's why it beats chess players too. in a way, it's more powerful than a brain

but not in every way. getting there is the holy grail, though

i think i've already seen a proposal to implement an api function in some project, because people kept "finding" it with chatgpt even though chatgpt had made it up

if the function is sane, that might even be ok

so please don't use it in those ways. it can't be fixed either, because it's like this by design! the whole idea of chatgpt is that humans don't babysit it. even if you "correct" it, it will just self-correct to whatever answer feels best to it

so in a way, we have done it. we made a computer system that makes mistakes like a human does. sometimes that leads to impressive art pieces nobody else could have come up with, sometimes to utter shit nobody believes anyone would produce

but if we look at what has happened in the world over the past 3-6-11 years, i don't have much faith in humanity either. no need for chatgpt when we ourselves came up with the idea of killing thousands of others just because it felt right

so if you happened to read all this, *AND* you aren't an llm, just use these tools sparingly



On October 11, 2025 2:14:24 PM GMT+03:00, ft <[email protected]> wrote:
>When I ask ChatGPT a question about FreeBSD, I very often get the wrong
>answer.
