Hi Wolfgang,

Wolfgang Pfeiffer wrote on Sat, Nov 01, 2025 at 03:28:46PM +0100:

> what people call AI (I call it software),

I strongly disagree with your terminology.

I do not think that what people usually call AI (in particular,
large language models and even applications of artificial neural
networks in general) can reasonably be called "software".  One
requirement i would impose before calling something "software" is
that at least some humans can understand what it does, and how.

If nobody can understand what it does, it is not a tool, because
the definition of the term "tool" is "a thing that helps with a
set of well-defined tasks", and hence "software" is "a computer
program that helps with a set of well-defined tasks".  A task
that nobody can even describe is not well-defined, so neither
term applies.

Nobody knows yet what AI is, but calling it a "tool" or "software"
is incredibly naive and irresponsible.  That's certainly compounded
further by the fact that even psychologists start scratching their
heads when you ask them what "intelligence" is, let alone how to
measure it.

A tool can be dangerous when mishandled: with a hammer, i might hit
my finger.  A chat bot can be dangerous even when used as intended.
For example, recognizing that the bot is hallucinating is a task
that can be as hard as whatever my original task was.  Occasionally,
it might even be objectively harder, and i'd go as far as guessing
it will be subjectively harder most of the time when factoring in
the various psychological biases that humans have.  Even abusing
an AI as a tool for a task where at first sight it seems relatively
an AI as a tool for a tasks where on first sight it seems relatively
benign - for example, generating a list of candidate URIs in a web
search when you then inspect those websites critically to find the
information you need - is problematic to such a degree that i reject
the designation "tool" even in such relatively carefully guarded
cases.  Every AI is known to contain biases, but what those biases
are is unknown.  Measuring those biases requires, among other tools,
a non-AI search engine - but non-AI search engines essentially no
longer exist, at least as far as i'm aware.  As a particle physicist,
i call that a systematic error of unknown and unbounded size, which
is the physicist's way of saying that the whole measurement is a
complete and utter failure that does not provide any information
whatsoever.
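
To make the systematic-error point concrete, here is a minimal
sketch in Python - the numbers and the function name are made up
purely for illustration, not taken from any real measurement - of
how an error of unknown, unbounded size swallows any result:

    def confidence_interval(measured, stat_err, syst_err):
        # Crudely combine statistical and systematic errors
        # into one symmetric interval around the measurement.
        # A bounded syst_err yields a finite, usable interval.
        total = stat_err + syst_err
        return (measured - total, measured + total)

    # Bounded systematic error: rough, but informative.
    print(confidence_interval(0.42, 0.05, 0.10))
    # -> roughly (0.27, 0.57)

    # Systematic error of unknown size, modelled as unbounded:
    # the interval is (-inf, inf), i.e. no information at all.
    print(confidence_interval(0.42, 0.05, float("inf")))
    # -> (-inf, inf)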

Is this relevant to OpenBSD?

Yes, in one sense it is.  Making software simple, reliable, and
secure is among the chief project goals of OpenBSD, and encouraging
code review is among the chief methods employed to further those
goals.

Go ahead and do some code review on your favourite AI.  Good luck.
  Ingo

P.S.
OpenBSD is also known for its stance that documentation matters.
Go ahead and write some documentation for your favourite AI
explaining to users how to use it and which results it will produce.
Again, good luck.
