I apologize in advance for the length of the following explanation: I certainly feel implicated by the positions taken in favor of banning those who use AI from the ConTeXt discussion list, positions that are clearly based on a moral judgment, one which does not (yet) apply to the use of washing machines, scooters, hair dryers, pizza vending machines, cell phones, computers, and other machines that run on electricity and would contribute (I use the conditional) "to the relief of the human condition" (Lord Chancellor F. Bacon). So I would like to contribute modestly to the debate,
as I have already begun to do. I completely agree with Pablo when he emphasizes the need to declare the use of AI whenever it is used. I did not do so (although I did reread the summary produced from my questions to check for inconsistencies), and I was wrong, even though my primary objective was not at all to propose an eternal truth sprung from a brain other than my own, but to provide information synthesized by a computing device. After all, we are entitled to consider AI as an additional tool
about which everyone is free to have their own opinion, but which has a
slight tendency, and certainly a flaw in some cases, to imitate human
knowledge: in fact, I do not recommend asking AI to write a chapter of a
book (on non-technical subjects), because the result is a set of
platitudes that call into question the "I" in "AI"! But if a technical device can also assemble data and propose solutions that prove viable after verification, undoubtedly as quickly as, and in some cases much more efficiently than, any other device, I don't see why we shouldn't examine it.
There have been moral debates in certain circles about the uses of
certain operating systems. If anyone here uses Windows as a platform for
their work with ConTeXt (or for other purposes), or prefers an iMac, or
a Fedora or Ubuntu-type platform, there is no reason to call for the
immediate formation of resistance leagues against the hegemonic and abusive character of whichever system they happen to prefer, nor to proceed with
a kind of “purge” (I write this word with trembling fingers on the
keyboard, as it is a term that has been used in connection with
terrifying actions).
Nevertheless, the fear of AI, or more likely the generally skeptical and slightly fearful approach to what could become the structuring framework of human relations (and the dependence of human beings on all these computational processes), makes me realize that the exponential development of technology under the pressure of digitization makes us forget how old the change in lifestyle brought about by the use of electromagnetic energy already is. A German philosopher powerfully lamented the ravages in modern societies of "Zuhandenheit/Vorhandenheit" (in short, demanding the immediate availability of whatever we believe we need, and making just about everything available for consumption: this was the philosopher's criticism of the onslaught of modern technology.)
As Hans, who seems to me rather moderate and temperate at this early stage of the discussion, so aptly puts it, the question is not whether you used a screwdriver, an electric screwdriver, or called in a craftsman to install a camera in your garden, but why you did it. Hans doesn't use this example, but he points out, quite modestly, that the key (as in any other field) is to remain as reasonable as possible. Especially since
the use of various programming methods (via Perl, Python, Java, etc.)
consists of different approaches aimed at “automating routines” that we
don't want to have to do and redo over and over again by hand.
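(To make this concrete, here is a minimal sketch, in Python rather than Perl, of the kind of "routine automation" I have in mind; the file name and the old and new macro names are purely hypothetical placeholders.)

    # Minimal sketch: mechanically rename one macro throughout a long text file.
    # "long-text.tex", "\oldmacro" and "\newmacro" are hypothetical placeholders.
    import re
    from pathlib import Path

    text = Path("long-text.tex").read_text(encoding="utf-8")
    # Swap every whole-word \oldmacro for \newmacro and count the substitutions.
    updated, count = re.subn(r"\\oldmacro\b", r"\\newmacro", text)
    Path("long-text-updated.tex").write_text(updated, encoding="utf-8")
    print(f"{count} occurrences replaced")

Such a script saves hours of repetitive hand-editing without understanding a single word of the text it rewrites.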
There is something mechanically artificial about such automation (using a Perl script on a very long text, for example) which is not intelligent (neither "smart" nor "brilliant") but "clever." My opinion is that "tricks" (which are produced by "clever" minds) belong in the toolbox, not in the library, however impressive they may be. Finally,
I don't know whether having thousands of X-rays examined by a machine trained to detect tumors at an early stage allows a doctor to make a definitive diagnosis, or whether it remains a matter of fluctuating probabilities. In any case, I don't know whether I should file a lawsuit against such a doctor and have him struck off the Association of Oncologists because he himself has not had the long experience of viewing thousands of X-rays.
Similarly, what would we think of a restaurant chef who, after being awarded the medal for best chef in Belgium, admitted that the recipe that impressed the jury was suggested by ChatGPT, a recipe he "would have adapted according to his own sensibility"?
“He's a fraud!” might be one possible response. I don't know if the AI
that examines the thousands of legal texts produced by the European
administration is capable of giving legal advice that is “never” absurd.
But it is possible that a team of lawyers could verify the conclusions
reached by the computer. What I see is that a range of digital
techniques is being used to manage areas in which developed societies
need powerful tools.
From there, these systems can be developed for social control purposes
(for example, cell phones are currently used for all transactions in
China). It is therefore possible to reflect on this now very old
question: is the use of a technique in itself problematic, or does the
problem lie in the objective pursued?
Thank you, Hans, for your constant and impressive work and for your
sober remarks.
JP
On 19/12/2025 at 09:31, Hans Hagen via ntg-context wrote:
On 12/18/2025 11:54 PM, vm via ntg-context wrote:
Maybe this is the time to put a *complete ban* on any AI-generated text postings to this forum, now that we still can.
Before you realize it you'll be wasting your time replying to a machine whose sole purpose is to keep you distracted from your work.
Anyone who gets caught ought to be banned, forever.
As this forum is for (real) people to share and exchange thoughts and
information.
One cannot really put a ban on this. We don't put a ban on other technologies either. It's more about not using AI the wrong way. The problem is that, as a tool, generative ML can have its uses, although it can interfere badly with creativity. So it's about using it with care, and I have confidence that users here will take care of that. After all, we're not in a competitive space here (looking for the next typesetting hype every few years). Also, people will likely get bored with AI at some point, and companies relying on it will fade away; as history shows us, even large ones seldom survive that long.
So, take a manual or an example snippet: one can use these tools to write (generate) one, but where does the content come from? At some point one has to feed the system. Can you still call it your work and call yourself an author? I definitely don't want to end up editing stuff that I could just as well have written from scratch. The term author has to be recalibrated then.
The same has always been true for programming: with the exception of science-based algorithms beyond my imagination (think Perlin noise), it's more efficient to just look at the problem, think of a solution and write one (at least for me), and then I don't care if I spend more time on it than someone else would. How would I know anyway?
For the record: some time ago Frans G and I had a good laugh about his conversation with chat that ended up with funny mix-ups of ConTeXt and LaTeX syntax (commands, color specifications, etc.), but chat was very pleased about the positive feedback, which was then not applied. He turned it into a MAPS article. How are users supposed to know the truth? That is the question.
But also keep in mind that one can find rather weird *human* comments on the web (like SE) on e.g. ConTeXt from non-users that make one wonder whether they ever looked at it or are capable of figuring out TeX (beyond their narrow scope) at all. And those are indeed humans, maybe even considered experts. Part of the problem is that anyone can write / bash / complain / suggest anything these days, and some could actually have benefited from checking-by-AI first. When I first ran into what was presumed to be AI, it was actually called 'expert systems' (Prolog, Lisp times), and as far as I understood, experts were supposed to be involved, not web scrapers.
So ... no ban needed, as I'm not too worried here. Now back to extending manuals written in poor English,
Hans
-----------------------------------------------------------------
Hans Hagen | PRAGMA ADE
Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
tel: 038 477 53 69 | www.pragma-ade.nl | www.pragma-pod.nl
-----------------------------------------------------------------