* chris <[email protected]> [2026-03-13 18:50]:
> On Wednesday, 11 March 2026 07:31:27 UTC Jean Louis wrote:
> > * chris <[email protected]> [2026-03-09 22:40]:
> > > I completely understand the problem of having to sift through large
> > > amounts of poor code that arrive frequently.
> > > 
> > > But how can you tell whether the code was generated by an LLM?
> > > 
> > > The problem with LLMs is that poor code can easily resemble good code,
> > > which makes triage more difficult.
> > > 
> > > Again: how can you tell if the code is from an LLM and, if so, to what
> > > extent?
> > > 
> > > It's a complicated question to enforce in practice.
> > 
Thank you for the confirmation!
> > 
> > I understand how maintainers try to “minimize their burden,” but they
> > are imposing impossible and unenforceable conditions, effectively
> > adding to their workload.
> > 
> > Instead of focusing on which user used which LLM to correct or
> > generate code, maintainers should focus on the usefulness of the
> 
> 
> Actually, I'm not sure that I agree, especially regarding the usefulness 
> aspect. I don't think there is any.

I am fully confident that generalized statements hold no value in
intelligent discussion.

An LLM is just another tool. Generalizing and protesting without
examining its actual utility is not wise.

Personally:

The advent of Large Language Models (LLMs) has introduced a
transformative capability: automating the creation of test
functions. Rather than manually writing code, I can delegate these
repetitive tasks to the computer, allowing it to operate autonomously
and resolve bugs without altering source files. This leap in
technological advancement, which I anticipated during the 20th
century, is truly remarkable to witness in the 21st.

That is just one example among many from my personal experience.

I would prefer that you present such things as personal experiences
and do not generalize, as many people are reading and being
influenced by generalized statements.

Let's go back in time, before spelling-correction features existed.
There were definitely people who would ask "Why would I need a
spelling-correction feature?" and say "they are not useful".

We can talk and argue, but the fact is that... let me ask DeepSeek to
help: https://chat.deepseek.com/

> like count-matches, need function:

> count-lines-matching

> which will count single time one matching word per line in the buffer,
> to get how many lines match the word "spell" in Elisp

Answer:

(defun count-lines-matching (regexp)
  "Count number of lines in current buffer that match REGEXP.
Counts a line only once even if it contains multiple matches."
  (interactive "sCount lines matching regexp: ")
  (save-excursion
    (goto-char (point-min))
    (let ((count 0))
      (while (re-search-forward regexp nil t)
        ;; Move to beginning of line to avoid counting same line multiple times
        (beginning-of-line)
        ;; Increment count
        (setq count (1+ count))
        ;; Move to next line to continue search
        (forward-line 1))
      (when (called-interactively-p 'any)
        (message "%d line%s match%s \"%s\""
                 count
                 (if (= count 1) "" "s")
                 (if (= count 1) "es" "")
                 regexp))
      count)))

Maybe the function doesn't work perfectly; I would need to test
it. But that is not very relevant: the point is that it can count
matches, even if there is some glitch. I leave it as is, because I
only want to count one match per line, more or less.
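
Anyone who wants to verify the function can do so from Lisp, without
touching any file, for example in a temporary buffer:

```elisp
;; Assumes `count-lines-matching' from above has been evaluated.
(with-temp-buffer
  (insert "spell one\nnothing\nspell twice spell\n")
  (count-lines-matching "spell"))
;; => 2  (the third line contains two matches but is counted once)
```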

Okay, let me look at the buffer of M-x package-list-packages, which
includes MELPA on my side.

M-x count-lines-matching for "spell"

It says 28 lines. I did not do thorough research, but I assume there
are about 28 packages related to "spelling".

There are definitely many people today not using spelling correction,
but the fact is that M-x package-list-packages and M-x
count-lines-matching give me those counts for "spell" in that buffer.

Personally, I do not use spelling correction very often, but I can
see there are authors who do use it and build packages around it.

Do I need a list of scientific references to understand that a
spell-checking function in any kind of text editor is useful? I
don't.

I am not even going to discuss whether spell checking is useful or
not, because the evidence is overwhelming.

How about searching for "AI", a term not publicly used in GNU
society, though it is certainly present in MELPA:

- ai-code
- aider
- aidermacs
- claudia
- eca
- gptel-aibo
- ollama-buddy
- tabby-mode
- tabbymacs

So if it is not useful to one person, it was definitely useful to
those package authors.

> First, there's bad code masquerading as good code, which creates a heavy 
> workload for the maintainer and doesn't produce any useful work. It's akin to 
> a DOS attack on reviewer.

The generalization that code produced by language models is inherently
bad and imposes a burden ignores that the quality of code depends on
how the tool is used, not on the tool itself; manually written code
can also create work for maintainers when poorly crafted.

This concern applies equally to human-generated code, as poorly
written contributions from any source create the same burden on
maintainers, making the issue about quality control and contributor
responsibility rather than the tool used to produce it.

Here are three ways to instruct an LLM to produce higher-quality
code:

1. Provide specific context — Include relevant project patterns,
   coding standards, and examples of existing code to guide the model
   toward style-consistent output.

   Personally:

   - Language models know some languages better than others; Emacs
     Lisp is not very well supported. One solution: run Opencode
     https://opencode.ai/ and even the smallest models start to
     understand Elisp. Why? It runs with self-introspection, giving
     more context to the language model. It can run tests, invoke
     "emacs" on the command line to verify what is wrong, and even
     look up documentation for functions to correct itself. Without
     participating in new advances, we will have a slow future.

   - Another solution: an Emacs Lisp documentation MCP server. The
     Model Context Protocol lets the language model request the
     definition of any function it does not know, so it can understand
     that function. This removes any excuses about training data: the
     better the MCP server, the better the output.

2. Break down complex requests — Ask for smaller, focused functions
   rather than entire systems, then review and assemble them
   incrementally.

   Here are examples for breaking down complex requests into smaller,
   focused functions:

   Instead of: "Write a complete email client for Emacs"

   Break it down into:

   "Write a function that fetches email headers from an IMAP server
   and displays them in a buffer"

   "Write a function that opens a selected email and displays its body
   in a new buffer"

   "Write a function that composes a new email with a editable buffer
   and sends it via SMTP"

   "Write a function that saves attachments to a specified directory"

3. Request explanations alongside code — Instruct the model to include
   comments or a brief summary of its approach, ensuring you
   understand the logic before integration.

   Examples:

   Write a function that sorts files by modification date. Include
   comments explaining the approach and any potential edge cases
   handled.

   Create a function to parse CSV data. After the code, provide a
   brief summary of how it handles quoted fields and escaped
   characters.

   Write a recursive function to traverse a directory tree. In
   comments, explain why recursion was chosen over iteration and what
   safeguards prevent stack overflow.
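
   To give an idea, the first prompt might produce a sketch like this
   (the name is illustrative, and I have not tested it):

   ```elisp
   (defun my-files-by-mtime (directory)
     "Return absolute file names in DIRECTORY sorted by modification time.
   Newest files come first.  Approach: read each file's attributes and
   compare modification times with `time-less-p'.
   Edge cases: hidden files are excluded by the regexp; symbolic links
   are compared by the link itself, not by its target."
     (sort (directory-files directory t "\\`[^.]")
           (lambda (a b)
             (time-less-p
              (file-attribute-modification-time (file-attributes b))
              (file-attribute-modification-time (file-attributes a))))))
   ```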

======================================================================
Generalizing about the usefulness of any tool based solely on personal
experience ignores the countless examples of others finding value in
it, as evidenced by the existence of packages for spelling correction
and language model integration in Emacs.
======================================================================

> Second, clicking the button that says, "Please fix that bug and send
> the patch to the maintainer. Don't bother showing me the code; I
> don't care," doesn't carry out any work, and therefore isn't
> useful.

You may be working with a model that does not yet handle this
functionality effectively. I have not tested it, nor do I have
experience submitting patches in this area.

However, as technology advances, we will eventually reach a point
where submitting patches becomes seamless and error-free. While we are
still refining the process and learning from our experiences, the
trajectory of technological progress suggests that the complexities we
face today will soon be resolved, making patch submission nearly
effortless for everyone involved.

Git and patches will eventually become obsolete as technology advances
to a point where automated, seamless error correction eliminates the
need for manual patch submission and version control tools entirely.

Man-made programming will also become obsolete, since natural
language will serve as the primary way of commanding machines to
program on behalf of humans.

And further in the future, we may forget about programming
altogether, as machines decide when to program for humans.

It is futile to argue that today's LLMs cannot submit patches
correctly; did the author Chris ever try using the right tools? We
are merely observers of the present, but the future will bring such
profound changes that git and patching will be forgotten entirely.

I did not try these tools, but I can see humanity advancing:

relaycode - an AI-native patch engine that applies LLM responses
directly to the source tree; it watches the clipboard, parses code
blocks, applies patches, runs linters, and commits changes
URL: https://www.npmjs.com/package/relaycode

FlexiPatch - a forgiving patch utility designed for LLM-generated
code modifications; it handles the whitespace and line-ending
differences that break traditional patch tools
URL: https://github.com/ParisNeo/flexipatch

So I guess it must be useful for someone.

I am compiling complex business quotations, a core part of my sales
process involving curating media, contact details, and matching client
needs to investment scales for international partners. Language models
save me weeks or months of work, while Emacs Lisp minimizes my daily
effort.

> It's useful if someone else does measurable work for you. However,
> clicking carries an area under the work curve equal to
> zero. Therefore, there is no help there.

As I once said, it is envy disguised as ethics, a convenient moral
cloak for those who resent watching others accomplish in minutes what
took them years to master.

The sole purpose of computing is for computers to perform measurable
work for humans whenever possible. If humans are performing measurable
work on computers for other humans, it indicates that technology has
not yet advanced enough to achieve the ideal state of automation,
where machines handle tasks to minimize human effort.

-- 
Jean Louis
