Before anything, I should say that I'm not a programmer. I'm an
undergraduate in music and music teaching, and also a sound tech;
programming is a hobby that grew out of using Linux. With regard to
Scheme, I'm an absolute potato. Still, from what I've studied of
programming and computer science, and from my own experience, I'd like to
point out a few things.

*I asked Grok to self-design a prompt for accurately converting sheet music
images into Lilypond code. Early results are promising.*

*My test: I showed Grok a screenshotted page from a cello arrangement I
recently constructed in Lilypond, gave it the prompt below, compiled the
code it suggested, and compared the results*

I'd be wary of a self-designed prompt, and especially of black-box code.
Image recognition is a field of its own, not just one isolated program.

I try to write functions on my own until I hit a roadblock. Only after
searching Stack Exchange, the mailing lists, and Google, and still coming
up short, do I ask an AI. I try to be as clear as possible when prompting,
and to analyze its code and make sense of it. Even so, most of the time I
have to reformat the code or redo its internal logic. And whenever I ask
the community for guidance, the community always has better solutions to
my problems.

So my advice is to use AI at most to vibe-code small Scheme functions,
and in general to treat it like a fancy search tool; don't prompt it for
whole applications or for black-box functions. By small functions I mean
something on the scale of the sketch below.
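
To give a sense of scale, here is a hypothetical sketch (the function name
and the music are invented for illustration, and I'm assuming LilyPond
2.24's define-music-function syntax) of the kind of small, self-contained
Scheme function I mean, small enough that you can read every line and
check it against the Extending manual:

  \version "2.24.0"

  %% Hypothetical example: play a music expression twice.
  %% Small enough to verify by hand against the documentation.
  doubleMusic =
  #(define-music-function (music) (ly:music?)
     #{ \repeat unfold 2 #music #})

  { \doubleMusic { c'4 d' e' f' } }

Anything much bigger than that, and I can no longer tell whether the AI's
Scheme actually does what it claims.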

*This prompt might be a great shortcut for importing existing sheet music
into Lilypond code.*

*My test: I showed Grok a screenshotted page from a cello arrangement I
recently constructed in Lilypond [...]*

So you showed it a score you had already engraved in LilyPond, which
might have carried source code or additional information along with it.
Furthermore, what is the original score like? Did the AI use ritornellos
and segnos correctly, or did it just paste markup symbols? Did it at least
produce clean variables you could copy into another project (see the
sketch below)? How well does it do on other scores? Did you try printing
the result, scanning it, and analyzing it again? Have you tested
handwritten scores?
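
By clean variables I mean something shaped like this hypothetical sketch
(the variable name and the notes are invented, and again I'm assuming
LilyPond 2.24), where the music lives in a named variable, separate from
the \score block, so it can be pasted into any other project:

  \version "2.24.0"

  %% Hypothetical sketch: the music is a named variable, so it can
  %% be copied into another file independently of this \score.
  celloSolo = \relative c {
    \clef bass
    \key d \minor
    \time 3/4
    d4 e f |
    g a bes |
    a2. |
  }

  \score {
    \new Staff \celloSolo
  }

An AI that dumps one monolithic { ... } block is much less useful than
one that organizes the transcription this way.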

We mustn't lose track of what AI is: a smooth-talking, yes-man guessing
machine. It doesn't know what middle C is; it only guesses which string of
bits is likely to come next. Every single time I have used AI I have run
into hallucinated commands and had to steer it back on track.

Prompts aren't deterministic, either: the same prompt can give different
results from user to user and from model to model. If even code isn't
absolutely definite, how could AI output be?

On Wed, Jan 7, 2026, 08:28, Kieren MacMillan <
[email protected]> wrote:

> Hi Richard,
>
> > I gave grok a screenshot of a single movement/single page of a sonata
> > of my own composition. The written response sounded extremely
> > convincing - it  had detected that the piece was written in Baroque
> > style for a start - but in detail it was wildly out, it failed to
> > detect the time signature, declared that the piece was in a different
> > key from the correct one, failed to detect the systems beyond the first
> > one and generated LilyPond syntax that doesn't compile, and which,
> > while it showed insight into quite esoteric aspects of LilyPond, didn't
> > have any obvious relation to the notes in the piece of music it was
> > attempting to interpret.
> > That it sounded extremely convincing could be the worst aspect of AI
>
> That aspect — which is intentionally designed into AI — is definitely one
> of the terrible aspects of AI (cf. psychosis).
>
> > it would be easy for a person to innocently re-post the results of an
> > enquiry which the AI learning algorithms would find on the internet and
> > incorporate into future responses.
>
> This is already happening in a widespread manner (cf. model collapse).
>
> Cheers,
> Kieren.
