> These sub-pixel shifts I must admit worry me a bit. [...]
Again, it seems to me that you are starting to ride the wrong horse.
What we have right now is, as far as I can see, good enough to handle
the specific case of slight image shifts. Please invest your energy
in the *other* problem, namely [...]
>> [*] After a MR has been merged you have to create a branch locally and
>> apply the changes manually.
>>
>
> Just so I understand: After merge, can I just checkout the repo at
> the commit right before the merge and build the baseline from that?
> Then I'd checkout HEAD and get my diffs?
On Sat, Aug 3, 2024 at 7:44 AM Werner LEMBERG wrote:
> Han-Wen wrote:
>
> > The idea is that we'd want to trigger less on diffuse lines of
> > difference (shifted staffline), but more on concentrated blobs
> > (disappearing symbol). Here is a suggestion
> >
> > * generate images without anti-aliasing [...]
On Sat, Aug 3, 2024 at 7:36 AM Werner LEMBERG wrote:
> > we need _tests_ [...]
>
> The case at hand is Harm's current work[...]
brilliant, thanks a lot
> [*] After a MR has been merged you have to create a branch locally and
> apply the changes manually.
>
Just so I understand:
After merge, can I just checkout the repo at the commit right before
the merge and build the baseline from that? Then I'd checkout HEAD and
get my diffs?
> I had an idea of what one could do, but really, we first need a
> representative test set of image pairs, both pairs with important
> differences and spurious differences, so we can see what a new
> algorithm looks like.
See my other e-mail to Luca on how to use one of Harm's MRs to do that.
On the other [...]
> And mind you: we need _tests_ not just the images, I want to
> investigate if making the images in a different format helps
> (with/without AA, messing with resolution, ... whatever the case
> might be)
The case at hand is Harm's current work, i.e.,
https://gitlab.com/lilypond/lilypond/-/mer
> The original example that you came up with was a false negative,
> namely a missing object that stayed unnoticed. Now we're discussing
> all kinds of complicated algorithms to reduce the probability of
> false negatives, while also trying to avoid false positives. My
> question is: Do we really [...]
Point taken. Maybe it would be good to take a step back, though. The
original example that you came up with was a false negative, namely a
missing object that stayed unnoticed. Now we're discussing all kinds
of complicated algorithms to reduce the probability of false
negatives, while also trying to avoid false positives. My question is:
Do we really [...]
"backshift" -> do something to negate the translation, for example compute
the bbox of the two and align the center of one to the other. Or maybe
better the midpoint of the top of the bbox of one to the other. Taking the
top-left corner might be an idea, but it makes it hard to realign by a
subpixel [...]
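A minimal NumPy sketch of that "backshift" idea (function names are mine, and it assumes grayscale arrays on a uniform white background): align the midpoint of the top edge of the two content bounding boxes before comparing.

```python
import numpy as np

def content_bbox(img, background=255):
    """(top, left, bottom, right) of the non-background pixels, half-open."""
    mask = img != background
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return rows[0], cols[0], rows[-1] + 1, cols[-1] + 1

def backshift(ref, img, background=255):
    """Translate img so the top-edge midpoint of its bbox matches ref's."""
    rt, rl, _, rr = content_bbox(ref, background)
    it, il, _, ir = content_bbox(img, background)
    dy = rt - it
    dx = (rl + rr) // 2 - (il + ir) // 2
    # np.roll wraps around; fine as long as content stays off the borders.
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)
```

After this, a pure integer translation compares exactly equal, while any real change survives the realignment.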
Yes we do need a few tests set aside for this. Agreed. I was thinking about
antialiasing and such, it's possible a different approach might work:
render at a somewhat higher resolution, then backshift and blur everything
a bit (say a Gaussian of 2 pixels or so, just a touch). That might make
enough [...]
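A rough sketch of that blur idea in SciPy (the sigma is just the "2 pixels or so" from above; the function name is mine): blur both renderings before taking the mean absolute difference, so a one-pixel shift of a thin staffline scores far lower than it does unblurred, while remaining nonzero.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def soft_mae(a, b, sigma=2.0):
    """Blur both images slightly, then take the mean absolute difference,
    so sub-pixel/one-pixel line shifts weigh less than missing symbols."""
    return np.abs(gaussian_filter(a.astype(float), sigma)
                  - gaussian_filter(b.astype(float), sigma)).mean()
```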
On Mon, Jul 29, 2024 at 11:10 AM Werner LEMBERG wrote:
>
>
> >> It would seem that though shifts and changes in the lengths of the
> >> staves are "common", small and relatively benign problems,
> >> rotations and scales (magnifications) should be considered major
> >> disasters, right?
> >
> > Rotations [...]
On Mon, Jul 29, 2024 at 4:54 PM Werner LEMBERG wrote:
> Please don't change the topic in this thread
>
yeah fair enough
--
Luca Fascione
> Or you run it "horizontally": [...]
Please don't change the topic in this thread on how to
improve/modify/whatever the regression test system. It runs just
fine, and IMHO we don't have to change that (except if you *insist* on
doing your suggested changes by yourself, also maintaining them for [...]
On Mon, Jul 29, 2024 at 3:32 PM Han-Wen Nienhuys wrote:
> On Mon, Jul 29, 2024 at 12:10 PM Luca Fascione
> wrote:
> > So once you have the above, you add hierarchies to the above so you can
> deploy a branch-and-bound strategy
> >
> > Make bigger tests that check several things at once (these are [...]
On Mon, Jul 29, 2024 at 12:10 PM Luca Fascione wrote:
> I was actually thinking about this situation upside-down from how you're
> seeing it,
> details below
>
> On Mon, Jul 29, 2024 at 10:30 AM Han-Wen Nienhuys wrote:
>>
>> On Mon, Jul 29, 2024 at 8:56 AM Luca Fascione wrote:
>> > [shifts are]
On Mon, Jul 29, 2024 at 12:20 PM Luca Fascione wrote:
>
>
>
> On Mon, Jul 29, 2024 at 11:09 AM Werner LEMBERG wrote:
>>
>> This means that it would be sufficient to make the Cairo backend also
>> create logging output. In case I'm not missing something this
>> shouldn't be too hard to add.
>
> Can this be done? [...]
On Mon, Jul 29, 2024 at 11:09 AM Werner LEMBERG wrote:
> This means that it would be sufficient to make the Cairo backend also
> create logging output. In case I'm not missing something this
> shouldn't be too hard to add.
>
Can this be done?
If one were to log all the calls and arguments to Cairo [...]
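If the backend calls were logged, the comparison turns into a text diff rather than a pixel diff. A hypothetical sketch (the `moveto`-style log format is invented here, not anything Cairo or LilyPond actually emits): round coordinates before diffing so sub-pixel jitter between backends compares equal.

```python
import re

def normalize_log(lines, precision=2):
    """Round every decimal number in a drawing-command log to `precision`
    digits, so tiny coordinate jitter doesn't register as a difference."""
    num = re.compile(r"-?\d+\.\d+")
    return [num.sub(lambda m: f"{float(m.group(0)):.{precision}f}", ln)
            for ln in lines]

def logs_differ(log_a, log_b):
    """True if the normalized logs differ anywhere."""
    return normalize_log(log_a) != normalize_log(log_b)
```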
Hi Han-Wen,
I was actually thinking about this situation upside-down from how you're
seeing it,
details below
On Mon, Jul 29, 2024 at 10:30 AM Han-Wen Nienhuys wrote:
> On Mon, Jul 29, 2024 at 8:56 AM Luca Fascione
> wrote:
> > [shifts are] going to be some random, non-integer quantity, right?
>> It would seem that though shifts and changes in the lengths of the
>> staves are "common", small and relatively benign problems,
>> rotations and scales (magnifications) should be considered major
>> disasters, right?
>
> Rotations do not generally happen. Virtually all the positioning is
> r
> I do like Michael's idea, used for selected tests (say maybe 50 to
> 80% of the tests). Agreed with Werner that there is the issue we
> don't have such a backend at the moment, although there is a
> question as to whether a normalizing postprocess on the SVG output
> might be enough, assuming [...]
On Mon, Jul 29, 2024 at 8:56 AM Luca Fascione wrote:
>
> Werner,
> Could you write me a line or two about these shifts we get?
> Like: they're going to be some random, non-integer quantity, right?
Yes, but since the comparison works on pixel images, you can't see the
non-integer part of the shift.
On Sun, Jul 28, 2024 at 10:44 PM Michael Käppler wrote:
> I would like to bring up the question if it is really necessary that we
> do all testing
> "end-to-end", i.e. from input ly code to pixel-based graphical output.
> IIUC, we have an intermediate graphical language, consisting of the
> various [...]
Werner,
Could you write me a line or two about these shifts we get?
Like: they're going to be some random, non-integer quantity, right?
Also, the rasterization that gets performed, is it anti-aliased?
It would seem that though shifts and changes in the lengths of the staves
are "common", small and relatively benign problems, rotations and scales
(magnifications) should be considered major disasters, right?
> I would like to bring up the question if it is really necessary that
> we do all testing "end-to-end", i.e. from input ly code to
> pixel-based graphical output.
It's definitely necessary to do that, since we regularly had rendering
issues with Ghostscript. However, ...
> IIUC, we have an intermediate [...]
Hi Werner et al.,
I am thinking about a different approach, knowing that I am likely
missing some details...
I would like to bring up the question if it is really necessary that we
do all testing
"end-to-end", i.e. from input ly code to pixel-based graphical output.
IIUC, we have an intermediate graphical language [...]
> On 28 Jul 2024, at 20:43, Werner LEMBERG wrote:
>
>>> On top of the MAE algorithm we currently use, another algorithm
>>> might increase the penalty for small feature differences so that it
>>> is above our threshold.
>>
>> It gets into the construction of image comparison algorithms.
>> Perhaps [...]
>> On top of the MAE algorithm we currently use, another algorithm
>> might increase the penalty for small feature differences so that it
>> is above our threshold.
>
> It gets into the construction of image comparison algorithms.
> Perhaps using some exponential for differences, or make another [...]
>
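One way to read that "some exponential" remark (my own sketch, not an agreed algorithm): keep MAE's averaging but push each per-pixel difference through an exponential, so a few strongly different pixels (a lost symbol) outscore many faint ones (antialiasing wiggle) even when the plain MAE of the two cases is identical.

```python
import numpy as np

def mae(a, b):
    """The current metric: mean absolute error, normalized to [0, 1]."""
    return np.abs(a.astype(float) - b.astype(float)).mean() / 255.0

def exp_mae(a, b, scale=0.1):
    """Exponentially weighted variant: a per-pixel difference d in [0, 1]
    contributes expm1(d / scale), amplifying concentrated differences."""
    d = np.abs(a.astype(float) - b.astype(float)) / 255.0
    return np.expm1(d / scale).mean()
```

The `scale` parameter is a free knob: below it, differences count roughly linearly; above it, they are amplified sharply.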
> On 28 Jul 2024, at 19:49, Werner LEMBERG wrote:
>
>> I'm quite sure that behind the scenes, for all GUIs, a metric gets
>>> computed to decide whether there are differences at all. We 'just'
>>> have to find one that fits our needs.
>>
>> It looks like you want a different property than an average metric [...]
>> I'm quite sure that behind the scenes, for all GUIs, a metric gets
>> computed to decide whether there are differences at all. We 'just'
>> have to find one that fits our needs.
>
> It looks like you want a different property than an average metric,
> so perhaps what you have can be modified
> On 28 Jul 2024, at 16:26, Werner LEMBERG wrote:
>
>
>>> Anyway, AFAICS, it doesn't provide what we need for LilyPond.
>>
>> It seems hard to find what you are asking for, as most programs are
>> GUI oriented.
>
> I'm quite sure that behind the scenes, for all GUIs, a metric gets
> computed to decide whether there are differences at all. [...]
>> Anyway, AFAICS, it doesn't provide what we need for LilyPond.
>
> It seems hard to find what you are asking for, as most programs are
> GUI oriented.
I'm quite sure that behind the scenes, for all GUIs, a metric gets
computed to decide whether there are differences at all. We 'just'
have to find one that fits our needs.
> On 28 Jul 2024, at 12:05, Werner LEMBERG wrote:
…
> Anyway, AFAICS, it doesn't provide what we need for LilyPond.
It seems hard to find what you are asking for, as most programs are GUI
oriented. Some inputs, though: A program switching the images so that
differences like in your example can [...]
> I made a net search on “image compare program”, and found the link
> below. It does capture the image difference in your post above,
> music is listed as an application, and it says it is “open source”
> without detailing.
>
> https://www.robots.ox.ac.uk/~vgg/software/image-compare/
Thanks.
> On 27 Jul 2024, at 18:56, Werner LEMBERG wrote:
>
> I've posted a question on StackExchange, searching for a better
> regtest comparison algorithm
>
>
> https://computergraphics.stackexchange.com/questions/14143/search-for-special-image-difference-metric
I made a net search on “image compare program” [...]
On Sat, Jul 27, 2024 at 6:56 PM Werner LEMBERG wrote:
>
>
> I've posted a question on StackExchange, searching for a better
> regtest comparison algorithm
>
>
> https://computergraphics.stackexchange.com/questions/14143/search-for-special-image-difference-metric
>
>
> Werner
>
We also
Hello Jürgen,
> maybe you want to do some normalization on the image as a
> preprocessing step before actually doing the comparison?
Yes, perhaps a two-stage algorithm is the way to go. However, a
complete shift of the image might be an indication of a problem, too,
so in the end I want an algorithm [...]
Ok awesome
On Sat, 27 Jul 2024, 21:35 Werner LEMBERG wrote:
>
> > Yes, I might be a moment due to friends visiting and such, but
> > definitely can.
>
> Great!
>
> > Could you get me going by pointing me to a few image pairs and an
> > indication (like you did on SE) of the defect you see?
>
> The example I gave on SE [...]
> Yes, I might be a moment due to friends visiting and such, but
> definitely can.
Great!
> Could you get me going by pointing me to a few image pairs and an
> indication (like you did on SE) of the defect you see?
The example I gave on SE is *the* example – a small object of about
the size of a n
Hi Werner,
hi all,
maybe you want to do some normalization on the image as a preprocessing
step before actually doing the comparison?
E.g. first crop the image, and then do the comparison. Of course, if
there is even a _large_ shift, you will no longer detect it at all after
normalization [...]
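That cropping step might look like this (a sketch assuming grayscale arrays with a uniform white background): crop both images to their content bounding box before comparing, which cancels pure translation, with exactly the caveat above that a genuine shift then becomes invisible.

```python
import numpy as np

def crop_to_content(img, background=255):
    """Crop to the bounding box of the non-background pixels."""
    mask = img != background
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```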
On Sat, Jul 27, 2024 at 8:18 PM Werner LEMBERG wrote:
> Can you provide a demo?
>
Yes, I might be a moment due to friends visiting and such, but definitely
can.
Could you get me going by pointing me to a few image pairs and an indication
(like you did on SE) of the defect you see?
Python/NumPy/SciPy [...]
> Werner the case you have on SE seems to indicate you need a
> translation invariant test, I think.
Whatever you say :-) Great that there are people on this list who can
actually contribute to the topic.
> In your case you could compute min diff over all possible
> translations that would bring [...]
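A brute-force sketch of that translation-invariant test (the name and the search radius are mine): take the minimum MAE over all small integer translations, so a plain shift scores near zero while a missing object stays expensive at every offset.

```python
import numpy as np

def min_shift_mae(a, b, max_shift=3):
    """Minimum mean absolute error over all integer translations of b
    within +/- max_shift pixels in each direction."""
    best = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # np.roll wraps around; acceptable while content is off the borders.
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            best = min(best, np.abs(a - shifted).mean())
    return best
```

For production-sized images one would compute this with a cross-correlation rather than a double loop, but the loop makes the idea plain.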
At my previous work I had written the image comparison framework for our
regression test suite, it worked very well for us and it was in
python/numpy/scipy (which threads well internally).
Werner the case you have on SE seems to indicate you need a translation
invariant test, I think. Another thing [...]
FWIW, I wrote a version of the comparison in Go that does the entire
comparison in-memory, without shelling out to any program. I did this
because it parallelized much better (i.e., it is faster), but it also means
you can easily test alternative algorithms.
See here: https://github.com/hanwen/lilypond
I've posted a question on StackExchange, searching for a better
regtest comparison algorithm
https://computergraphics.stackexchange.com/questions/14143/search-for-special-image-difference-metric
Werner