On Sat, Jun 05, 2021 at 08:56:24PM +0200, Johan Corveleyn wrote:
> Hmm, I tried this patch with my "annotate large XML file with deep
> history test", but the result isn't the same as with 1.14. I'd have to
> investigate a bit more to find out where it took another turn, and
> what that diff looks like (and whether or not the "other diff" is a
> problem for me).

It would be good to know how many iterations your file needs to produce
a diff without a limit. You can use something like SVN_DBG(("p=%d\n", p))
to print this number.

Could it help to use the new heuristics only in cases where our existing
performance tricks fail? I.e. enforce a limit only when the number of
tokens common to both files is low, and the number of common prefix/suffix
lines relative to the total number of lines is low?
