On Wed, Apr 22, 2015 at 09:30:20PM +0200, erik elfström wrote:

> On Tue, Apr 21, 2015 at 11:24 PM, Jeff King <p...@peff.net> wrote:
> >
> > If I understand correctly, the reason that you need per-run setup is
> > that your "git clean" command actually cleans things, and you need to
> > restore the original state for each time-trial. Can you instead use "git
> > clean -n" to do a dry-run? I think what you are timing is really the
> > "figure out what to clean" step, and not the cleaning itself.
> 
> Yes, that is the problem. A dry run will catch this particular
> performance issue, but maybe we lose some value as a general
> performance test if we only do "half" the clean? Admittedly, we
> already lose some value in the current state, since the copying takes
> more time than the cleaning itself. I could go either way here.

I guess it is a matter of opinion. I think testing only the "find out
what to clean" half separately is actually beneficial, because it helps
us isolate any slowdown. If we want to add a test for the other half, we
can, but I do not actually think it is currently that interesting (it is
just calling unlink() in a loop).

So even leaving the practical matters aside, I do not think it is a bad
thing to split it up. When you add in the fact that it is practically
much easier to test the first half, it seems to me that testing just
that is a good first step.
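To illustrate the point: a minimal sketch (the repo layout and file names
are made up for the example) showing that "git clean -n" only reports what
would be removed, so a timing loop can repeat it without restoring state
between runs:

```shell
#!/bin/sh
# Sketch: "git clean -n" is a dry run, so it can be timed repeatedly
# without any per-run setup to restore the removed files.
set -e

repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"

# create some untracked files for clean to find
for i in 1 2 3; do : >untracked_$i; done

# dry run: prints "Would remove ..." but deletes nothing
git clean -n -d

# the untracked files are still present afterwards,
# so the "find out what to clean" step can be re-timed as often as needed
ls untracked_1 untracked_2 untracked_3
```

This only exercises the "figure out what to clean" half; the actual
unlinking half is skipped entirely, which is exactly the trade-off being
discussed above.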

-Peff