On Mon, May 13, 2019 at 06:05:54PM +0100, Filipe Manana wrote:
> On Mon, May 13, 2019 at 5:57 PM David Sterba <dste...@suse.cz> wrote:
> >
> > On Mon, May 13, 2019 at 05:18:37PM +0100, Filipe Manana wrote:
> > > I would leave it as it is unless users start to complain. Yes, the
> > > test does this on purpose.
> > > Adding such code/state seems weird to me; instead I would change the
> > > rate limit state so that the messages repeat much less
> > > frequently.
> >
> > The difference from the state tracking is that the warning would be
> > printed repeatedly, which I find unnecessary, and based on past user
> > feedback somebody will be asking about it.
> >
> > The rate limiting can also skip a message that may be for a different
> > subvolume, which makes it harder to diagnose problems.
> >
> > The current state is not satisfactory, at least for me, because it
> > hurts testing: the test now runs for about 2 hours, besides the log
> > bloat. The
> 
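To make it concrete, what I mean by state tracking is roughly a
one-shot bit on the root, so the warning fires once per subvolume.
This is a sketch only, not an actual patch; BTRFS_ROOT_WARNED_FOO and
the message text are placeholders:

	/*
	 * Sketch: print the warning the first time the condition is
	 * seen on this subvolume and stay silent afterwards.
	 */
	if (!test_and_set_bit(BTRFS_ROOT_WARNED_FOO, &root->state))
		btrfs_warn(root->fs_info,
			   "condition detected on subvol %llu",
			   root->root_key.objectid);
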
> You mean the test case for fstests (btrfs/187) takes 2 hours for you?

This is on a VM with file-backed devices that I use for initial tests
of patches before they go to other branches. It's a slow setup, but it
helps me identify problems early, as I can run a few in parallel. I'd
still like to keep the total run time below, say, 5 hours (currently
it's 4). I could skip some tests, but I'd rather not due to coverage;
if there's no other way, I'll have to.

> For me it takes under 8 minutes for an unpatched btrfs, while a
> patched btrfs takes somewhere between 1 minute and 3 minutes. This is
> on VMs, with a debug kernel, average/cheap host hardware, etc.

On another host, a VM with physical disks, it's closer to that time:
it took about 13 minutes, which is acceptable.
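
For comparison, tuning the rate limit state as you suggest would look
roughly like this (the interval and burst values are just examples):

	/* Sketch: allow at most one such message per minute. */
	static DEFINE_RATELIMIT_STATE(warn_rs, 60 * HZ, 1);

	if (__ratelimit(&warn_rs))
		btrfs_warn(fs_info,
			   "condition detected on subvol %llu", subvol);

Even then it can still suppress a message for a different subvolume,
which is why I prefer the per-root state bit above.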
