On Mon, Nov 13, 2017 at 8:01 PM, Mike Snitzer wrote:
>
> But feel free to remove the cache for now. Should be as simple as:
> lvconvert --uncache VG/CacheLV
I did a --splitcache yesterday and ran a scrub again, which completed
in 3h, more than the ~2h of the cached version.
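For reference, the two detach commands mentioned in the thread behave differently; a quick sketch, with VG/CacheLV as a placeholder name:

```shell
# Detach the cache but keep the cache-pool LV around; dirty blocks
# are flushed back to the origin LV first:
lvconvert --splitcache VG/CacheLV

# Detach the cache AND delete the cache-pool LV entirely:
lvconvert --uncache VG/CacheLV
```

Both need root and an existing cached LV, so they are shown here only to illustrate the distinction.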
On Tue, Nov 14, 2017 at 12:00 PM, Joe Thornber wrote:
> I'm not sure what's going on here. Would you mind sending me the
> metadata please? Either a cache_dump of it, or a copy of the metadata
> dev?
Ok, I've copied the device to a file and run cache_dump on it:
On Tue, Nov 14, 2017 at 12:00 PM, Joe Thornber wrote:
>
> I'm not sure what's going on here. Would you mind sending me the
> metadata please? Either a cache_dump of it, or a copy of the metadata
> dev?
I'd like to create a cache dump, but I'm not very experienced with
this.
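A rough sketch of one way to get the dump, assuming the cache metadata sub-LV is visible under /dev/mapper (the `_cmeta` name below is illustrative and depends on the actual VG/LV naming; `cache_dump` comes from the thin-provisioning-tools package):

```shell
# Copy the cache metadata sub-LV to a regular file:
dd if=/dev/mapper/VG-CacheLV_cmeta of=cmeta.bin bs=1M

# Convert the binary metadata to XML for inspection or for
# attaching to a mail:
cache_dump cmeta.bin > cmeta.xml
```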
On Fri, Nov 03, 2017 at 07:50:23PM +0100, Stefan Ring wrote:
> It strikes me as odd that the amount read from the spinning disk is
> actually more than what comes out of the combined device in the end.
This suggests dm-cache is trying to promote way too much.
I'll try and reproduce the issue.
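One way to watch promotion behaviour directly is `dmsetup status` on the cache device: per the kernel's device-mapper cache documentation, the status line carries demotion and promotion counters. A minimal sketch that pulls them out (the sample line and its values are illustrative; the field positions assume the documented status format):

```shell
# A sample dm-cache status line (illustrative values); a real one
# comes from: dmsetup status VG-CacheLV
status='0 1953125 cache 8 116/5000 512 1024/409600 3456 7890 123 456 10 2000 5 ...'

# Per the documented format, field 12 is the demotion count and
# field 13 the promotion count.
echo "$status" | awk '{print "demotions:", $12, "promotions:", $13}'
# → demotions: 10 promotions: 2000
```

Sampling these counters before and after a scrub would show whether promotions really dominate the extra reads from the spinning disk.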
Having just upgraded from a 4.11 kernel to a 4.13 one, I see a
significantly higher scrub time for a ZFS on Linux (=ZoL) pool that
lives on a dm-cache device consisting of an 800 GB partition on one
spinning 1TB disk and one partition on an SSD (something between 100
and 200 GB). ZFS scrubbing