2015-04-23 17:42 GMT+02:00, Marc Cousin wrote:
> On 20/04/2015 11:51, Marc Cousin wrote:
>> On 31/03/2015 19:05, David Sterba wrote:
>>> On Mon, Mar 30, 2015 at 05:09:52PM +0200, Marc Cousin wrote:
>>>> [...]

On 20/04/2015 11:51, Marc Cousin wrote:
> On 31/03/2015 19:05, David Sterba wrote:
>> On Mon, Mar 30, 2015 at 05:09:52PM +0200, Marc Cousin wrote:
>>> [...]

On 31/03/2015 19:05, David Sterba wrote:
> On Mon, Mar 30, 2015 at 05:09:52PM +0200, Marc Cousin wrote:
>>> So it would be good to sample the active threads and see where it's
>>> spending the time. It could be somewhere in the rb-tree representing
>>> extents [...]
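
(As an aside, a minimal sketch of that sampling, assuming a stock kernel
with /proc/<pid>/stack available, perf installed, and "btrfs-cleaner" as
the name of the background deletion thread:)

  # grab the cleaner thread's kernel stack a few times
  for i in $(seq 10); do cat /proc/$(pgrep btrfs-cleaner)/stack; sleep 1; done

  # or profile system-wide and look for hot btrfs symbols
  perf top -g

  # or dump all blocked (D-state) tasks to the kernel log
  echo w > /proc/sysrq-trigger; dmesg | tail -n 60
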
On 30/03/2015 16:25, David Sterba wrote:
> On Wed, Mar 25, 2015 at 11:55:36AM +0100, Marc Cousin wrote:
>> On 25/03/2015 02:19, David Sterba wrote:
>>> Snapper might add to that if you have
>>>
>>> EMPTY_PRE_POST_CLEANUP="yes"
>>> [...]
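
(For context, a sketch of where that setting lives; the config name
"root" is just an example:)

  # /etc/snapper/configs/root
  # remove pre/post snapshot pairs that turned out to contain no changes
  EMPTY_PRE_POST_CLEANUP="yes"

  # the deletions are then issued by snapper's cleanup run:
  snapper -c root cleanup empty-pre-post
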
On 25/03/2015 02:19, David Sterba wrote:
>
> The snapshots get cleaned in the background, which usually touches lots
> of data (depending on the "age" of the extents, IOW the level of sharing
> among the live and deleted snapshots).
>
> The slowdown is caused by contention on the metadata (locks) [...]
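
(To make the background work visible, a rough illustration; the mount
point and snapshot path are placeholders:)

  btrfs subvolume delete /mnt/vol/.snapshots/42/snapshot  # returns almost immediately
  ps -o pid,stat,comm -C btrfs-cleaner                    # kernel thread doing the real work
  iostat -x 1 sdb sdc sdd                                 # cleanup I/O on the member devices
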
On 22/03/2015 09:11, Marc Cousin wrote:
> Hi,
>
> I've noticed this problem for a while (I started to use snapper a while
> ago): while destroying snapshots, it's almost impossible to do IO on the
> volume. There is almost no IO active on this volume (it is made of sdb,
> sdc and sdd):
>
> Device:  rrqm/s  wrqm/s  r/s  w/s  rMB/s [...]
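
(A way to quantify the stall, sketched with made-up paths and sizes; the
rMB/s column above suggests iostat was run with -xm:)

  # kick off a snapshot deletion, then time a direct read on the same volume
  btrfs subvolume delete /mnt/vol/.snapshots/41/snapshot
  time dd if=/mnt/vol/testfile of=/dev/null bs=1M count=64 iflag=direct

  # per-device utilization for the three members, in MB/s
  iostat -xm 1 sdb sdc sdd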