On Tue, 22 Oct 2019, Gionatan Danti wrote:
The main thing that somewhat scares me is that (if things have not changed)
a thin volume uses a single root btree node: losing it means losing *all* thin
volumes of a specific thin pool. Coupled with the fact that metadata dumps are
not as handy as with the
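The metadata-dump concern above can be made concrete. A hedged sketch, assuming the thin-provisioning-tools package is installed and a pool named vg/pool (names are illustrative, not from the thread); in practice the pool's hidden metadata LV must first be made accessible, e.g. via a metadata swap with lvconvert:

```shell
# Pool must be inactive before touching its metadata
lvchange -an vg/pool

# Verify metadata consistency (device path is illustrative)
thin_check /dev/mapper/vg-pool_tmeta

# Dump the metadata btree to XML as an offline backup
thin_dump -o pool-metadata.xml /dev/mapper/vg-pool_tmeta

# A damaged pool can later be rebuilt from the XML:
#   thin_restore -i pool-metadata.xml -o <new metadata device>
```

This is exactly the workflow the writer finds "not as handy" as with classic snapshots: it requires the pool to be offline and a separate toolchain.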
Hi,
On 22-10-2019 18:15 Stuart D. Gathman wrote:
"Old" snapshots are exactly as efficient as thin when there is exactly
one. They only get inefficient with multiple snapshots. On the other
hand, thin volumes are as inefficient as an old LV with one snapshot.
An old LV is as efficient,
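The efficiency comparison above can be illustrated with the two creation paths. A minimal sketch, assuming a VG named vg with free space (all names are hypothetical):

```shell
# "Old" (thick) snapshot: a fixed COW area is pre-allocated, and the
# first write to each origin block copies the old data into it.
lvcreate -L 1G -s -n oldsnap /dev/vg/origin

# Thin snapshot: origin and snapshot share blocks in a pool; writes
# allocate new blocks instead of copying, so many snapshots stay cheap.
lvcreate -L 10G -T vg/pool           # create the thin pool
lvcreate -V 5G  -T vg/pool -n thinvol
lvcreate -s -n thinsnap vg/thinvol   # snapshot inside the pool
```

With exactly one snapshot the two schemes do comparable work per first write, which is the point being made above; the thin scheme only pulls ahead when snapshots multiply.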
On Tue, 22 Oct 2019, Zdenek Kabelac wrote:
On 22 Oct 2019 at 17:29, Dalebjörk, Tomas wrote:
But it would be better if the COW device could be recreated in a faster
way, given that all blocks are present on an external device, so that
the LV volume can be restored much more quickly using
Thanks for the feedback,
I know that thick LV snapshots are outdated, and that one should use
thin LV snapshots.
But my understanding is that the dm-cow and dm-origin devices are still
present and available in thin too?
Example of a scenario:
1. Create a snapshot of LV testlv with the name
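The scenario is cut off in the mail, but its first step can be sketched. A hedged example, assuming a VG named vg and a snapshot name testlv_snap (the actual name is truncated in the original):

```shell
# 1. Create a (thick) snapshot of LV testlv
lvcreate -L 2G -s -n testlv_snap /dev/vg/testlv

# The snapshot's COW usage can be inspected through device-mapper
dmsetup status vg-testlv_snap

# The origin can later be rolled back to the snapshot's state;
# the merge happens on the next activation of the origin.
lvconvert --merge vg/testlv_snap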
On 22 Oct 2019 at 12:47, Dalebjörk, Tomas wrote:
Hi
When you create a snapshot of a logical volume, a new virtual dm device
is created with the content of the changes from the origin.
This COW device can then be used to read changed contents etc.
In case of an incident, this COW device can be used to read back the
changed content
Hello List & David,
This patch addresses the issue reported in the legacy mail:
[linux-lvm] pvresize will cause a meta-data corruption with error message
"Error writing device at 4096 length 512"
I had sent it to our customer, and the code ran as expected. I think this
change is enough to fix the issue.
Thanks
zhm
Hi,
pvmove seems to fail on a VG that is managed by a PCS resource agent with
'exclusive' activation enabled.
The volume group (VG) is created on a shared disk, with '--addtag test'
added.
The relevant content of my lvm.conf is:
# lvmconfig activation/volume_list
volume_list=["@test"]
I am able to create Logical
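The tag-based activation setup described above can be sketched end to end. A minimal example, assuming a shared disk /dev/sdb and a VG name sharedvg (both hypothetical), together with the volume_list shown in the mail:

```shell
# The VG carries the tag that volume_list matches
vgcreate --addtag test sharedvg /dev/sdb

# lvm.conf restricts activation to tagged objects:
#   activation {
#       volume_list = [ "@test" ]
#   }

# LVs in the tagged VG may be activated; LVs in untagged VGs are refused
lvcreate -L 1G -n lv1 sharedvg
lvchange -ay sharedvg/lv1
```

Under this configuration, operations that need to activate temporary LVs (such as the pvmove internal mirror) can fail if those LVs do not inherit a matching tag, which may be related to the failure reported here.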