Re: [linux-lvm] how to convert a disk containing a snapshot to a snapshot lv?

2021-12-27 Thread Tomas Dalebjörk
Hi
Yes, it is an incremental backup based on the COW device.
No worries about the backup; that works fine, and now just creating an LV
snapshot on it works too.
I sent you an example before,
e.g.
# extend vg with new bu disk
vgextend xxx newdisk
# create lv structure on disk
lvcreate -s -L wholedisksize -n s1 xxx/lv newdisk
# merge has to be started offline?
lvchange -a n xxx/lv
# start merge
lvconvert --merge -b xxx/s1
# online the lv
lvchange -a y xxx/s1

The backup is provisioned through the disk, which makes the data available
immediately.

But I guess I can skip some steps?

Restoring data is just a matter of minutes, regardless of size; or rather,
making the data available anywhere in just a few minutes, regardless of size.

It also works with writes, without destroying the backup.

Is there a way to get a signal when a merge has completed?
Or do I have to implement a monitor with
dmsetup status
or lvs
to check when the merge has completed?
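
Something like this polling loop is what I mean (names taken from the example
above; as far as I know LVM removes the snapshot LV once the merge finishes,
so its disappearance is the completion signal):

```shell
#!/bin/sh
# Minimal polling monitor, assuming VG "xxx" and snapshot "s1".
# LVM removes the snapshot LV when its merge into the origin
# completes, so the LV disappearing signals completion.
# While merging, snap_percent roughly reports the COW data
# still in use (i.e. not yet merged).
VG=xxx
SNAP=s1
while lvs "$VG/$SNAP" >/dev/null 2>&1; do
    pct=$(lvs --noheadings -o snap_percent "$VG/$SNAP" 2>/dev/null | tr -d ' ')
    echo "merge in progress (${pct:-?}% of COW in use)"
    sleep 5
done
echo "merge of $VG/$SNAP complete"
```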

Regards, Tomas

Sent from my iPhone

> On 28 Dec 2021, at 06:32, Stuart D Gathman  wrote:
> 
> 
>> 
>> If you want to give it a try, just create a snapshot on a specific device
>> and change all the blocks on the origin; there you are, you now have a COW
>> device containing all the data needed.
>> How do I move this snapshot device to another server and reattach it to an
>> empty LV as a snapshot?
>> The lvconvert -s command requires an argument naming an existing snapshot
>> volume.
>> But there is no snapshot on the new server, so it can't re-attach the
>> volume.
>> So what procedures should be invoked to create just the detached references
>> in LVM, so that the lvconvert -s command can work?
> 
> Just copy the snapshot to another server, by whatever method you would
> use to copy the COW and Data volumes (I prefer partclone for supported
> filesystems).  No need for lvconvert.  You are trying WAY WAY too hard.
> Are you by any chance trying to create an incremental backup system
> based on lvm snapshot COW?  If so, say so.
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://listman.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 



Re: [linux-lvm] how to convert a disk containing a snapshot to a snapshot lv?

2021-12-27 Thread Stuart D Gathman

> If you want to give it a try, just create a snapshot on a specific device
> and change all the blocks on the origin; there you are, you now have a COW
> device containing all the data needed.
> How do I move this snapshot device to another server and reattach it to an
> empty LV as a snapshot?
> The lvconvert -s command requires an argument naming an existing snapshot
> volume.
> But there is no snapshot on the new server, so it can't re-attach the
> volume.
> So what procedures should be invoked to create just the detached references
> in LVM, so that the lvconvert -s command can work?


Just copy the snapshot to another server, by whatever method you would
use to copy the COW and Data volumes (I prefer partclone for supported
filesystems).  No need for lvconvert.  You are trying WAY WAY too hard.
Are you by any chance trying to create an incremental backup system
based on lvm snapshot COW?  If so, say so.
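
For example, a plain-copy transfer (names and size illustrative; dd shown for
generality, though partclone would copy only allocated blocks) might look like:

```shell
# Copy the point-in-time view presented by snapshot s1 in VG "xxx"
# to a fresh LV on another server. All names and the size are
# illustrative. Reading /dev/xxx/s1 yields the origin's content as
# of snapshot time in plain block format, so no lvconvert is
# needed on arrival.
ssh newserver "lvcreate -L 100G -n restored vgY"
dd if=/dev/xxx/s1 bs=512K status=progress | \
    ssh newserver "dd of=/dev/vgY/restored bs=512K"
```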




Re: [linux-lvm] LVM: Metadata on ... has wrong VG name

2021-12-27 Thread linux-lvm


...>On Mon, Dec 20, 2021 at 10:52 teigland @redhat.com wrote:
...>
...>On Sun, Dec 19, 2021 at 07:14:08AM -0500, linux-lvm@ harrier.ch wrote:
...>>   >sudo pvscan
...>>Metadata on /dev/sdd2 at 12800 has wrong VG name "fedora32 {
...>>id = "gJgZM9-n2Rd-V7us-RWae-cpT6-H84E-g7dAsk"
...>>seqno = 9
...>>format = "lvm2"
...>>status = ["RESIZEABLE", "READ", "WRITE"]
...>>flag" expected fedora.
...>>WARNING: Reading VG fedora on /dev/sdd2 failed.
...>>WARNING: PV /dev/sdd2 is marked in use but no VG was found using it.
...>>WARNING: PV /dev/sdd2 might need repairing.
...>
...>It's not clear if there's a problem with the on-disk metadata, or if the
...>pvscan is confused and mixing info from the duplicated disks.  You don't
...>want to "fix" metadata if it's just a reporting error, so use filters to
...>report each device in isolation:
...>
...>pvs --config 'devices/filter=["a|/dev/sdd2|", "r|.*|"]' /dev/sdd2
...>pvs --config 'devices/filter=["a|/dev/sda2|", "r|.*|"]' /dev/sda2

The output contains no obvious errors.

...>If each disk is displayed correctly without error, then the on-disk
...>metadata is fine (please create a bz, or send the output of pvs -v so
...>we can fix it.)  

Enclosed below


Thanks for your prompt, detailed message, and apologies for the delay in 
responding. After a bit of hardware juggling, I realised that I could access 
the renamed logical volume, but not the clone and the renamed original 
concurrently. Once it became obvious that I could transfer the needed 
data through an intermediary, I was able to complete my upgrade successfully, 
and the issue became academic for me.

Feel free to request more information if needed.

j







[liveuser@localhost-live ~]$ ls -l /etc/system-release
lrwxrwxrwx. 1 root root 14 Apr 12  2021 /etc/system-release -> fedora-release

[liveuser@localhost-live ~]$ cat /etc/system-release
Fedora release 34 (Thirty Four)

[liveuser@localhost-live ~]$ uname -a
Linux localhost-live 5.11.12-300.fc34.x86_64 #1 SMP Wed Apr 7 16:31:13 UTC 2021 
x86_64 x86_64 x86_64 GNU/Linux

[liveuser@localhost-live ~]$ ldconfig -v | grep lvm
liblvm2cmd.so.2.03 -> liblvm2cmd.so.2.03
libbd_lvm.so.2 -> libbd_lvm.so.2.0.0
libdevmapper-event-lvm2.so.2.03 -> libdevmapper-event-lvm2.so.2.03





[liveuser@localhost-live ~]$ sudo pvs --config 'devices/filter=["a|/dev/sda2|", 
"r|.*|"]' /dev/sda2
  PV VG Fmt  Attr PSizePFree
  /dev/sda2  fedora lvm2 a--  <931.02g0 


[liveuser@localhost-live ~]$ sudo pvs --config 'devices/filter=["a|/dev/sdd2|", 
"r|.*|"]' /dev/sdd2
  PV VG   Fmt  Attr PSizePFree
  /dev/sdd2  fedora32 lvm2 a--  <931.02g0 


[liveuser@localhost-live ~]$ sudo pvscan
  Metadata on /dev/sdd2 at 12800 has wrong VG name "fedora32 {
id = "gJgZM9-n2Rd-V7us-RWae-cpT6-H84E-g7dAsk"
seqno = 9
format = "lvm2"
status = ["RESIZEABLE", "READ", "WRITE"]
flag" expected fedora.
  WARNING: Reading VG fedora on /dev/sdd2 failed.
  PV /dev/sda2   VG fedora  lvm2 [<931.02 GiB / 0free]
  WARNING: PV /dev/sdd2 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdd2 might need repairing.
  PV /dev/sdd2  lvm2 [465.27 GiB]
  Total: 2 [1.36 TiB] / in use: 1 [<931.02 GiB] / in no VG: 1 [465.27 GiB]




[liveuser@localhost-live ~]$ sudo pvs -v
08:36:21.676022 pvs[3546] lvmcmdline.c:2999  Parsing: pvs -v
08:36:21.676085 pvs[3546] lvmcmdline.c:1990  Recognised command pvs_general (id 
122 / enum 103).
08:36:21.676299 pvs[3546] filters/filter-sysfs.c:331  Sysfs filter initialised.
08:36:21.676332 pvs[3546] filters/filter-internal.c:82  Internal filter 
initialised.
08:36:21.676354 pvs[3546] filters/filter-type.c:61  LVM type filter initialised.
08:36:21.676372 pvs[3546] filters/filter-usable.c:209  Usable device filter 
initialised (scan_lvs 0).
08:36:21.676470 pvs[3546] filters/filter-mpath.c:402  mpath filter initialised.
08:36:21.676627 pvs[3546] filters/filter-partitioned.c:78  Partitioned filter 
initialised.
08:36:21.676787 pvs[3546] filters/filter-signature.c:95  signature filter 
initialised.
08:36:21.676815 pvs[3546] filters/filter-md.c:157  MD filter initialised.
08:36:21.676840 pvs[3546] filters/filter-composite.c:103  Composite filter 
initialised.
08:36:21.676896 pvs[3546] filters/filter-persistent.c:196  Persistent filter 
initialised.
08:36:21.676919 pvs[3546] device_mapper/libdm-config.c:986  devices/hints not 
found in config: defaulting to all
08:36:21.676941 pvs[3546] device_mapper/libdm-config.c:1085  
metadata/record_lvs_history not found in config: defaulting to 0
08:36:21.676962 pvs[3546] lvmcmdline.c:3056  DEGRADED MODE. Incomplete RAID LVs 
will be processed.
08:36:21.676998 pvs[3546] lvmcmdline.c:3062  Processing command: pvs -v
08:36:21.677023 pvs[3546] lvmcmdline.c:3063  Command pid: 3546
08:36:21.677044 pvs[3546] lvmcmdline.c:3064  System ID: 
08:36:21.677070 pvs[3546] lvmcmdline.c:3067  O_DIRECT will be used

Re: [linux-lvm] how to convert a disk containing a snapshot to a snapshot lv?

2021-12-27 Thread Tomas Dalebjörk
Thanks for explaining all those details about how a snapshot is formatted on
the COW device.
I already know that part.

I am more interested in how the disk containing the COW data can be merged
back into an LV.
The second part only mentioned that it is possible, but not which steps are
involved.

As documented in the manual, to split a snapshot from its origin (in our
words, detach it), one can use:
lvconvert --splitsnapshot vg/s1
Right?

To reverse that process, according to the manual, one can use:
lvconvert -s vg/s1
Right?

But as I mentioned before, this requires that vg/s1 exists as an object
in the LVM metadata.
What if you are on a new server that does not have vg/s1?
How do you create that object, or whatever you like to call it, on the server?
The only way I can see is to use:
vgextend
lvcreate
lvconvert --splitsnapshot

And now reattach it, so that the actual merge can happen.
The object should exist now, so that the command: lvconvert -s vg/s1 can
work
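
Written out, the sequence I have in mind would be roughly this (all names
hypothetical, and whether lvconvert accepts a COW LV prepared this way is
exactly the open question here):

```shell
# Hypothetical reattach-and-merge sequence on the new server;
# all names are illustrative.
vgextend vg /dev/newdisk    # bring the transported backup disk into the VG
lvcreate -L 100G -n lv vg   # placeholder origin LV of matching size
# The unresolved step: vg/s1 must already exist as an LV covering the
# COW data on /dev/newdisk before it can be reattached.
# The full form of "lvconvert -s" names both the origin and the COW LV:
lvconvert --snapshot vg/lv vg/s1
# Then merge the snapshot back so the origin becomes the restored data:
lvconvert --merge vg/s1
```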

Or how can the object vg/s1 be created so that it can be referenced by
the lvconvert command?
The disk is formatted as a COW device, and contains all of the data.
So how hard can it be to just reattach that volume to an empty or existing
LV on the server?

If it works on the same server, why can't it work on any other new server, as
the COW device contains ALL the data needed (we make sure it contains all
the data)?

If you want to give it a try, just create a snapshot on a specific device
and change all the blocks on the origin; there you are, you now have a COW
device containing all the data needed.
How do I move this snapshot device to another server and reattach it to an
empty LV as a snapshot?
The lvconvert -s command requires an argument naming an existing snapshot
volume.
But there is no snapshot on the new server, so it can't re-attach the
volume.
So what procedures should be invoked to create just the detached references
in LVM, so that the lvconvert -s command can work?

Regards Tomas

Den tis 21 dec. 2021 kl 16:30 skrev Zdenek Kabelac :

> Dne 21. 12. 21 v 15:44 Tomas Dalebjörk napsal(a):
> > hi
> >
> > I think I didn't explain this clearly enough.
> > All the LVM data is present in the snapshot that I provision from our
> backup system
> > I can guarantee that!
> >
> > If I just mount that snapshot from our backup system, it works perfectly
> well
> >
> > so we don't need the origin volumes in any other way than copying back to them;
> > we just need to reanimate it as a COW volume,
> > noting that all data has been changed.
> > The COW just references the origin location, so no problem there.
> > All our data is in the cow volume, not just the changes
> >
> > just to compare
> > if you change just 1 byte in every chunksize of the origin volume, then
> the snapshot will contain all data, plus some metadata etc.
> > That is what I talk about here.
> > So how do I reattach this volume to a new server?
> >
> > as the only acceptable argument for lvconvert is vg/s1 ?
> >
> > That assumes that vg/s1 is present
> > so how to make it present?
>
> Hi
>
> As said in my previous post - the 'format' of data stored on COW storage
> (which is the 'real' meaning of a snapshot LV) does NOT in any way resemble
> the 'normal' LV.
>
> So the COW LV can really ONLY be used together with the 'snapshot' target.
>
> The easiest way how to 'copy' this snapshot to normal LV is like this:
>
>
> lvcreate -L size  -n newLV  vg
>
> dd if=/dev/vg/snapshotLV  of=/dev/vg/newLV  bs=512K
>
>
> (so with 'DD' you copy data in 'correct' format)
>
> You cannot convert a snapshot LV to a 'normal' LV in any other way than to
> merge this snapshot LV into your origin LV (so the origin is gone)
> (lvconvert --merge)
>
> You can also 'split' a snapshot COW LV and 'reattach' such a snapshot to
> another LV
> - but this requires rather good knowledge of the whole functioning of this
> snapshotting - so you know what you can do and what you can expect. But I'd
> likely recommend 'dd'.
> You cannot use a 'split' COW LV for e.g. a filesystem - as it contains
> 'mixed' snapshot metadata and snapshot blocks.
>
> The old snapshot purpose was to take a 'time consistent' snapshot of an LV,
> which you can then use for e.g. taking a backup.
>
> Regards
>
> Zdenek
>