Re: [linux-lvm] LVM and RO device/partition(s)

2023-03-22 Thread Zdenek Kabelac

On 20. 03. 23 at 17:37, lacsaP Patatetom wrote:

hi,

I'm getting back to you with the memo I mentioned: 
https://github.com/patatetom/lvm-on-readonly-block-device 

I hope it will help you better understand this problem of the disk 
being altered.


as I mentioned, LVM should normally/theoretically not touch the disk as long 
as it is read-only, but what bothers me the most is the fact that I can't 
"catch up" by patching the new 6.1.15 kernel as I did before.


regards, lacsaP.

On Mon, Mar 20, 2023 at 15:15, lacsaP Patatetom 

Hi

So I'm possibly finally starting to understand your problem here.

You are using your own patched kernel, in which you reverted Linux kernel 
commit a32e236eb93e62a0f692e79b7c3c9636689559b9, likely without 
understanding the consequences.

With kernel 6.x, commit bdb7d420c6f6d2618d4c907cd7742c3195c425e2 changed 
bio_check_ro() to return void, so your revert patch is no longer usable 
as written.


From your GitHub report it seems you are creating a 'raid' LV across 3 sdb drives.

With a stock kernel, it happens that 'dm' devices are allowed to 
bypass any 'read-only' protection set on the underlying device.


So when you create a raid LV on loop0 & loop1, deactivate it, then make 
loop0 & loop1 read-only and activate the raid LV again, you can still call 
'mkfs' and it will work normally.
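The sequence above can be sketched as a shell session. This is a dry-run sketch that only prints the commands (a real run needs root); the device, VG, and LV names are illustrative assumptions, not taken from the original report:

```shell
#!/bin/sh
# Dry-run sketch of the reproduction described above: prints the commands
# instead of executing them. Names (vg, rlv, loop0/loop1) are illustrative.
plan() {
  cat <<'EOF'
losetup /dev/loop0 disk0.img
losetup /dev/loop1 disk1.img
vgcreate vg /dev/loop0 /dev/loop1
lvcreate --type raid1 -m 1 -l 100%FREE -n rlv vg
lvchange -an vg/rlv
blockdev --setro /dev/loop0
blockdev --setro /dev/loop1
lvchange -ay vg/rlv
mkfs.ext4 /dev/vg/rlv
EOF
}
plan
```

The point being made: after the two `blockdev --setro` calls, every block device under the raid LV is read-only, yet activation and `mkfs` still write through it.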


A raid LV consists of an '_rimage' & '_rmeta' LV per leg, where _rmeta is a 
metadata device that is updated on activation of the raid LV.


So since your local revert patch no longer works on 6.x kernels, it is no 
surprise that your 'sdbX' drives are actually being modified, because at the 
moment dm targets are allowed to bypass read-only protection.


Since the reason for the bypass (read-only snapshot activation) was fixed 
5 years ago, we should probably build some better way to restore the 
'read-only' protection, and allow disabling it only when the user requests 
such behavior because of old user-space tooling.


Regards

Zdenek

___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] LVM and RO device/partition(s)

2023-03-22 Thread lacsaP Patatetom
On Wed, Mar 22, 2023 at 15:11, Zdenek Kabelac wrote:
>
> On 20. 03. 23 at 17:37, lacsaP Patatetom wrote:
> > hi,
> >
> > I'm getting back to you with the memo I mentioned:
> > https://github.com/patatetom/lvm-on-readonly-block-device
> > 
> > I hope it will help you better understand this problem of the disk
> > being altered.
> >
> > as I mentioned, LVM should normally/theoretically not touch the disk as long
> > as it is read-only, but what bothers me the most is the fact that I can't
> > "catch up" by patching the new 6.1.15 kernel as I did before.
> >
> > regards, lacsaP.
> >
> > On Mon, Mar 20, 2023 at 15:15, lacsaP Patatetom 
> Hi
>
> So I'm possibly finally starting to understand your problem here.

:-)

>
> You are using your own patched kernel, in which you reverted Linux kernel
> commit a32e236eb93e62a0f692e79b7c3c9636689559b9, likely without
> understanding the consequences.

indeed, I did not dig into the inner workings of LVM and the Linux kernel;
I have neither the capacity nor the time :-(
my goal was/is to prevent any modification of media configured read-only,
and this little `return true;` was doing the job for LVM :-)

>
> With kernel 6.x, commit bdb7d420c6f6d2618d4c907cd7742c3195c425e2 changed
> bio_check_ro() to return void, so your revert patch is no longer usable
> as written.

yes.

>
>
> From your GitHub report it seems you are creating a 'raid' LV across 3 sdb
> drives.

I don't do anything special at this level; it's my system (Arch Linux)
that takes care of it.
the "only" thing I introduce is a udev rule which switches
devices/partitions to read-only as soon as they appear.
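For reference, such a rule might look like the following. This is an illustrative sketch, not the actual rule from the report; the match keys and the rules-file name are assumptions, and real use would need to exclude devices you want writable:

```
# /etc/udev/rules.d/01-readonly.rules (illustrative sketch)
# Mark every newly appearing block device read-only (BLKROSET via blockdev);
# %k expands to the kernel device name, e.g. sdb1.
ACTION=="add", SUBSYSTEM=="block", RUN+="/usr/bin/blockdev --setro /dev/%k"
```

Note that this sets the read-only flag at the block layer, which is exactly the protection the dm bypass discussed above steps around.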

>
> With a stock kernel, it happens that 'dm' devices are allowed to
> bypass any 'read-only' protection set on the underlying device.
>
> So when you create a raid LV on loop0 & loop1, deactivate it, then make
> loop0 & loop1 read-only and activate the raid LV again, you can still call
> 'mkfs' and it will work normally.
>
> A raid LV consists of an '_rimage' & '_rmeta' LV per leg, where _rmeta is a
> metadata device that is updated on activation of the raid LV.

I do think that only the metadata is affected by these modifications,
but it is stored somewhere on a read-only disk and theoretically
should not be modified.

>
> So since your local revert patch no longer works on 6.x kernels, it is no
> surprise that your 'sdbX' drives are actually being modified, because at the
> moment dm targets are allowed to bypass read-only protection.
>
> Since the reason for the bypass (read-only snapshot activation) was fixed
> 5 years ago, we should probably build some better way to restore the
> 'read-only' protection, and allow disabling it only when the user requests
> such behavior because of old user-space tooling.

that would be cool ;-)
I'm currently working around my problem with a specific configuration
of lvm.conf and the use of /dev/nbd in snapshot mode, which allows
apparent changes without any real change.
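For what it's worth, the nbd half of that workaround could look something like this. This is a sketch under the assumption that qemu-nbd provides the snapshot mode; the image path and nbd device are illustrative, and the command is only printed here since a real run needs root and the nbd kernel module. qemu-nbd's `--snapshot` option keeps writes in a temporary snapshot, so the backing image itself is never modified:

```shell
#!/bin/sh
# Sketch: expose a disk image through nbd in snapshot mode, so writes
# land in a temporary snapshot and the backing image stays untouched.
# Paths/device are illustrative assumptions; command is printed, not run.
build_cmd() {
  # $1 = nbd device, $2 = backing disk image
  printf 'qemu-nbd --snapshot --connect=%s %s\n' "$1" "$2"
}
build_cmd /dev/nbd0 /path/to/disk.img
```

Any filesystem or LVM metadata updates then hit the throwaway snapshot, which matches the "apparent change without real change" behavior described above.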

>
> Regards
>
> Zdenek
>

thank you for these exchanges and your work.
regards, lacsaP.
