Re: [linux-lvm] Issue after upgrading the LVM2 package from 2.02.176 to 2.02.180

2020-09-15 Thread Roger Heflin
#1:
Device /dev/sda3 excluded by a filter.)
Failed to execute command: pvcreate -ffy /dev/sda3
ec=0

The "excluded by a filter" message is likely the issue. I think there was a bug where
it allowed that pvcreate to work when it should have been blocked
because of the filter.  It should not allow a pvcreate against
something excluded by a filter.

#2: Read-only locking type set. Write locks are prohibited.
I am going to guess that either / is not mounted read-write, or the
directory needed to create the locks (usually /var/run/lvm) is not
mounted read-write.
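
For what it's worth, a quick way to check both points on the affected box might
look like the sketch below (assuming a reasonably stock lvm.conf; /dev/sda3 is
taken from your log, everything else is generic):

  # show the filter and locking settings the tools actually see
  lvmconfig devices/filter devices/global_filter global/locking_type global/locking_dir

  # re-run a read-only scan verbosely to see which filter rejects the device
  pvs -vvv /dev/sda3 2>&1 | grep -i filter

  # then confirm that / and the locking_dir printed above (e.g. /var/run/lvm)
  # are on read-write filesystems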

On Tue, Sep 15, 2020 at 1:42 AM KrishnaMurali Chennuboina
 wrote:
>
> Hi Roger,
>
> I have tried this with the older LVM package (.176) and this issue was not 
> seen. The issue was seen with the .180 version every time.
> # executing command: vgchange -ay
> (status, output): (0,   WARNING: Failed to connect to lvmetad. Falling back 
> to device scanning.)
>   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
> # executing command: pvcreate -ffy /dev/sda3
> (status, output): (5,   WARNING: Failed to connect to lvmetad. Falling back 
> to device scanning.
>   Error reading device /dev/sda3 at 0 length 4096.
>   Error reading device /dev/ram0 at 0 length 4096.
>   Error reading device /dev/loop0 at 0 length 4096.
>   Error reading device /dev/sda at 0 length 512.
>   Error reading device /dev/sda at 0 length 4096.
>   Error reading device /dev/ram1 at 0 length 4096.
>   Error reading device /dev/sda1 at 0 length 4096.
>   Error reading device /dev/ram2 at 0 length 4096.
>   Error reading device /dev/sda2 at 0 length 4096.
>   Error reading device /dev/ram3 at 0 length 4096.
>   Error reading device /dev/ram4 at 0 length 4096.
>   Error reading device /dev/ram5 at 0 length 4096.
>   Error reading device /dev/ram6 at 0 length 4096.
>   Error reading device /dev/ram7 at 0 length 4096.
>   Error reading device /dev/ram8 at 0 length 4096.
>   Error reading device /dev/ram9 at 0 length 4096.
>   Error reading device /dev/ram10 at 0 length 4096.
>   Error reading device /dev/ram11 at 0 length 4096.
>   Error reading device /dev/ram12 at 0 length 4096.
>   Error reading device /dev/ram13 at 0 length 4096.
>   Error reading device /dev/ram14 at 0 length 4096.
>   Error reading device /dev/ram15 at 0 length 4096.
>   Device /dev/sda3 excluded by a filter.)
> Failed to execute command: pvcreate -ffy /dev/sda3
> ec=0
>
> I have tried different options of pvcreate but it didn't help much. After 
> the system halted with the above error, I tried executing the pvs command 
> but got the error below.
> bash-4.4# pvs
>   Error reading device /dev/ram0 at 0 length 4096.
>   Error reading device /dev/loop0 at 0 length 4096.
>   Error reading device /dev/sda at 0 length 512.
>   Error reading device /dev/sda at 0 length 4096.
>   Error reading device /dev/ram1 at 0 length 4096.
>   Error reading device /dev/sda1 at 0 length 4096.
>   Error reading device /dev/ram2 at 0 length 4096.
>   Error reading device /dev/sda2 at 0 length 4096.
>   Error reading device /dev/ram3 at 0 length 4096.
>   Error reading device /dev/sda3 at 0 length 4096.
>   Error reading device /dev/ram4 at 0 length 4096.
>   Error reading device /dev/sda4 at 0 length 4096.
>   Error reading device /dev/ram5 at 0 length 4096.
>   Error reading device /dev/ram6 at 0 length 4096.
>   Error reading device /dev/ram7 at 0 length 4096.
>   Error reading device /dev/ram8 at 0 length 4096.
>   Error reading device /dev/ram9 at 0 length 4096.
>   Error reading device /dev/ram10 at 0 length 4096.
>   Error reading device /dev/ram11 at 0 length 4096.
>   Error reading device /dev/ram12 at 0 length 4096.
>   Error reading device /dev/ram13 at 0 length 4096.
>   Error reading device /dev/ram14 at 0 length 4096.
>   Error reading device /dev/ram15 at 0 length 4096.
>   Error reading device /dev/sdb at 0 length 512.
>   Error reading device /dev/sdb at 0 length 4096.
>   Error reading device /dev/sdb1 at 0 length 4096.
>   Error reading device /dev/sdb2 at 0 length 4096.
>   Error reading device /dev/sdb3 at 0 length 4096.
>   Read-only locking type set. Write locks are prohibited.
>   Recovery of standalone physical volumes failed.
>   Cannot process standalone physical volumes
> bash-4.4#
>
> The complete log is attached to the initial mail.
>
> Thanks.
>
> On Mon, 14 Sep 2020 at 20:29, Roger Heflin  wrote:
>>
>> In general I would suggest fully disabling lvmetad from the config
>> files and from being started up.
>>
>> Issues around it not answering (like above), or answering but somehow
>> having stale/wrong info, have burned me too many times to trust it.  It
>> may be an lvmetad bug, or udevd weirdness.
>>
>> The only significant improvement it brings is reduced lvm command time
>> on installs with large numbers of devices, but given that the info
>> has been wrong often enough, that is not worth the risk.
>>
>> On Mon, Sep 14, 2020 at 2:25 AM KrishnaMurali Chennuboina
>>  wrote:
>> >
>> > Hi Team,
>> >
>> > Whil

Re: [linux-lvm] vgcfgrestore + pvcreate using liblockdev api in C

2020-09-15 Thread Vojtěch Trefný
Hi, libblockdev is not part of the LVM project so it's usually better to
ask on our GitHub: https://github.com/storaged-project/libblockdev

Example for extra arguments is available in the documentation:
http://storaged.org/libblockdev/libblockdev-Utils.html#bd-extra-arg-new
(the example is for the filesystem plugin, but it works the same for LVM
plugin).

We don't have full coverage of all LVM commands, so there is no
vgcfgrestore function in the LVM plugin. We are currently working on a new
major release, so this would be a good time for adding new functions
and/or adjusting the API, so please open an issue on GitHub for anything you'd
like to see added to libblockdev.
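
For reference, the plain CLI sequence being described in steps 1-3 of the quoted
mail below is roughly the following, whether it is driven from C or from a shell
(the UUID, VG name, backup file and device are placeholders, not taken from any
real setup):

  # 1. write a PV label carrying a chosen UUID, pointing at an existing metadata backup
  pvcreate --uuid aaaaaa-bbbb-cccc-dddd-eeee-ffff-gggggg \
           --restorefile /etc/lvm/backup/myvg /dev/sdX

  # 2./3. restore the VG metadata (the backup file references the PV by that UUID)
  vgcfgrestore -f /etc/lvm/backup/myvg myvg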

On 9/14/20 7:56 AM, Tomas Dalebjörk wrote:
> hi
> 
> Is there an API available today, or an example of how to perform the OS 
> commands below in C, using a function or API instead of system calls?
> 
> 1. create a unique UUID on the server, and store it on the PV disk
> 
> # pvcreate -u
> 
> 2. create the LVM metadata backup content (as /etc/lvm/backup/...)
> - add the PV UUID generated above, as there is no parameter to save the data to a 
> device
> - add a unique VG
> - add exactly 1 contiguous LV
> 
> 3. save the LVM metadata to the PV disk as the OS command does
> 
> # vgcfgrestore VG
> 
> Can this be done without system() calls, instead using native C 
> calls, either via an API or some other method?
> 
> the example for bd_lvm_pvcreate() in the libblockdev API, for instance, does not say 
> how to add the -u flag,
> and I couldn't find any vgcfgrestore there either?
> 
> besides this, I found it strange that pvcreate -u is needed at all; it would be 
> better to have a parameter directly on the vgcfgrestore command instead
> 
> looking forward to hearing from you soon again 
> 
> regards Tomas
> 
> Sent from my iPhone
> 
> 


Re: [linux-lvm] lvmcache with vdo - inconsistent block size

2020-09-15 Thread Zdenek Kabelac

On 14. 09. 20 at 23:44, Gionatan Danti wrote:

Hi all,
I am testing lvmcache with VDO and I have an issue with device block sizes.

The big & slow VDO device is on top of a 4-disk MD RAID 10 device (itself on 
top of dm-integrity). Over the VDO device I created a thinpool and a thinvol 
[1]. When adding the cache device to the volume group via vgextend, I get an 
error stating "Devices have inconsistent logical block sizes (4096 and 512)." [2]


Now, I know why the error shows up and what it means. However, I don't know how to 
force the cache device to act as a 4k-sector device, and/or whether this is really 
required to cache a VDO device.


My current workaround is to set VDO with --emulate512=enabled, but this can be 
suboptimal and it is not recommended.


Any idea on what I am doing wrong?


Hi

LVM currently does not support mixing devices with different sector sizes within
a single VG, as it brings a lot of trouble and we do not yet have a clear vision
of what to do with all of it.

Also, this combination of provisioned devices is not advised - since you are 
combining 2 kinds of provisioning on top of each other, it can be a big problem
to solve a recovery case.

On the lvm2 side we do not allow using a 'VDO LV' as the backing device for a thin-pool.

So ATM it is up to the user to solve all the possible scenarios that may appear on
such a device stack.
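
For reference, one quick way to see which device in such a stack reports which
logical/physical block size before attempting the vgextend (device names below
are purely illustrative):

  lsblk -o NAME,TYPE,LOG-SEC,PHY-SEC /dev/md0 /dev/mapper/vdo0 /dev/nvme0n1
  blockdev --getss /dev/mapper/vdo0    # logical sector size of a single device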

Zdenek




Re: [linux-lvm] Issue after upgrading the LVM2 package from 2.02.176 to 2.02.180

2020-09-15 Thread Zdenek Kabelac

On 15. 09. 20 at 13:05, Roger Heflin wrote:

#1:
Device /dev/sda3 excluded by a filter.)
Failed to execute command: pvcreate -ffy /dev/sda3
ec=0

The "excluded by a filter" message is likely the issue. I think there was a bug where
it allowed that pvcreate to work when it should have been blocked
because of the filter.  It should not allow a pvcreate against
something excluded by a filter.

#2: Read-only locking type set. Write locks are prohibited.
I am going to guess that either / is not mounted read-write, or the
directory needed to create the locks (usually /var/run/lvm) is not
mounted read-write.

On Tue, Sep 15, 2020 at 1:42 AM KrishnaMurali Chennuboina
 wrote:




Hi

Please first consider using a recent/upstream version of lvm2
(ATM either 2.02.187, 2.03.10, or git master/stable-2.0).

Unfortunately we cannot be analyzing all the bugs from all the versions out 
there.

So please try to provide a reproducer with one of the versions listed above.
If you can't reproduce it there - then you may try to bisect to discover the fixing patch.

If you need to fix a particular version on your distro,
you may need to ask your package maintainer to backport
a list of patches for you.
In some cases however this might be hard - i.e. it might be more efficient
to simply go with a newer version.
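
A minimal sketch of such a bisect, assuming a local lvm2 git checkout and a
reproducer script of your own (the tag names simply follow lvm2's usual
vX_YY_ZZZ naming):

  git bisect start --term-old=broken --term-new=fixed
  git bisect broken v2_02_180     # version where the problem shows up
  git bisect fixed  v2_02_187     # version where it no longer does
  # rebuild and run the reproducer at each step git offers, then mark it:
  #   git bisect broken     (or)     git bisect fixed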

Regards

Zdenek




Re: [linux-lvm] lvm limitations

2020-09-15 Thread Tomas Dalebjörk
thanks for the feedback 

in previous, older versions of LVM, I guess that each LV requires a major/minor 
number, and these might have limits on how many can be addressed 

but how about LVM2?

if I intend to have many hundreds of thousands of LVs, would that be an issue?

Sent from my iPhone

> On 1 Sep 2020, at 15:21, Gionatan Danti  wrote:
> 
> On 2020-08-30 21:30, Zdenek Kabelac wrote:
>> Hi
>> Lvm2 has only ascii metadata (so basically what is stored in
>> /etc/lvm/archive is the same as in PV header metadata area -
>> just without spaces and some comments)
>> And while this is great for manual recovery, it's not
>> very efficient for storing a larger number of LVs - basically
>> some sort of DB approach would likely be needed.
>> So far however there was no real worthy use case - so safety
>> for recovery scenarios wins ATM.
> 
> Yes, I agree.
> 
>> Thin - just like any other LV takes some 'space' - so if you want
>> to go with higher amount - you need to specify bigger metadata areas
>> to be able to store such large lvm2 metadata.
>> There is probably not a big issue with lots of thin LVs in a thin-pool as long
>> as the user doesn't need to have them active at the same time. Due to the nature of
>> kernel metadata handling, a larger number of active thin LVs from
>> the same thin-pool v1 may start to compete for the locking when
>> allocating thin-pool chunks, thus killing performance - so here it is
>> rather better to stay within some 'tens' of actively provisioning thin
>> volumes when 'performance' is a factor.
> 
> Interesting.
> 
>> Worth noting there is a fixed, strict limit of ~16GiB on the maximum
>> thin-pool kernel metadata size - which surely can be exhausted -
>> the metadata holds info about the bTree mappings and chunk sharing between
>> devices
> 
> Yes, I know about this specific limitation.
> 
> Thanks.
> 
> -- 
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.da...@assyoma.it - i...@assyoma.it
> GPG public key ID: FF5F32A8



Re: [linux-lvm] lvm limitations

2020-09-15 Thread Zdenek Kabelac

On 15. 09. 20 at 21:16, Tomas Dalebjörk wrote:

thanks for the feedback

in previous, older versions of LVM, I guess that each LV requires a major/minor 
number, and these might have limits on how many can be addressed

but how about LVM2?

if I intend to have many hundreds of thousands of LVs, would that be an issue?


Hi


If you are asking about these limits -

major number is 12bits  4096
minor number is 20bits  1048576

Since DM uses 1 single major - you are limited to ~1 million active LVs.
But I assume the system would already be slowed down to an unusable level
if you ever reached 100 000 devices on a single system.
(but you will experience major system slow-down with even just 1
active LVs...)

So as has been said - the sensible/practical limit for lvm2 is around a couple of 
thousand LVs. You can use 'more' but it would be a more and more painful 
experience ;)


If you had enough time :) and massive CPU power behind it -
you could probably even create a VG with 1 million LVs - but the usability
of such a VG would really be only for long-suffering users ;)

So yes, hundreds of thousands of LVs in a single VG would be a BIG problem,
but I don't see any useful use-case where anyone would need to manipulate
so many devices within a single VG.

And if you really need hundreds of thousands - you will need to write a much more 
efficient metadata management system...


And, purely theoretically, it's worth noting there is a nontrivial amount of 
kernel memory and other resources needed per device - so to run with a 
million devices you would probably need some expensive hw (many TiB of RAM...)
just to make them available - and then there would have to be some caching and 
something actually using them.
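
For reference, the single device-mapper major and the number of DM devices
currently active on a box can be checked with (nothing version-specific here):

  grep device-mapper /proc/devices    # the one major number all active LVs share
  dmsetup ls | wc -l                  # count of currently active DM devices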


Regards

Zdenek


Re: [linux-lvm] lvm limitations

2020-09-15 Thread Tomas Dalebjörk
thanks 

ok, let's say that I have 10 LVs on a server, and want to create a thin LV snapshot 
every hour and keep them for 30 days;
that would be 24h * 30 days * 10 LVs = 7,200 LVs

if I want to keep snapshot copies from more nodes, to serve a single repository 
of snapshot copies, then these would easily become several hundreds of thousands of 
LVs

not sure if this is a good idea, but I guess it can be very useful in some 
sense, as block-level incremental-forever and instant recovery could be 
implemented for open-source-based applications

what reflections do you have on this idea?

regards Tomas

Sent from my iPhone

> On 15 Sep 2020, at 22:08, Zdenek Kabelac  wrote:
> 
> On 15. 09. 20 at 21:16, Tomas Dalebjörk wrote:
>> thanks for the feedback
>> in previous older versions of LVM, I guess that each lv requires a minor, 
>> major and these are might have limitations of how many can be addressed
>> but how about LVM2?
>> if I intend to have many hundred thousands of LV, would that be any issue?
> 
> Hi
> 
> 
> If you are asking about these limits -
> 
> major number is 12bits  4096
> minor number is 20bits  1048576
> 
> Since DM uses 1 single major - you are limited to ~1 million active LVs.
> But I assume system would be already slowed down to unsable level
> if you would ever reach 100 000 devices on a single system.
> (but you will experience major system slow-down with even just 1
> active LVs...)
> 
> So as had been said - sensible/practical limit for lvm2 is around couple 
> thousands of LVs. You can use 'more' but it would be more and more painful 
> experience ;)
> 
> If you would have enough time :) and massive CPU power behind -
> you probably can even create VG with 1 million LVs - but the usability
> of such VG would be really for long-suffering users ;)
> 
> So yes hundred thousands of LVs in a single VG would be a BIG problem,
> but I don't see any useful use-case why anyone would need to manipulate
> with so many device within a single VG.
> 
> And if you really need hundred thousands - you will need to write much more 
> efficient metadata management system...
> 
> And purely theoretically it's worth to note there is nontrivial amount of 
> kernel memory and other resources needed per single device - so to run with 
> million devices you would probably need some expensive hw (many TiB of RAM...)
> just to make the available - and then there should be some caching and 
> something to use them
> 
> Regards
> 
> Zdenek
> 



Re: [linux-lvm] lvm limitations

2020-09-15 Thread Zdenek Kabelac

On 15. 09. 20 at 23:24, Tomas Dalebjörk wrote:

thanks

ok, let's say that I have 10 LVs on a server, and want to create a thin LV snapshot 
every hour and keep them for 30 days;
that would be 24h * 30 days * 10 LVs = 7,200 LVs

if I want to keep snapshot copies from more nodes, to serve a single repository 
of snapshot copies, then these would easily become several hundreds of thousands of 
LVs

not sure if this is a good idea, but I guess it can be very useful in some 
sense, as block-level incremental-forever and instant recovery could be 
implemented for open-source-based applications

what reflections do you have on this idea?



Hi

You likely don't need such an amount of 'snapshots', and you will need to 
implement something to remove snapshots that are no longer needed, i.e. after a day 
you will keep maybe only 'every-4-hours', and after a couple of days maybe only a 
day-level snapshot. After a month, per-week, and so on.
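
A minimal sketch of the hourly-snapshot-plus-pruning idea (the VG/LV names and
the keep-the-newest-24 policy are made up purely for illustration):

  NOW=$(date +%Y%m%d%H)
  lvcreate -s -n data_snap_"$NOW" vg0/data     # thin snapshot of the thin LV vg0/data
  # drop everything but the 24 newest hourly snapshots
  lvs --noheadings -o lv_name vg0 | tr -d ' ' | grep '^data_snap_' | sort | head -n -24 \
    | while read s; do lvremove -y "vg0/$s"; done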


You need to be aware that as soon as your volumes have some 'real life',
you will soon need to keep many blocks in many 'different' copies, which will
surely be reflected in the usage of the pool device itself - aka you will
most likely hit out-of-space sooner than you'll run out of devices.

Speaking of thin volumes - there can be at most 2^24 thin devices
(this is the hard limit you've asked for ;)) - but you have only ~16GiB of metadata 
to store all of them - which gives you ~1KiB of metadata per such volume -

quite frankly this is not much - unless, as said, your volumes
are not changed at all - but then why would you be building all this...

That all said - if you really need that intensive an amount of snapshotting,
lvm2 is likely not for you - and you will need to build something on your own,
as you will need a way more efficient and 'targeted' solution for your purpose.

There is no practical way to change the current LVM2 into a tool handling e.g. 
100,000 LVs at decent speed - that has never been a target of this tool.



Regards

Zdenek





Re: [linux-lvm] lvm limitations

2020-09-15 Thread Stuart D Gathman

On Tue, 15 Sep 2020, Tomas Dalebjörk wrote:


ok, let's say that I have 10 LVs on a server, and want to create a thin LV
snapshot every hour and keep them for 30 days; that would be 24h *
30 days * 10 LVs = 7,200 LVs



if I want to keep snapshot copies from more nodes, to serve a single
repository of snapshot copies, then these would easily become several
hundreds of thousands of LVs



not sure if this is a good idea, but I guess it can be very useful in
some sense, as block-level incremental-forever and instant recovery could
be implemented for open-source-based applications

what reflections do you have on this idea?


My feeling is that btrfs is a better solution for the hourly snapshots.
(Unless you are testing a filesystem :-)

I find "classic" LVs a robust replacement for partitions that are easily
resized without moving data around.  I would be more likely to try
RAID features on classic LVs than thin LVs.

Re: [linux-lvm] lvm limitations

2020-09-15 Thread Gionatan Danti

On 2020-09-15 23:30, Stuart D Gathman wrote:

My feeling is that btrfs is a better solution for the hourly snapshots.
(Unless you are testing a filesystem :-)


For fileserver duty, sure - btrfs is adequate.
For storing VMs and/or databases - no way, thinvol is much faster

Side note: many btrfs guides suggest that disabling CoW fixes btrfs 
performance issues. The reality is that noCoW fixes them partially at best, 
while at the same time disabling all advanced features (checksums, 
compression, etc.). Snapshots automatically re-enable CoW for the 
overwritten data.


I find "classic" LVs a robust replacement for partitions that are 
easily

resized without moving data around.  I would be more likely to try
RAID features on classic LVs than thin LVs.


I agree for classical LVM.
However, thinvols permit much more interesting scenarios.

Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8




Re: [linux-lvm] lvm limitations

2020-09-15 Thread Gionatan Danti

On 2020-09-15 23:47, Zdenek Kabelac wrote:

You likely don't need such an amount of 'snapshots', and you will need to
implement something to remove snapshots that are no longer needed, i.e. after a
day you will keep maybe only 'every-4-hours', and after a couple of days maybe
only a day-level snapshot. After a month, per-week, and so on.


Agree. "Snapshot-thinning" is an essential part of snapshot management.


Speaking of thin volumes - there can be at most 2^24 thin devices
(this is the hard limit you've asked for ;)) - but you have only ~16GiB of
metadata to store all of them - which gives you ~1KiB of metadata per such
volume -
quite frankly this is not much - unless, as said, your volumes
are not changed at all - but then why would you be building all this...

That all said - if you really need that intensive an amount of snapshotting,
lvm2 is likely not for you - and you will need to build something on your own,
as you will need a way more efficient and 'targeted' solution for your purpose.


Thinvols are not activated by default - this means it should not be a 
big problem to manage some hundreds of them, as the OP asks. Or am I 
missing something?
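
For reference, a thin snapshot carries the activation-skip flag by default, so
when one is actually needed it has to be activated explicitly (names illustrative):

  lvchange -ay -K vg0/data_snap_2020091512    # -K = --ignoreactivationskip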


Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8




Re: [linux-lvm] lvmcache with vdo - inconsistent block size

2020-09-15 Thread Gionatan Danti

On 2020-09-15 20:34, Zdenek Kabelac wrote:

On 14. 09. 20 at 23:44, Gionatan Danti wrote:

Hi all,
I am testing lvmcache with VDO and I have an issue with device block 
sizes.


The big & slow VDO device is on top of a 4-disk MD RAID 10 device 
(itself on top of dm-integrity). Over the VDO device I created a 
thinpool and a thinvol [1]. When adding the cache device to the volume 
group via vgextend, I get an error stating "Devices have inconsistent 
logical block sizes (4096 and 512)." [2]


Now, I know why the error shows up and what it means. However, I don't 
know how to force the cache device to act as a 4k-sector device, and/or 
whether this is really required to cache a VDO device.


My current workaround is to set VDO with --emulate512=enabled, but 
this can be suboptimal and it is not recommended.


Any idea on what I am doing wrong?


Hi

LVM currently does not support mixing devices with different sector sizes
within a single VG, as it brings a lot of trouble and we do not yet have a
clear vision of what to do with all of it.


Hi Zdenek, yes, I understand. What surprised me is that lvmvdo *can* be 
combined with caching, and it does not suffer from this issue. Can you 
elaborate on why it works in this case?



Also, this combination of provisioned devices is not advised - since
you are combining 2 kinds of provisioning on top of each other, it can be
a big problem to solve a recovery case.


True.

On the lvm2 side we do not allow using a 'VDO LV' as the backing device for a 
thin-pool.


I noticed that. However, from what I can read in the Red Hat docs, a thin pool 
over a VDO device should be perfectly fine (the other way around, not so 
much).


So ATM it is up to the user to solve all the possible scenarios that may 
appear on such a device stack.

Zdenek


Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8




Re: [linux-lvm] lvm limitations

2020-09-15 Thread Tomas Dalebjörk
thanks for all the good reflections

the intention is to see whether backups can be done in a much easier way than 
has ever been built before

commercial hardware-based snapshots limit the number of snapshots to 256 per 
filesystem, forcing operators to use them only for short-lived retentions;
some allow even fewer than that

another solution could be to store each LV as a file in a larger fs, and 
snapshot those filesystems so that they work as a group, just to decrease the number 
of LVs needed by the repository host. It is possible to do this, but it will make 
the retention management more complex,
especially if operators want to change the retention for a given snapshot, as 
the retention can only be the same for everything stored on the same LV.
That would require moving the 'base' LV file into a separate LV, unless 
there is a way to rebuild the VDO dedup data without moving the blocks that 
represent the file/LV object elsewhere,
like a clone of an LV...

perhaps a snapshot of the origin can become a new master? if that is ok, which I 
think it is, then that will work without a move

thanks 

Sent from my iPhone

> On 16 Sep 2020, at 00:26, Gionatan Danti  wrote:
> 
> On 2020-09-15 23:47, Zdenek Kabelac wrote:
>> You likely don't need such amount of 'snapshots' and you will need to
>> implement something to remove snapshot without need, so i.e. after a
>> day you will keep maybe 'every-4-hour' and after couple days maybe
>> only a day-level snapshot. After a month per-week and so one.
> 
> Agree. "Snapshot-thinning" is an essential part of snapshot management.
> 
>> Speaking of thin volumes - there can be at most 2^24 thin devices
>> (this is hard limit you've ask for ;)) - but you have only  ~16GiB of
>> metadata to store all of them - which gives you ~1KiB of data per such
>> volume -
>> quite frankly this is not too much  - unless as said - your volumes
>> are not changed at all - but then why you would be building all this...
>> That all said -  if you really need that intensive amount of snapshoting,
>> lvm2 is likely not for you - and you will need to build something on your 
>> own,
>> as you will need way more efficient and 'targeted' solution for your purpose.
> 
> Thinvols are not activated by default - this means it should be not a big 
> problem managing some hundreds of them, as the OP ask. Or am I missing 
> something?
> 
> Regards.
> 
> -- 
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.da...@assyoma.it - i...@assyoma.it
> GPG public key ID: FF5F32A8



Re: [linux-lvm] lvm limitations

2020-09-15 Thread Tomas Dalebjörk
I am also considering building this application as an open source project,
but of course it would be good to have some sponsors

I have been working as a C developer for over 35 years, and have created kernel 
drivers for both commercial and open source operating systems,
and developed several commercial applications that have been used in many 
places by banks, the military, education, etc.

do you know where sponsors could be found, if any are likely to help?

regards Tomas

Sent from my iPhone

> On 16 Sep 2020, at 00:26, Gionatan Danti  wrote:
> 
> On 2020-09-15 23:47, Zdenek Kabelac wrote:
>> You likely don't need such amount of 'snapshots' and you will need to
>> implement something to remove snapshot without need, so i.e. after a
>> day you will keep maybe 'every-4-hour' and after couple days maybe
>> only a day-level snapshot. After a month per-week and so one.
> 
> Agree. "Snapshot-thinning" is an essential part of snapshot management.
> 
>> Speaking of thin volumes - there can be at most 2^24 thin devices
>> (this is hard limit you've ask for ;)) - but you have only  ~16GiB of
>> metadata to store all of them - which gives you ~1KiB of data per such
>> volume -
>> quite frankly this is not too much  - unless as said - your volumes
>> are not changed at all - but then why you would be building all this...
>> That all said -  if you really need that intensive amount of snapshoting,
>> lvm2 is likely not for you - and you will need to build something on your 
>> own,
>> as you will need way more efficient and 'targeted' solution for your purpose.
> 
> Thinvols are not activated by default - this means it should be not a big 
> problem managing some hundreds of them, as the OP ask. Or am I missing 
> something?
> 
> Regards.
> 
> -- 
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.da...@assyoma.it - i...@assyoma.it
> GPG public key ID: FF5F32A8



Re: [linux-lvm] lvm limitations

2020-09-15 Thread Tomas Dalebjörk
just curious about this:

> Worth to note there is fixed strict limit of the ~16GiB maximum
> thin-pool kernel metadata size - which surely can be exhausted -
> mapping holds info about bTree mappings and sharing chunks between
> devices
would that mean that one single thin-pool can hold a maximum of 16GiB/16 blocks?

and what about when LVM2 uses VDO as a backend - are there more limitations that I 
need to consider there that are not reflected here?
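
For reference, a quick way to watch how much of a thin pool's metadata space
(capped at roughly 16GiB) is actually in use on a live system (VG/pool names
illustrative):

  lvs -o lv_name,lv_metadata_size,metadata_percent vg0/pool0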

Sent from my iPhone

> On 16 Sep 2020, at 00:26, Gionatan Danti  wrote:
> 
> On 2020-09-15 23:47, Zdenek Kabelac wrote:
>> You likely don't need such amount of 'snapshots' and you will need to
>> implement something to remove snapshot without need, so i.e. after a
>> day you will keep maybe 'every-4-hour' and after couple days maybe
>> only a day-level snapshot. After a month per-week and so one.
> 
> Agree. "Snapshot-thinning" is an essential part of snapshot management.
> 
>> Speaking of thin volumes - there can be at most 2^24 thin devices
>> (this is hard limit you've ask for ;)) - but you have only  ~16GiB of
>> metadata to store all of them - which gives you ~1KiB of data per such
>> volume -
>> quite frankly this is not too much  - unless as said - your volumes
>> are not changed at all - but then why you would be building all this...
>> That all said -  if you really need that intensive amount of snapshoting,
>> lvm2 is likely not for you - and you will need to build something on your 
>> own,
>> as you will need way more efficient and 'targeted' solution for your purpose.
> 
> Thinvols are not activated by default - this means it should be not a big 
> problem managing some hundreds of them, as the OP ask. Or am I missing 
> something?
> 
> Regards.
> 
> -- 
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.da...@assyoma.it - i...@assyoma.it
> GPG public key ID: FF5F32A8

