Re: [linux-lvm] lvm limitations
On 16. 09. 20 at 0:26, Gionatan Danti wrote:
> On 2020-09-15 23:47, Zdenek Kabelac wrote:
>> Speaking of thin volumes - there can be at most 2^24 thin devices (this is the hard limit you asked for ;)) - but you have only ~16GiB of metadata to store all of them - which gives you ~1KiB of metadata per such volume. Quite frankly, that is not much - unless, as said, your volumes never change at all - but then why would you be building all this...
>> That all said - if you really need that intensive amount of snapshotting, lvm2 is likely not for you, and you will need to build something of your own, as you will need a far more efficient and 'targeted' solution for your purpose.
>
> Thin volumes are not activated by default - this means it should not be a big problem to manage some hundreds of them, as the OP asked. Or am I missing something?

Hundreds should be 'fine' - but hundreds of thousands does mean the lvm2 metadata will reach the GiB range - and that is definitely NOT fine ;) - since you would probably need far more RAM just to manage it ;) - and I'm not talking about the places with O(n^2) complexity in the lvm2 code... The metadata format is simply not going to fly here...

Zdenek

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
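A quick sanity check of the arithmetic above - the 2^24 device-ID limit and the ~16GiB metadata ceiling are the figures from the thread; the per-device share is simple division:

```python
# Back-of-the-envelope check of the thin-pool limits quoted above.
MAX_THIN_DEVICES = 2 ** 24               # hard dm-thin device-ID limit
MAX_METADATA_BYTES = 16 * 1024 ** 3      # ~16 GiB metadata ceiling

per_device = MAX_METADATA_BYTES // MAX_THIN_DEVICES
print(per_device)  # 1024 bytes -> ~1 KiB of metadata budget per device
```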
Re: [linux-lvm] lvm limitations
On 16. 09. 20 at 6:25, Tomas Dalebjörk wrote:
> thanks for all the good reflections. The intention is to see if backups can be done in a much easier way than has ever been built before.

Assuming you have checked projects like 'snapper'?

> another solution could be to store the LV as a file inside a larger fs, and snapshot that fs so it works as a group

We do not advise/support any user putting a device on top of a filesystem on top of another device - this is only good as a 'toy', never for anything serious - it has numerous troubles, and the noticeably slower performance is one of the easier ones...

> perhaps a snapshot of the origin can be a new master? if that is ok, which I think it is, then that will work without a move

With thin snapshots, each snapshot is just another 'thin' device - so it doesn't really matter which one is the origin. You can easily chain snapshot over snapshot over snapshot.

Regards

Zdenek
Re: [linux-lvm] lvm limitations
On 16. 09. 20 at 6:58, Tomas Dalebjörk wrote:
> just curious about this:
>> Worth noting there is a fixed strict limit of ~16GiB maximum thin-pool kernel metadata size - which surely can be exhausted - the mapping holds info about the b-tree mappings and chunk sharing between devices
> would that mean that one single thin-pool can hold at most 16GiB/16 nr of blocks?

thin_metadata_size is the tool you can play with to see the metadata space needed to handle various sizes of data volumes. The bigger the thin-pool chunk is, the less metadata is needed - but the less 'efficient' snapshots become.

> and how about if LVM2 uses VDO as a backend - are there more limitations I need to consider there that are not reflected here?

Certainly yes - you need to check the VDO documentation for how to configure its parameters to hold the expected amount of data (with lvm, many of them are ATM configurable from lvm.conf or profiles).

Zdenek
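The chunk-size tradeoff Zdenek mentions can be sketched numerically. A commonly cited rule of thumb (along the lines of the lvmthin(7) guidance) is roughly 64 bytes of pool metadata per data chunk; the 64-byte constant here is that approximation, not an exact on-disk format, and real usage varies with snapshot sharing:

```python
# Rough thin-pool metadata estimate: ~64 bytes per data chunk
# (approximation; the real dm-thin b-tree layout varies with
# how many chunks are shared between snapshots).

def estimate_metadata_bytes(pool_size_bytes: int, chunk_size_bytes: int) -> int:
    chunks = pool_size_bytes // chunk_size_bytes
    return chunks * 64

GiB = 1024 ** 3
TiB = 1024 * GiB

# A 10 TiB pool with 64 KiB chunks vs. 1 MiB chunks:
small_chunks = estimate_metadata_bytes(10 * TiB, 64 * 1024)
large_chunks = estimate_metadata_bytes(10 * TiB, 1024 * 1024)
print(small_chunks / GiB)   # 10.0  -> small chunks approach the 16 GiB ceiling
print(large_chunks / GiB)   # 0.625 -> bigger chunks need far less metadata
```

This is why large pools with small chunks can exhaust the ~16GiB metadata limit long before running out of device IDs.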
Re: [linux-lvm] lvm limitations
just curious about this:

> Worth noting there is a fixed strict limit of ~16GiB maximum thin-pool kernel metadata size - which surely can be exhausted - the mapping holds info about the b-tree mappings and chunk sharing between devices

would that mean that one single thin-pool can hold at most 16GiB/16 nr of blocks?

and how about if LVM2 uses VDO as a backend - are there more limitations I need to consider there that are not reflected here?

Sent from my iPhone

> On 16 Sep 2020, at 00:26, Gionatan Danti wrote:
> [...]
> Thin volumes are not activated by default - this means it should not be a big problem to manage some hundreds of them, as the OP asked. Or am I missing something?
Re: [linux-lvm] lvm limitations
I am also considering building this application as an open-source project, but of course it would be good to have some sponsors.

I have been working as a C developer for over 35 years, and have created kernel drivers for both commercial and open-source operating systems and developed several commercial applications that have been used in many places by banks, the military, education, etc.

do you know where to get sponsors from, if any are likely to help?

regards Tomas

> On 16 Sep 2020, at 00:26, Gionatan Danti wrote:
> [...]
> Thin volumes are not activated by default - this means it should not be a big problem to manage some hundreds of them, as the OP asked. Or am I missing something?
Re: [linux-lvm] lvm limitations
thanks for all the good reflections

the intention is to see if backups can be done in a much easier way than has ever been built before. Commercial hardware-based snapshots limit the number of snapshots to 256 per filesystem, forcing operators to use them only for short-lived retention; some allow even fewer than that.

another solution could be to store the LV as a file inside a larger fs, and snapshot that fs so it works as a group - just to decrease the number of LVs needed on the repository host. It is possible to do this, but it will make retention management more complex - especially if operators want to change the retention for a given snapshot, as retention can only be the same for everything stored on the same LV. That would require moving the 'base' LV file into a separate LV, unless there is a way to rebuild the VDO dedup data without moving the blocks that represent the file/LV object elsewhere - like a clone of an LV...

perhaps a snapshot of the origin can be a new master? if that is ok, which I think it is, then that will work without a move

thanks

> On 16 Sep 2020, at 00:26, Gionatan Danti wrote:
> [...]
> Thin volumes are not activated by default - this means it should not be a big problem to manage some hundreds of them, as the OP asked. Or am I missing something?
Re: [linux-lvm] lvm limitations
On 2020-09-15 23:47, Zdenek Kabelac wrote:
> You likely don't need such an amount of 'snapshots', and you will need to implement something to remove snapshots that are no longer needed - i.e. after a day you keep maybe 'every-4-hours', and after a couple of days maybe only a day-level snapshot. After a month, per-week, and so on.

Agree. "Snapshot-thinning" is an essential part of snapshot management.

> Speaking of thin volumes - there can be at most 2^24 thin devices (this is the hard limit you asked for ;)) - but you have only ~16GiB of metadata to store all of them - which gives you ~1KiB of metadata per such volume. Quite frankly, that is not much - unless, as said, your volumes never change at all - but then why would you be building all this...
> That all said - if you really need that intensive amount of snapshotting, lvm2 is likely not for you, and you will need to build something of your own, as you will need a far more efficient and 'targeted' solution for your purpose.

Thin volumes are not activated by default - this means it should not be a big problem to manage some hundreds of them, as the OP asked. Or am I missing something?

Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8
Re: [linux-lvm] lvm limitations
On 2020-09-15 23:30, Stuart D Gathman wrote:
> My feeling is that btrfs is a better solution for the hourly snapshots. (Unless you are testing a filesystem :-)

For fileserver duty, sure - btrfs is adequate. For storing VMs and/or databases - no way; thinvol is much faster.

Side note: many btrfs guides suggest that disabling CoW fixes the btrfs performance issues. The reality is that noCoW fixes them partially at best, while at the same time disabling all the advanced features (checksums, compression, etc). Snapshots automatically re-enable CoW for the overwritten data.

> I find "classic" LVs a robust replacement for partitions that are easily resized without moving data around. I would be more likely to try RAID features on classic LVs than thin LVs.

I agree for classical LVM. However, thinvols permit much more interesting scenarios.

Regards.
Re: [linux-lvm] lvm limitations
On Tue, 15 Sep 2020, Tomas Dalebjörk wrote:
> ok, let's say that I have 10 LVs on a server, and want to create a thin LV snapshot every hour and keep it for 30 days - that would be 24h * 30 days * 10 LVs = 7,200 LVs.
> if I want to keep snapshot copies from more nodes, to serve as a single repository of snapshot copies, then these would easily become several hundred thousand LVs.
> not sure if this is a good idea, but I guess it can be very useful in some sense, as block-level incremental-forever and instant recovery can be implemented for open-source based applications.
> what reflections do you have on this idea?

My feeling is that btrfs is a better solution for the hourly snapshots. (Unless you are testing a filesystem :-)

I find "classic" LVs a robust replacement for partitions that are easily resized without moving data around. I would be more likely to try RAID features on classic LVs than thin LVs.
Re: [linux-lvm] lvm limitations
On 15. 09. 20 at 23:24, Tomas Dalebjörk wrote:
> thanks
> ok, let's say that I have 10 LVs on a server, and want to create a thin LV snapshot every hour and keep it for 30 days - that would be 24h * 30 days * 10 LVs = 7,200 LVs. If I want to keep snapshot copies from more nodes, to serve as a single repository of snapshot copies, then these would easily become several hundred thousand LVs.
> not sure if this is a good idea, but I guess it can be very useful in some sense, as block-level incremental-forever and instant recovery can be implemented for open-source based applications. What reflections do you have on this idea?

Hi

You likely don't need such an amount of 'snapshots', and you will need to implement something to remove snapshots that are no longer needed - i.e. after a day you keep maybe 'every-4-hours', and after a couple of days maybe only a day-level snapshot. After a month, per-week, and so on.

You need to be aware that as soon as your volumes have some 'real life', you will soon need to keep many blocks in many 'different' copies, which will surely be reflected in the usage of the pool device itself - i.e. you will most likely hit out-of-space sooner than you run out of devices.

Speaking of thin volumes - there can be at most 2^24 thin devices (this is the hard limit you asked for ;)) - but you have only ~16GiB of metadata to store all of them - which gives you ~1KiB of metadata per such volume. Quite frankly, that is not much - unless, as said, your volumes never change at all - but then why would you be building all this...

That all said - if you really need that intensive amount of snapshotting, lvm2 is likely not for you, and you will need to build something of your own, as you will need a far more efficient and 'targeted' solution for your purpose.

There is no practical way to change current LVM2 into a tool handling i.e. 100,000 LVs at decent speed - it has never been a target of this tool.

Regards

Zdenek
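The thinning schedule described here (hourly for a day, every 4 hours for a few days, daily for a month, then weekly) can be sketched as a small pruning policy. This is an illustrative sketch, not part of lvm2, and the tier boundaries are example values:

```python
# Sketch of a "snapshot-thinning" retention policy: given a snapshot's
# age in hours, keep hourly snapshots for the first day, 4-hourly up to
# a week, daily up to a month, then weekly. Tier boundaries are examples.

def keep_snapshot(age_hours: int) -> bool:
    if age_hours <= 24:
        return True                      # keep every hourly snapshot
    if age_hours <= 7 * 24:
        return age_hours % 4 == 0        # keep every 4th hour
    if age_hours <= 30 * 24:
        return age_hours % 24 == 0       # keep one per day
    return age_hours % (7 * 24) == 0     # keep one per week

# 30 days of hourly snapshots for one LV shrink from 720 candidates
# to a much smaller retained set:
ages = range(1, 24 * 30 + 1)
kept = [a for a in ages if keep_snapshot(a)]
print(len(kept))  # 83 snapshots retained instead of 720
```

A cron job deleting the snapshots for which `keep_snapshot` returns false would keep the per-pool LV count (and the lvm2 metadata size) bounded.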
Re: [linux-lvm] lvm limitations
thanks

ok, let's say that I have 10 LVs on a server, and want to create a thin LV snapshot every hour and keep it for 30 days - that would be 24h * 30 days * 10 LVs = 7,200 LVs.

if I want to keep snapshot copies from more nodes, to serve as a single repository of snapshot copies, then these would easily become several hundred thousand LVs.

not sure if this is a good idea, but I guess it can be very useful in some sense, as block-level incremental-forever and instant recovery can be implemented for open-source based applications.

what reflections do you have on this idea?

regards Tomas

> On 15 Sep 2020, at 22:08, Zdenek Kabelac wrote:
> [...]
> So as has been said - the sensible/practical limit for lvm2 is around a couple of thousand LVs. You can use 'more', but it would be a more and more painful experience ;)
Re: [linux-lvm] lvm limitations
On 15. 09. 20 at 21:16, Tomas Dalebjörk wrote:
> thanks for the feedback
> in previous, older versions of LVM, I guess that each LV requires a minor and major number, and these might limit how many can be addressed. But how about LVM2? If I intend to have many hundreds of thousands of LVs, would that be any issue?

Hi

If you are asking about these limits:

major number is 12 bits -> 4096
minor number is 20 bits -> 1048576

Since DM uses a single major number, you are limited to ~1 million active LVs. But I assume the system would already be slowed down to an unusable level if you ever reached 100,000 devices on a single system (and you will experience major system slow-down well before reaching that many active LVs...).

So, as has been said, the sensible/practical limit for lvm2 is around a couple of thousand LVs. You can use 'more', but it would be a more and more painful experience ;)

If you had enough time :) and massive CPU power behind it, you could probably even create a VG with 1 million LVs - but the usability of such a VG would really be for long-suffering users ;)

So yes, hundreds of thousands of LVs in a single VG would be a BIG problem, but I don't see any useful use-case where anyone would need to manipulate so many devices within a single VG.

And if you really need hundreds of thousands - you will need to write a much more efficient metadata management system...

And, purely theoretically, it's worth noting there is a nontrivial amount of kernel memory and other resources needed per single device - so to run with a million devices you would probably need some expensive hw (many TiB of RAM...) just to make them available - and then there should be some caching and something to actually use them.

Regards

Zdenek
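The 12-bit/20-bit split above is the standard Linux dev_t encoding, which Python's os module exposes directly; a quick check of the numbers:

```python
# Linux dev_t packs a 12-bit major and a 20-bit minor into one number.
import os

MAJOR_BITS, MINOR_BITS = 12, 20
print(2 ** MAJOR_BITS)   # 4096 possible major numbers
print(2 ** MINOR_BITS)   # 1048576 possible minors per major

# device-mapper uses a single major for all mapped devices, so every
# active LV must fit into that one major's minor-number space.
dev = os.makedev(253, 42)
assert os.major(dev) == 253 and os.minor(dev) == 42
```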
Re: [linux-lvm] lvm limitations
thanks for the feedback

in previous, older versions of LVM, I guess that each LV requires a minor and major number, and these might limit how many can be addressed. But how about LVM2? If I intend to have many hundreds of thousands of LVs, would that be any issue?

> On 1 Sep 2020, at 15:21, Gionatan Danti wrote:
> [...]
>> Worth noting there is a fixed strict limit of ~16GiB maximum thin-pool kernel metadata size - which surely can be exhausted
> Yes, I know about this specific limitation.
[linux-lvm] lvm limitations
hi

I am trying to find the limitations of lvm2:

1. How many logical volumes can be created in a single volume group?
   - are there more limitations, such as minor/major numbers, ..?

I do understand that increasing the metadata size is needed to allow more volumes.

regards Tomas
Re: [linux-lvm] lvm limitations
On 2020-08-30 21:30, Zdenek Kabelac wrote:
> Lvm2 has only ascii metadata (so basically what is stored in /etc/lvm/archive is the same as in the PV header metadata area - just without spaces and some comments). And while this is great for manual recovery, it's not very efficient for storing a larger number of LVs - basically some sort of DB approach would likely be needed. So far, however, there has been no really worthy use case - so safety for recovery scenarios wins ATM.

Yes, I agree.

> Thin - just like any other LV - takes some 'space', so if you want to go with a higher amount, you need to specify bigger metadata areas to be able to store such large lvm2 metadata.
> There is probably not a big issue with lots of thin LVs in a thin-pool as long as the user doesn't need to have them active at the same time. Due to the nature of kernel metadata handling, a larger number of active thin LVs from the same thin-pool (v1) may start to compete for locking when allocating thin-pool chunks, thus killing performance - so here it is better to stay within some 'tens' of actively provisioning thin volumes when 'performance' is a factor.

Interesting.

> Worth noting there is a fixed strict limit of ~16GiB maximum thin-pool kernel metadata size - which surely can be exhausted - the mapping holds info about the b-tree mappings and chunk sharing between devices.

Yes, I know about this specific limitation.

Thanks.
Re: [linux-lvm] lvm limitations
On 30. 08. 20 at 20:01, Gionatan Danti wrote:
> On 2020-08-30 19:33, Zdenek Kabelac wrote:
>> For illustration: for 12,000 LVs you need ~4MiB just to store the ascii metadata itself, and you need metadata space to keep at least 2 copies of it.
> Hi Zdenek, you are speaking of classical LVM metadata, right?

Hi

Lvm2 has only ascii metadata (so basically what is stored in /etc/lvm/archive is the same as in the PV header metadata area - just without spaces and some comments). And while this is great for manual recovery, it's not very efficient for storing a larger number of LVs - basically some sort of DB approach would likely be needed. So far, however, there has been no really worthy use case - so safety for recovery scenarios wins ATM.

>> Handling of operations like 'vgremove' with so many LVs requires a significant amount of CPU time. Basically, to stay within bounds - unless you have very good reasons - you should probably stay in the range of low thousands to keep lvm2 performing reasonably well.
> What about thin vols? Can you suggest any practical limit with lvmthin?

Thin - just like any other LV - takes some 'space', so if you want to go with a higher amount, you need to specify bigger metadata areas to be able to store such large lvm2 metadata.

There is probably not a big issue with lots of thin LVs in a thin-pool as long as the user doesn't need to have them active at the same time. Due to the nature of kernel metadata handling, a larger number of active thin LVs from the same thin-pool (v1) may start to compete for locking when allocating thin-pool chunks, thus killing performance - so here it is better to stay within some 'tens' of actively provisioning thin volumes when 'performance' is a factor.

Worth noting there is a fixed strict limit of ~16GiB maximum thin-pool kernel metadata size - which surely can be exhausted - the mapping holds info about the b-tree mappings and chunk sharing between devices.

Zdenek
Re: [linux-lvm] lvm limitations
On 2020-08-30 19:33, Zdenek Kabelac wrote:
> For illustration: for 12,000 LVs you need ~4MiB just to store the ascii metadata itself, and you need metadata space to keep at least 2 copies of it.

Hi Zdenek, you are speaking of classical LVM metadata, right?

> Handling of operations like 'vgremove' with so many LVs requires a significant amount of CPU time. Basically, to stay within bounds - unless you have very good reasons - you should probably stay in the range of low thousands to keep lvm2 performing reasonably well.

What about thin vols? Can you suggest any practical limit with lvmthin?

Thanks.
Re: [linux-lvm] lvm limitations
On 30. 08. 20 at 1:25, Tomas Dalebjörk wrote:
> hi
> I am trying to find out what limitations exist in LVM2: the number of logical volumes allowed to be created per volume group

Hi

There is no 'strict' maximum in the sense of a hard-coded limit on LVs per VG. It's rather a limitation of overall practical usability and the space you may need to allocate to store the metadata (pv/vgcreate --metadatasize).

The bigger the metadata gets with more LVs, the slower the processing becomes (as there is rather slow code doing all sorts of validation). You need much bigger metadata areas to be prepared AHEAD of time during 'pv/vgcreate' (since lvm2 does not support expansion of the metadata space).

For illustration: for 12,000 LVs you need ~4MiB just to store the ascii metadata itself, and you need metadata space to keep at least 2 copies of it.

Handling of operations like 'vgremove' with so many LVs requires a significant amount of CPU time. Basically, to stay within bounds - unless you have very good reasons - you should probably stay in the range of low thousands to keep lvm2 performing reasonably well.

If there were a big reason to support 'more', it's doable - but currently it's deep down on the TODO list ;)

Zdenek
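A rough reading of the figure above: ~4MiB of ascii metadata for 12,000 LVs works out to roughly 350 bytes per LV, and since at least two copies must fit, the metadata area chosen at pv/vgcreate time needs headroom. The per-LV figure here is derived from the thread's example, not a fixed format constant:

```python
# Rough per-LV metadata budget derived from the example above:
# ~4 MiB of ascii metadata for 12,000 LVs, kept in at least 2 copies.
MiB = 1024 ** 2

metadata_for_12k = 4 * MiB
per_lv = metadata_for_12k / 12_000
print(round(per_lv))            # ~350 bytes of ascii metadata per LV

def min_metadata_area(n_lvs: int, copies: int = 2) -> float:
    """Crude lower bound on the metadata area needed, in bytes."""
    return n_lvs * per_lv * copies

print(min_metadata_area(12_000) / MiB)   # ~8 MiB to hold 2 copies
```

This also shows why the size matters up front: with the default (small) metadata area and no way to grow it later, a VG intended for thousands of LVs has to be created with a much larger `--metadatasize` from the start.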
[linux-lvm] lvm limitations
hi

I am trying to find out what limitations exist in LVM2: the number of logical volumes allowed to be created per volume group.