Re: [linux-lvm] LVM performance vs direct dm-thin

2022-02-03 Thread Demi Marie Obenour
On Thu, Feb 03, 2022 at 01:28:37PM +0100, Zdenek Kabelac wrote:
> Dne 03. 02. 22 v 5:48 Demi Marie Obenour napsal(a):
> > On Mon, Jan 31, 2022 at 10:29:04PM +0100, Marian Csontos wrote:
> > > On Sun, Jan 30, 2022 at 11:17 PM Demi Marie Obenour <
> > > d...@invisiblethingslab.com> wrote:
> > > 
> > > > On Sun, Jan 30, 2022 at 04:39:30PM -0500, Stuart D. Gathman wrote:
> > > > > Your VM usage is different from ours - you seem to need to clone and
> > > > > activate a VM quickly (like a vps provider might need to do).  We
> > > > > generally have to buy more RAM to add a new VM :-), so performance of
> > > > > creating a new LV is the least of our worries.
> > > > 
> > > > To put it mildly, yes :).  Ideally we could get VM boot time down to
> > > > 100ms or lower.
> > > > 
> > > 
> > > Out of curiosity, is snapshot creation the main obstacle to booting a VM in
> > > under 100ms? Does Qubes OS use tweaked Linux distributions to achieve the
> > > desired boot time?
> > 
> > The goal is 100ms from user action until PID 1 starts in the guest.
> > After that, it’s the job of whatever distro the guest is running.
> > Storage management is one area that needs to be optimized to achieve
> > this, though it is not the only one.
> 
> I'm wondering where those 100ms came from?
> 
> Users often mistakenly target the wrong technology for their task.
> 
> If they need to run containerized software they should use containers,
> e.g. Docker - if they need a fully virtualized secure machine, it certainly
> has its price (mainly much higher memory consumption).
> I doubt there is a really good reason to have quickly created VMs,
> as they are surely supposed to be long-lived entities
> (hours/days...)

Simply put, Qubes OS literally does not have a choice.  Qubes OS is
intended to protect against very high-level attackers who are likely to
have 0day exploits against the Linux kernel.  And it is trying to do the
best possible given that constraint.  A microkernel *could* provide
sufficiently strong isolation, but there are none that have sufficiently
broad hardware support and sufficiently capable userlands.

In the long term, I would like to use unikernels for at least some of
the VMs.  Unikernels can start up so quickly that the largest overhead
is the hypervisor’s toolstack.  But that is very much off-topic.

> So unless you want to create something for marketing purposes aka - my table
> is bigger than yours - I don't see the point.
> 
> For quick instances of software apps I'd always recommend containers -
> which are vastly more efficient and scalable.
> 
> VMs and containers each have their strengths and weaknesses.
> Not sure why so many people try to pretend VMs can be as efficient as
> containers, or containers as secure as VMs. Just always pick the right
> tool...

Qubes OS needs secure *and* fast.  To quote the seL4 microkernel’s
mantra, “Security is no excuse for poor performance!”.

> > > Back to business. Perhaps I missed an answer to this question: Are the
> > > Qubes OS VMs throwaway?  Throwaway in the sense that many containers are
> > > - just a runtime which can be "easily" reconstructed. If so, you can
> > > ignore the safety belts and try to squeeze out more performance by
> > > sacrificing (meta)data integrity.
> > 
> > Why does a trade-off need to be made here?  More specifically, why is it
> > not possible to be reasonably fast (a few ms) AND safe?
> 
> Security, safety and determinism always take away efficiency.
> 
> The more randomness you can live with, the faster processing you can
> achieve - you just need to cross your fingers :)
> (i.e. drop transaction synchronisation :))
> 
> Quite frankly - if you are orchestrating mostly the same VMs, it would be
> more efficient to just snapshot them with an already running memory
> environment - so instead of always booting a VM from 'scratch', you
> restore/resume those VMs at some already-running point, from which they can
> start to deviate.
> Why waste CPU on processing the same boot over and over?
> That is where you should hunt your milliseconds...

Qubes OS used to do that, but it was a significant maintenance burden.

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] lvmdevices - impossible to remove/cleanup device

2022-02-03 Thread David Teigland
On Thu, Feb 03, 2022 at 10:20:41AM +, lejeczek wrote:
> Hi guys.
> 
> What is the issue here?
> 
> -> $ lvs
>   Devices file sys_wwid eui.6479a7305083 PVID none last seen on
> /dev/nvme1n1p4 not found.

That device was added to the lvm devices file at some point (see
/etc/lvm/devices/system.devices), but the device no longer exists.
If that device won't be reattached or if you don't want lvm to use it,
then it should be removed from system.devices.

> -> $ lvmdevices --deldev /dev/nvme1n1 ; echo $?
>   Devices file sys_wwid eui.6479a7305083 PVID none last seen on
> /dev/nvme1n1p4 not found.
>   Device not found in devices file.

That's the right idea, but there are two issues:

1. lvmdevices --deldev is missing the ability to remove a missing device
   by name (or by wwid), and that will be fixed.
2. when that's fixed, you'd need to use the name /dev/nvme1n1p4

Until that's fixed, edit system.devices and remove that line.
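In the meantime, the manual edit can be sketched as follows. Note this works on a scratch copy: the WWID is taken from the error message above, but the second entry and the exact field layout are illustrative, not a faithful reproduction of a real system.devices (which lives at /etc/lvm/devices/system.devices and should be backed up before editing):

```shell
# Sketch: prune the stale entry from a scratch copy of system.devices.
# Entries below are hypothetical, modeled on the error message.
cat > system.devices <<'EOF'
VERSION=1.1.2
IDTYPE=sys_wwid IDNAME=eui.6479a7305083 DEVNAME=/dev/nvme1n1p4 PVID=none
IDTYPE=sys_wwid IDNAME=naa.5000cca0123456 DEVNAME=/dev/sda2 PVID=Abc123xyz
EOF

# Remove the line for the device that no longer exists, keep everything else
grep -v 'eui.6479a7305083' system.devices > system.devices.new
mv system.devices.new system.devices
```

After the same edit on the real file, lvs should stop warning about the missing device.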

Dave



