My 2 cents.

*OpenVZ vs LXC*
OpenVZ requires a patched kernel, but it is finally up to date as of OVZ7
OpenVZ is gradually porting its technology into the mainline Linux kernel
OpenVZ has a more battle-tested OS virtualization technology
LXC is still less secure
LXC networking is more complex to configure (see the sketch below)
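
To illustrate that last point, a minimal sketch of what each side looks
like; the container ID, IP address, and bridge name are made-up examples,
not taken from any real setup:

    # OpenVZ: a basic venet setup is a one-liner per container
    vzctl set 101 --ipadd 10.0.0.101 --save

    # LXC (3.x config keys): a veth NIC needs several entries in the
    # container config, plus a host-side bridge (lxcbr0 here) to attach to
    lxc.net.0.type  = veth
    lxc.net.0.link  = lxcbr0
    lxc.net.0.flags = up
    lxc.net.0.name  = eth0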

*Virtuozzo vs Proxmox*
Virtuozzo integrates OpenVZ more tightly with its features and capabilities,
like live migration, distributed storage, live snapshots, etc
Virtuozzo is made and maintained by the same company that maintains OpenVZ
itself, and buying its licenses helps fund the future of OpenVZ
Virtuozzo includes specialized tools to manage your distributed storage
cluster and keep it healthy, to subdivide it into tiers of different
performance and purpose, etc
Virtuozzo has a great and responsive support team, in my experience
Virtuozzo has enhanced KVM a lot, which provides more server density and
performance
Virtuozzo is one of the main KVM contributors, and contributes to other
projects as well, such as the Linux kernel, OpenStack, etc
Proxmox became famous when it offered OpenVZ on its platform (it no longer
does, having replaced it with LXC)
Proxmox is made and maintained by a company not related to the OpenVZ
project itself
Proxmox has a great and responsive support team, in my experience

On Mon, Apr 29, 2019 at 10:48 AM Narcis Garcia <informat...@actiu.net>
wrote:

> Yes, these are the right comparisons:
>
> OpenVZ vs LXC
> Virtuozzo distro vs Proxmox distro
> CentOS vs Debian vs Other general purpose distros
>
> + Interesting to know about the support for running OpenVZ 7 on CentOS.
> It should be documented in the OpenVZ wiki!
>
>
> On 29/4/19 at 4:16, Website Solution - George wrote:
> >
> > From my understanding, Virtuozzo 7 (or OpenVZ 7) supports user quotas
> > inside the guest container.
> >
> > However, unprivileged LXC guests do not natively support quotas inside
> > the container.
> >
> > This is important if we run guest containers for multiple end users.
> >
> > (Privileged LXC guests do support user quotas inside the container, but
> > they share the same root UID between guest and host, which implies a
> > potential security risk. A sketch of the OpenVZ side follows below.)
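> >
> > As a point of reference, a minimal sketch of enabling per-user
> > (second-level) quotas in an OpenVZ container, using the legacy vzctl
> > syntax; the container ID is a made-up example, and the OpenVZ 7 /
> > prlctl tooling may differ:
> >
> >     # allow up to 500 per-UID/GID quota entries inside CT 101
> >     vzctl set 101 --quotaugidlimit 500 --save
> >     # restart so the second-level quota takes effect
> >     vzctl restart 101
> >     # inside the container, the usual quota tools (edquota, repquota)
> >     # then work per user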
> >
> >
> >
> > On 29-Apr-19 3:55 AM, Jehan PROCACCIA wrote:
> >> Regarding distros and Virtuozzo vs Proxmox (the reason I modified the
> >> subject; orig.: SSD trim support over a LUKS layer):
> >> I understand that it can be frustrating to rely on a dedicated
> >> distro (Virtuozzo 7), but I guess it comes with simplicity and
> >> consistency regarding the set of packages and updates.
> >> After all, it's very similar to CentOS/RHEL 7, as it is based on it,
> >> and if you wish, you could add the OpenVZ 7 features to a native CentOS 7:
> >>
> >> https://enjoyko.blogspot.com/2018/05/how-to-install-openvz-7-to-centos-7.html
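> >>
> >> For the record, the rough shape of that approach as I understand it;
> >> the exact repository RPM and package names should be taken from the
> >> guide or the OpenVZ wiki, so treat the names below as assumptions:
> >>
> >>     # add the OpenVZ 7 yum repository first (the openvz-release RPM
> >>     # from download.openvz.org), then install the kernel and tools
> >>     yum install -y vzkernel vzctl prlctl prl-disp-service
> >>     reboot    # boot into the new vzkernel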
> >>
> >>
> >> I guess that https://wiki.openvz.org/Comparison is quite up to date, as
> >> it dates from Jan 2019, but I am still wondering what technology
> >> Virtuozzo 7 uses for containers, if not LXC?
> >>
> >> I'll be glad to know, as I regularly have discussions among sysadmins
> >> about Proxmox and Virtuozzo, and in the end it always comes down to
> >> Debian vs CentOS/RHEL!
> >>
> >> ----- Original Message -----
> >> From: "Narcis Garcia" <informat...@actiu.net>
> >> To: "OpenVZ users" <users@openvz.org>
> >> Sent: Saturday, 27 April 2019 19:19:43
> >> Subject: Re: [Users] SSD trim support over a LUKS layer
> >>
> >> The problem with Virtuozzo 7, for me, is that it is a distro.
> >> I prefer to use general-purpose distros, for many reasons around
> >> packaged software, community support, future plans, and others.
> >>
> >>
> >> On 27/4/19 at 19:09, Paulo Coghi - Coghi IT wrote:
> >>> LXC is far from being an option, IMHO.
> >>>
> >>> I've been happily using Virtuozzo 7 with multiple NVMe drives, with
> >>> zero issues, for more than a year.
> >>>
> >>> On Sat, Apr 27, 2019 at 4:28 PM CoolCold <coolthec...@gmail.com> wrote:
> >>>
> >>>      I believe that fixes and backports like this going into the legacy
> >>>      version of the product will not happen, and you should consider
> >>>      upgrading. Personally, I've upgraded to LXC.. it's quite primitive
> >>>      compared to OVZ 6, but it's enough for my needs.
> >>>
> >>>      On Sat, Apr 27, 2019, 17:49 spameden <spame...@gmail.com> wrote:
> >>>
> >>>          Yes, it's an issue in the kernel.
> >>>
> >>>          The dm-crypt/LUKS layer isn't passing TRIM through to the
> >>>          underlying device.
> >>>
> >>>          /boot is not encrypted; that's why it works for you.
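> >>>
> >>>          A quick way to check whether a given dm-crypt mapping was
> >>>          opened with discard pass-through at all (the mapping name
> >>>          here is the one from the output quoted further down):
> >>>
> >>>              $ sudo dmsetup table sda2_crypt
> >>>              # if "allow_discards" is missing from the end of that
> >>>              # line, the crypttab "discard" option never took effect;
> >>>              # for the root volume that usually means regenerating the
> >>>              # initramfs (update-initramfs -u on Debian/Devuan) and
> >>>              # rebooting, assuming the kernel supports it at all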
> >>>
> >>>          Sat, 27 Apr 2019 at 11:11, Narcis Garcia <informat...@actiu.net>:
> >>>
> >>>              See, in this case, that /dev/sda1 (directly mounted as ext4
> >>>              on /boot) works with trim/discard.
> >>>              It's sda2_crypt (the layer over sda2) that is not detected
> >>>              as trimmable. Devuan's stock kernel does detect it.
> >>>
> >>>              CentOS issue #6548 may not be this same bug; I've now tested
> >>>              with CentOS 6.8, with a similar (but not the same) result:
> >>>
> >>>              $ lsb_release -d
> >>>              Description:    CentOS release 6.8 (Final)
> >>>
> >>>              $ uname -a
> >>>              Linux localhost.localdomain 2.6.32-642.el6.x86_64 #1 SMP Tue
> >>>              May 10 17:27:01 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> >>>
> >>>              $ lsblk --discard /dev/sda
> >>>              NAME                                                DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
> >>>              sda                                                        0      512B       2G         0
> >>>              ├─sda1                                                     0      512B       2G         0
> >>>              └─sda2                                                     0      512B       2G         0
> >>>                └─luks-f691f48b-8556-487d-ac64-50daa99ed4c9 (dm-0)       0      512B       2G         0
> >>>
> >>>              $ cat /etc/crypttab
> >>>              luks-f691f48b-8556-487d-ac64-50daa99ed4c9 UUID=f691f48b-8556-487d-ac64-50daa99ed4c9 none luks,discard
> >>>
> >>>              $ mount | grep -e discard
> >>>              /dev/mapper/luks-f691f48b-8556-487d-ac64-50daa99ed4c9 on / type ext4 (rw,discard)
> >>>              /dev/sda1 on /boot type ext4 (rw,discard)
> >>>
> >>>              $ sudo fstrim /boot
> >>>              # (same result as Devuan/1 and OpenVZ/6 kernel: success)
> >>>
> >>>              $ sudo fstrim /
> >>>              fstrim: /: FITRIM ioctl failed: Operation not supported
> >>>
> >>>
> >>>              On 26/4/19 at 21:36, spameden wrote:
> >>>>              Hi.
> >>>>
> >>>>              I asked this question years ago (in 2013):
> >>>>              https://lists.openvz.org/pipermail/users/2013-August/005250.html
> >>>>
> >>>>              Let me know if it helps, but this bug should have been
> >>>>              fixed in CentOS and RHEL at
> >>>>              least: https://bugs.centos.org/view.php?id=6548
> >>>>
> >>>>              Maybe OpenVZ maintainers didn't pick up this fix in the
> >>>>              openvz6 legacy kernel?
> >>>>
> >>>>              Thanks.
> >>>>
> >>>>              Wed, 10 Apr 2019 at 10:45, Narcis Garcia <informat...@actiu.net>:
> >>>>
> >>>>                  Does anybody know how can I solve this?
> >>>>
> >>>>                  $ lsb_release -d
> >>>>                  Description:    Devuan GNU/Linux 1.0 (jessie)
> >>>>
> >>>>                  $ uname -a
> >>>>                  Linux bell1 2.6.32-openvz-042stab134.8-amd64 #1 SMP
> >>>>                  Fri Dec 7 17:18:40 MSK 2018 x86_64 GNU/Linux
> >>>>
> >>>>                  $ lsblk --discard /dev/sda
> >>>>                  NAME           DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
> >>>>                  sda                   0      512B       2G         0
> >>>>                  ├─sda1                0      512B       2G         0
> >>>>                  └─sda2                0      512B       2G         0
> >>>>                    └─sda2_crypt        0        0B       0B         0
> >>>>
> >>>>                  $ cat /etc/crypttab
> >>>>                  sda2_crypt UUID=***** none luks,discard
> >>>>
> >>>>                  $ mount | grep -e discard
> >>>>                  /dev/mapper/sda2_crypt on / type ext4
> >>>>                  (rw,noatime,errors=remount-ro,barrier=1,data=ordered,discard)
> >>>>                  /dev/sda1 on /boot type ext4
> >>>>                  (rw,relatime,barrier=1,data=ordered,discard)
> >>>>
> >>>>                  $ sudo fstrim /
> >>>>                  fstrim: /: the discard operation is not supported
> >>>>
> >>>>                  Thank you.
> >>>>
> >>>>
>
_______________________________________________
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
