The gains are insignificant with an OpenVZ jail environment compared to a
paravirtualized (PV, *not* HVM) Xen environment. With OpenVZ in its current
incarnation you are stuck running an ancient 2.6.32 series kernel, which
significantly reduces support for high performance I/O devices such as the
latest 10GbE PCI-Express 3.0 NICs, now as cheap as $200 apiece. There is
also nonexistent support for high performance 1500-2000 MB/s storage
devices such as M.2 format PCI Express SSDs (Samsung, Intel), and for the
motherboard firmware that enables booting from M.2.
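As a quick illustration of the kernel constraint (a hypothetical session;
the exact -042stab build and the error text vary by host and tooling),
from inside any OpenVZ container:

  $ uname -r                  # container always reports the host kernel
  2.6.32-042stab108.8
  $ modinfo nvme              # NVMe driver absent on this kernel series
  ERROR: modinfo: could not find module nvme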



On Thu, Nov 12, 2015 at 5:18 PM, Josh Reynolds <j...@kyneticwifi.com> wrote:

> There can be significant performance gains in both memory reduction and
> IO by using OpenVZ, though. It just depends on your needs and
> environment.
>
> On Thu, Nov 12, 2015 at 7:09 PM, Eric Kuhnke <eric.kuh...@gmail.com>
> wrote:
> > OpenVZ is really more like a chroot jail. You can accomplish much better
> > functionality and the ability to run a wider range of guest VMs with Xen
> > or KVM.
> >
> > Keep in mind that with OpenVZ all guest OSes must run the same kernel as
> > the host.
> >
> > Unless you need OpenVZ for a hosting environment that will have hundreds
> > of small VMs on a server with 128GB RAM?
> >
> > On Nov 11, 2015 3:58 PM, "Matt" <matt.mailingli...@gmail.com> wrote:
> >>
> >> Anyone out there using Proxmox for virtualization?  Have been using it
> >> for a few years running CentOS OpenVZ containers.  I like the fact that
> >> OpenVZ is lightweight and gives very little performance penalty.  In
> >> Proxmox 4.x they have introduced the ZFS file system, which I think is a
> >> great offering with many features such as mirroring etc.  They have also
> >> switched from OpenVZ to LXC for containers.  Anyone used LXC much?  Is
> >> it stable?  Pros and cons vs OpenVZ?
>
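For anyone weighing the LXC side of Proxmox 4.x: container management moves
to the pct tool. A minimal sketch, assuming a made-up VMID and an
illustrative template name (list the real ones with "pveam available"):

  # download a CentOS template to the "local" storage
  pveam download local centos-7-default_20160205_amd64.tar.xz
  # create and start container 100 with DHCP networking on bridge vmbr0
  pct create 100 local:vztmpl/centos-7-default_20160205_amd64.tar.xz \
      -hostname test -memory 512 -net0 name=eth0,bridge=vmbr0,ip=dhcp
  pct start 100
  pct enter 100   # get a shell inside the container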
