> The gains are insignificant with an openvz jail environment compared to a
> paravirtualized (PV, not HVM) Xen environment. With OpenVZ in its current
> incarnation you are stuck running a 2.6.32 series ancient kernel which

That's why Proxmox moved to LXC, which is very similar to OpenVZ but
built into modern kernels.  I really like how lightweight OpenVZ and
LXC are.  I can run my DNS server and Speedtest server as separate
containers and they hardly use any resources at all.  That works very
well for small, lightweight servers like those.  Whatever memory you
assign a KVM guest, on the other hand, seems to be gone even when it's
sitting there twiddling its thumbs.  Plus containers have very little
performance penalty for CPU or disk I/O.
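
For reference, spinning up one of those small containers from the CLI
is just a couple of commands.  The VMID, template file name and memory
sizes below are made-up examples, so adjust them for your own storage
and templates:

  # create a tiny container from a previously downloaded template
  # (VMID, template name and sizes are examples only)
  pct create 101 local:vztmpl/centos-7-default_20160205_amd64.tar.xz \
      -hostname dns1 -memory 128 -swap 128 \
      -net0 name=eth0,bridge=vmbr0,ip=dhcp
  pct start 101

The 128MB there is just a cgroup ceiling, not memory carved out up
front the way it is for a KVM guest, which is exactly why the
containers barely show up on the host.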

I really like the ZFS file system that Proxmox 4 has switched to.
Built-in mirroring and so on, but it takes some figuring out.  I am
having issues with new LXC containers and CentOS 7, though.  You have
to do a number of tweaks to get systemd, AppArmor and LXC working
together, and I hate having to do tweaks.
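
For anyone hitting the same things, this is the sort of thing I mean.
The pool name, device names and VMID are examples, and running a
container unconfined obviously loosens security, so treat these as
sketches rather than recipes:

  # mirrored ZFS data pool out of two spare disks (example devices)
  zpool create tank mirror /dev/sdb /dev/sdc
  zpool status tank

  # one workaround I've seen floated for CentOS 7 + systemd: relax the
  # AppArmor profile in /etc/pve/lxc/101.conf (example VMID)
  lxc.aa_profile: unconfined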

Are there any affordable competitors to Proxmox?


> significantly reduces support for high performance I/O devices such as the
> latest 10GbE PCI-Express 3.0 NICs, which are now as cheap as $200 a piece.
> Also nonexistent support for high performance 1500-2000MB/s storage devices
> such as M.2 format PCI Express SSDs (Samsung, Intel) and support for the
> motherboard firmwares that enable booting from M.2.
