Re: [AFMUG] Proxmox

2017-02-08 Thread Jason McKemie
Thanks for the input guys!

On Wednesday, February 8, 2017, Faisal Imtiaz <fai...@snappytelecom.net>
wrote:

> Answers inline below:-
>
> Faisal Imtiaz
> Snappy Internet & Telecom
> 7266 SW 48 Street
> Miami, FL 33155
> Tel: 305 663 5518 x 232
>
> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>
> --
>
> *From: *"Jason McKemie" <j.mcke...@veloxinetbroadband.com>
> *To: *af@afmug.com
> *Sent: *Wednesday, February 8, 2017 12:06:31 PM
> *Subject: *[AFMUG] Proxmox
>
> To those of you out there using Proxmox, I have a couple questions.
> I started messing around with this a year or so ago, but have really just
> started to use it in a production environment.
>
> The main thing I'm wondering about is if it is necessary to have a
> separate storage server in place to operate a cluster, or if it is possible
> to use the drive(s) on each node.
>
> You can use it either way... our first & 2nd iterations were using local
> storage on each promox host node. We would store daily backup on an
> external storage box (NFS, Freenas, Zfs etc). and simply restore the KVM
> from backup in case of node failure.
>
> There are folks who used external common storage and as such moving/
> restoring vm's was easy.
>
> Our current iteration is using Proxmox v4 Host+CEPH Node cluster of 3 + 2
> additional Host Nodes.
> (3 HOST are dual purpose, they are being used as Proxmox Hosts as well as
> CEPH nodes). This configuration while heavy on the hardware, allows for
> potential loss of 2 of the storage nodes while still operating.
>
>
> Additionally, is a separate storage server necessary to do backups? It
> isn't obvious to me that there is any way to create or store a backup
> without some sort of network storage that is external to the node(s).
>
> Highly recommended.. for multiple reasons...
> you can create a local data partition (if your local storage is large
> enough) and store backup's locally.
>
> In our first & 2nd gen, we were doing two backups every day.. one to local
> storage and 2nd to external storage.
>
> I'm obviously new at this virtualization / container thing, so, sorry if
> these questions sound stupid.
>
> The only question that is stupid is one that is not asked  :)
>
>
> TIA
>
> -Jason
>
>


Re: [AFMUG] Proxmox

2017-02-08 Thread Faisal Imtiaz
Answers inline below:- 

Faisal Imtiaz 
Snappy Internet & Telecom 
7266 SW 48 Street 
Miami, FL 33155 
Tel: 305 663 5518 x 232 

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net 

> From: "Jason McKemie" <j.mcke...@veloxinetbroadband.com>
> To: af@afmug.com
> Sent: Wednesday, February 8, 2017 12:06:31 PM
> Subject: [AFMUG] Proxmox

> To those of you out there using Proxmox, I have a couple questions.
> I started messing around with this a year or so ago, but have really just
> started to use it in a production environment.

> The main thing I'm wondering about is if it is necessary to have a separate
> storage server in place to operate a cluster, or if it is possible to use the
> drive(s) on each node.

You can use it either way... our first and second iterations used local
storage on each Proxmox host node. We would store daily backups on an external
storage box (NFS, FreeNAS, ZFS, etc.) and simply restore the KVM from backup in
case of node failure.

There are folks who use external shared storage, which makes moving/restoring
VMs easy.
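
A minimal sketch of attaching that kind of external backup box as Proxmox storage (the storage name, server address, and export path below are placeholders, not a description of the actual setup):

  # Register an NFS export on the backup box as a vzdump backup target
  pvesm add nfs backup-nfs --server 10.0.0.50 --export /tank/pve-backups --content backup
  # Confirm the cluster can see it
  pvesm status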

Our current iteration is a Proxmox v4 host + Ceph node cluster of 3, plus 2
additional host nodes (those 3 hosts are dual-purpose: they act as Proxmox
hosts as well as Ceph nodes). This configuration, while heavy on the hardware,
allows for the potential loss of 2 of the storage nodes while still operating.
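
A rough sketch of how a three-node Proxmox v4 + Ceph build like that is typically brought up (disk names, the Ceph network, and the pool/storage names are placeholders; the exact build described above may differ):

  # On each of the three dual-purpose nodes
  pveceph install
  pveceph init --network 10.10.10.0/24    # dedicated Ceph network; run once for the cluster
  pveceph createmon                       # one monitor per node
  pveceph createosd /dev/sdb              # one or more OSD disks per node

  # Create a replicated pool and register it as RBD storage for VM disks
  pveceph createpool vmpool
  pvesm add rbd ceph-vm --pool vmpool --monhost 10.10.10.1 --content images   # list every monitor in practice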

> Additionally, is a separate storage server necessary to do backups? It isn't
> obvious to me that there is any way to create or store a backup without some
> sort of network storage that is external to the node(s).

Highly recommended, for multiple reasons...
You can create a local data partition (if your local storage is large enough)
and store backups locally.

In our first and second generations, we were doing two backups every day: one
to local storage and a second to external storage.
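
For the local-partition variant, a minimal sketch (the mount point and storage name are made up for illustration):

  # Assuming a spare partition is already mounted at /mnt/local-backups
  pvesm add dir local-backups --path /mnt/local-backups --content backup
  # Backup jobs can then target it with: vzdump <vmid> --storage local-backups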

> I'm obviously new at this virtualization / container thing, so, sorry if these
> questions sound stupid.

The only question that is stupid is one that is not asked :) 

> TIA

> -Jason


Re: [AFMUG] Proxmox

2017-02-08 Thread Adam Moffett
You can add a local path as a storage device.  If I recall correctly,
though, you add a storage device to the cluster, not to the node, so each
node will add that same path as a storage device.


So you can't mount a local hard drive and add it as "storage" in a
Proxmox cluster unless that same path exists on all of them.


What I have done at least once is create a Samba share on one host, then
mount that share on every host, including the local one.  You add it to
fstab so it auto-mounts at boot.
So if I have host1, host2, and host3 in a Proxmox cluster, I might have
a large disk on host1 at /dev/sdb1 mounted on /mnt/backups.  Then share
it with Samba, mount the Samba share as /mnt/proxmoxbackups on all three
nodes, and add /mnt/proxmoxbackups as a directory storage device in the
Proxmox cluster (sketched below).


Or something like that.  It was a few years back.  Later I just used
a separate server as a network storage device and never looked back.
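
A sketch of that arrangement, using the example names above (the share name, Samba user, and credentials file are hypothetical):

  # On host1: /dev/sdb1 is mounted at /mnt/backups; export it with Samba
  # (added to /etc/samba/smb.conf, then restart smbd)
  #   [pvebackups]
  #      path = /mnt/backups
  #      writable = yes
  #      valid users = backupuser

  # On every node, host1 included: install cifs-utils and add an fstab entry
  #   //host1/pvebackups  /mnt/proxmoxbackups  cifs  credentials=/etc/pve-backup.cred,_netdev  0  0
  apt-get install cifs-utils
  mkdir -p /mnt/proxmoxbackups
  mount /mnt/proxmoxbackups

  # Once, from any node: add the shared path as directory storage for the whole cluster
  pvesm add dir proxmoxbackups --path /mnt/proxmoxbackups --content backup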



-- Original Message --
From: "Jason McKemie" <j.mcke...@veloxinetbroadband.com>
To: "af@afmug.com" <af@afmug.com>
Sent: 2/8/2017 12:20:17 PM
Subject: Re: [AFMUG] Proxmox

Yeah, I definitely would store a backup on separate storage; I just
can't seem to find a way to store it locally so that I can then go and
copy it to the separate storage.  Ultimately I know I will want to have
storage in place to auto-backup to; I was just hoping for a manual
hold-over in the meantime.


On Wed, Feb 8, 2017 at 11:15 AM, Brett A Mansfield 
<li...@silverlakeinternet.com> wrote:

Hi Jason,

You can create a cluster without external storage, but because it uses
synchronous replication it will double your write hit. But if you have a
solid network and solid drives you'll barely notice it, if at all.


For backups I recommend you use separate storage. Otherwise you're
risking losing your data if your primary RAID array dies.


Thank you,
Brett A Mansfield

> On Feb 8, 2017, at 10:06 AM, Jason McKemie
<j.mcke...@veloxinetbroadband.com> wrote:

>
> To those of you out there using Proxmox, I have a couple questions.
>
> I started messing around with this a year or so ago, but have really 
just started to use it in a production environment.

>
> The main thing I'm wondering about is if it is necessary to have a 
separate storage server in place to operate a cluster, or if it is 
possible to use the drive(s) on each node.

>
> Additionally, is a separate storage server necessary to do backups? 
It isn't obvious to me that there is any way to create or store a 
backup without some sort of network storage that is external to the 
node(s).

>
> I'm obviously new at this virtualization / container thing, so, 
sorry if these questions sound stupid.

>
> TIA
>
> -Jason
>
>



Re: [AFMUG] Proxmox

2017-02-08 Thread Jason McKemie
Thanks!

On Wed, Feb 8, 2017 at 11:24 AM, Zach Underwood 
wrote:

> No it would run in the host proxmox OS. You can also mix hosts with and
> without local disk using gluster.
>
> Here are some details
> https://pve.proxmox.com/wiki/Storage:_GlusterFS
> http://blog.ivanilves.com/2014/proxmox-ve-3-3-2-node-cluster-with-glusterfs/
>
> On Wed, Feb 8, 2017 at 12:21 PM, Jason McKemie <
> j.mcke...@veloxinetbroadband.com> wrote:
>
>> How would you use something like gluster on the nodes?  Just put it in
>> its own container / VM?
>>
>> On Wed, Feb 8, 2017 at 11:12 AM, Zach Underwood 
>> wrote:
>>
>>> Yes you can use local disk but then you are unable to live move a VM. If
>>> you use the local disk with something like gluster then you can line move
>>> VMs.
>>>
>>> On Wed, Feb 8, 2017 at 12:06 PM, Jason McKemie <
>>> j.mcke...@veloxinetbroadband.com> wrote:
>>>
 To those of you out there using Proxmox, I have a couple questions.

 I started messing around with this a year or so ago, but have really
 just started to use it in a production environment.

 The main thing I'm wondering about is if it is necessary to have a
 separate storage server in place to operate a cluster, or if it is possible
 to use the drive(s) on each node.

 Additionally, is a separate storage server necessary to do backups? It
 isn't obvious to me that there is any way to create or store a backup
 without some sort of network storage that is external to the node(s).

 I'm obviously new at this virtualization / container thing, so, sorry
 if these questions sound stupid.

 TIA

 -Jason



>>>
>>>
>>> --
>>> Zach Underwood (RHCE,RHCSA,RHCT,UACA)
>>> My website 
>>> advance-networking.com
>>>
>>
>>
>
>
> --
> Zach Underwood (RHCE,RHCSA,RHCT,UACA)
> My website 
> advance-networking.com
>


Re: [AFMUG] Proxmox

2017-02-08 Thread Zach Underwood
No, it would run in the host Proxmox OS. You can also mix hosts with and
without local disks using Gluster.

Here are some details
https://pve.proxmox.com/wiki/Storage:_GlusterFS
http://blog.ivanilves.com/2014/proxmox-ve-3-3-2-node-cluster-with-glusterfs/
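
A minimal sketch of what running Gluster in the host Proxmox OS can look like (the volume name, brick path, and host names pve1/pve2/pve3 are placeholders; see the linked docs for the full walk-through):

  # On each Proxmox host: install the Gluster server and prepare a brick directory
  apt-get install glusterfs-server
  mkdir -p /data/brick1

  # From one host: join the peers, build a replicated volume, and start it
  # (add "force" to the create command if the bricks sit on the root filesystem)
  gluster peer probe pve2
  gluster peer probe pve3
  gluster volume create vmstore replica 3 pve1:/data/brick1 pve2:/data/brick1 pve3:/data/brick1
  gluster volume start vmstore

  # Register the volume as Proxmox storage so VM disks can live on it (and be live-migrated)
  pvesm add glusterfs gluster-vm --server pve1 --server2 pve2 --volume vmstore --content images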

On Wed, Feb 8, 2017 at 12:21 PM, Jason McKemie <
j.mcke...@veloxinetbroadband.com> wrote:

> How would you use something like gluster on the nodes?  Just put it in its
> own container / VM?
>
> On Wed, Feb 8, 2017 at 11:12 AM, Zach Underwood 
> wrote:
>
>> Yes you can use local disk but then you are unable to live move a VM. If
>> you use the local disk with something like gluster then you can line move
>> VMs.
>>
>> On Wed, Feb 8, 2017 at 12:06 PM, Jason McKemie <
>> j.mcke...@veloxinetbroadband.com> wrote:
>>
>>> To those of you out there using Proxmox, I have a couple questions.
>>>
>>> I started messing around with this a year or so ago, but have really
>>> just started to use it in a production environment.
>>>
>>> The main thing I'm wondering about is if it is necessary to have a
>>> separate storage server in place to operate a cluster, or if it is possible
>>> to use the drive(s) on each node.
>>>
>>> Additionally, is a separate storage server necessary to do backups? It
>>> isn't obvious to me that there is any way to create or store a backup
>>> without some sort of network storage that is external to the node(s).
>>>
>>> I'm obviously new at this virtualization / container thing, so, sorry if
>>> these questions sound stupid.
>>>
>>> TIA
>>>
>>> -Jason
>>>
>>>
>>>
>>
>>
>> --
>> Zach Underwood (RHCE,RHCSA,RHCT,UACA)
>> My website 
>> advance-networking.com
>>
>
>


-- 
Zach Underwood (RHCE,RHCSA,RHCT,UACA)
My website 
advance-networking.com


Re: [AFMUG] Proxmox

2017-02-08 Thread Jason McKemie
How would you use something like gluster on the nodes?  Just put it in its
own container / VM?

On Wed, Feb 8, 2017 at 11:12 AM, Zach Underwood 
wrote:

> Yes you can use local disk but then you are unable to live move a VM. If
> you use the local disk with something like gluster then you can line move
> VMs.
>
> On Wed, Feb 8, 2017 at 12:06 PM, Jason McKemie <
> j.mcke...@veloxinetbroadband.com> wrote:
>
>> To those of you out there using Proxmox, I have a couple questions.
>>
>> I started messing around with this a year or so ago, but have really just
>> started to use it in a production environment.
>>
>> The main thing I'm wondering about is if it is necessary to have a
>> separate storage server in place to operate a cluster, or if it is possible
>> to use the drive(s) on each node.
>>
>> Additionally, is a separate storage server necessary to do backups? It
>> isn't obvious to me that there is any way to create or store a backup
>> without some sort of network storage that is external to the node(s).
>>
>> I'm obviously new at this virtualization / container thing, so, sorry if
>> these questions sound stupid.
>>
>> TIA
>>
>> -Jason
>>
>>
>>
>
>
> --
> Zach Underwood (RHCE,RHCSA,RHCT,UACA)
> My website 
> advance-networking.com
>


Re: [AFMUG] Proxmox

2017-02-08 Thread Jason McKemie
Yeah, I definitely would store a backup on separate storage; I just can't
seem to find a way to store it locally so that I can then go and copy it to
the separate storage.  Ultimately I know I will want to have storage in
place to auto-backup to; I was just hoping for a manual hold-over in the
meantime.
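
For a manual hold-over like that, a minimal sketch (VM ID 100, the dump path, and the remote host are placeholders):

  # Back up one VM to the node's own "local" storage; dumps land under /var/lib/vz/dump
  vzdump 100 --storage local --mode snapshot --compress lzo

  # Then copy the dump off-box by hand
  scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.lzo backuphost:/srv/pve-backups/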

On Wed, Feb 8, 2017 at 11:15 AM, Brett A Mansfield <
li...@silverlakeinternet.com> wrote:

> Hi Jason,
>
> You can create a cluster without external storage, but because it is
> syncronys replication it will double your write hit. But if you have a
> solid network and solid drives you'll barely notice if at all.
>
> For backups I recommend you use separate storage. Otherwise your risking
> losing your data if your primary raid array dies.
>
> Thank you,
> Brett A Mansfield
>
> > On Feb 8, 2017, at 10:06 AM, Jason McKemie <
> j.mcke...@veloxinetbroadband.com> wrote:
> >
> > To those of you out there using Proxmox, I have a couple questions.
> >
> > I started messing around with this a year or so ago, but have really
> just started to use it in a production environment.
> >
> > The main thing I'm wondering about is if it is necessary to have a
> separate storage server in place to operate a cluster, or if it is possible
> to use the drive(s) on each node.
> >
> > Additionally, is a separate storage server necessary to do backups? It
> isn't obvious to me that there is any way to create or store a backup
> without some sort of network storage that is external to the node(s).
> >
> > I'm obviously new at this virtualization / container thing, so, sorry if
> these questions sound stupid.
> >
> > TIA
> >
> > -Jason
> >
> >
>
>


Re: [AFMUG] Proxmox

2017-02-08 Thread Zach Underwood
Yes, you can use local disks, but then you are unable to live-migrate a VM. If
you use the local disks with something like Gluster, then you can live-migrate
VMs.

On Wed, Feb 8, 2017 at 12:06 PM, Jason McKemie <
j.mcke...@veloxinetbroadband.com> wrote:

> To those of you out there using Proxmox, I have a couple questions.
>
> I started messing around with this a year or so ago, but have really just
> started to use it in a production environment.
>
> The main thing I'm wondering about is if it is necessary to have a
> separate storage server in place to operate a cluster, or if it is possible
> to use the drive(s) on each node.
>
> Additionally, is a separate storage server necessary to do backups? It
> isn't obvious to me that there is any way to create or store a backup
> without some sort of network storage that is external to the node(s).
>
> I'm obviously new at this virtualization / container thing, so, sorry if
> these questions sound stupid.
>
> TIA
>
> -Jason
>
>
>


-- 
Zach Underwood (RHCE,RHCSA,RHCT,UACA)
My website 
advance-networking.com


Re: [AFMUG] Proxmox

2017-02-08 Thread Brett A Mansfield
Hi Jason,

You can create a cluster without external storage, but because it uses synchronous
replication it will double your write hit. But if you have a solid network and
solid drives you'll barely notice it, if at all.

For backups I recommend you use separate storage. Otherwise you're risking losing
your data if your primary RAID array dies.

Thank you,
Brett A Mansfield

> On Feb 8, 2017, at 10:06 AM, Jason McKemie  
> wrote:
> 
> To those of you out there using Proxmox, I have a couple questions.
> 
> I started messing around with this a year or so ago, but have really just 
> started to use it in a production environment. 
> 
> The main thing I'm wondering about is if it is necessary to have a separate 
> storage server in place to operate a cluster, or if it is possible to use the 
> drive(s) on each node.
> 
> Additionally, is a separate storage server necessary to do backups? It isn't 
> obvious to me that there is any way to create or store a backup without some 
> sort of network storage that is external to the node(s).
> 
> I'm obviously new at this virtualization / container thing, so, sorry if 
> these questions sound stupid.
> 
> TIA
> 
> -Jason
> 
> 



[AFMUG] Proxmox

2017-02-08 Thread Jason McKemie
To those of you out there using Proxmox, I have a couple questions.

I started messing around with this a year or so ago, but have really just
started to use it in a production environment.

The main thing I'm wondering about is if it is necessary to have a separate
storage server in place to operate a cluster, or if it is possible to use
the drive(s) on each node.

Additionally, is a separate storage server necessary to do backups? It
isn't obvious to me that there is any way to create or store a backup
without some sort of network storage that is external to the node(s).

I'm obviously new at this virtualization / container thing, so, sorry if
these questions sound stupid.

TIA

-Jason


[AFMUG] Proxmox VE packet capture on guest

2016-09-18 Thread Adam Moffett
I put this question on the Proxmox VE forum as well, but I figured 
somebody here might have already fought this battle.


Is there any trick to forward traffic promiscuously from one port on a
Linux bridge to another port on the same bridge?  The goal is to run
a packet capture with Wireshark on a guest VM to pick up traffic from a
mirrored switch port.


Background:
A vendor needs me to capture traffic to and from a device we're
troubleshooting.  It so happens I had a Proxmox VE installation at the
same location, and the host machine had an extra NIC.  So I mirrored a
port on the switch, connected the mirrored port to the extra interface
on the Proxmox server, added that port to a new Linux bridge, and added
a new interface on a Windows guest to run Wireshark.


The problem (which makes perfect sense in hindsight) is that the bridge
on the host won't forward any of the packets to the guest, because the
guest's interface doesn't match any of the packets' destination MAC addresses.


For the immediate need, I ran tcpdump on the host and then just copied
the pcap file to the guest.  It would be convenient if the vendor's tech
support guy could remote into the Windows machine and run Wireshark
whenever he wants to.  I read several (old) posts on serverfault and
other places saying to set the bridge aging timeout to 0 to "make it act
like a hub", but that method does not seem to have any effect for me.
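
For reference, a sketch of both approaches described above (vmbr1 and the file name are placeholders for whichever bridge holds the mirrored port):

  # The "act like a hub" trick: zero the MAC ageing time on the capture bridge
  # (this is the method from the old posts, which did not help in this case)
  brctl setageing vmbr1 0

  # The workaround actually used: capture on the host, then copy the file to the guest
  tcpdump -i vmbr1 -w /root/mirror-capture.pcap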





Re: [AFMUG] Proxmox virtualization

2015-11-16 Thread Paul Stewart
I still very much prefer VMware (free or commercial, depending on needs).  It's easy
to hire staff who know it, it's easy to integrate with monitoring, etc., and depending
on what you need, it's not that bad price-wise, IMHO.

For my personal lab/dev stuff I run VMware; at $$$job we run various systems,
but on Linux systems it's primarily Proxmox and on Windows it's Hyper-V ... several
companies I deal with are Xen across the board (which is by far the largest
set of deployments out there).  I would run Xen, but I just never got around to
knowing it really well like I do VMware ...

-Original Message-
From: Af [mailto:af-boun...@afmug.com] On Behalf Of Matt
Sent: Friday, November 13, 2015 5:29 PM
To: af@afmug.com
Subject: Re: [AFMUG] Proxmox virtualization

> The gains are insignificant with an openvz jail environment compared 
> to a paravirtualized (PV, not HVM) Xen environment. With OpenVZ in its 
> current incarnation you are stuck running a 2.6.32 series ancient 
> kernel which

That's why Proxmox moved to LXC, which is very similar to OpenVZ but built into modern
kernels.  I really like how lightweight OpenVZ and LXC are.  I can run my DNS
server and Speedtest server as separate containers and they hardly use any
resources at all.  It works very well with small, lightweight servers like that.
The memory you assign a KVM guest, on the other hand, seems to be gone even when it's
sitting twiddling its thumbs.  Plus, containers have very little performance
penalty regarding CPU or disk I/O.

I really like the ZFS file system that Proxmox 4 has switched to.
Built-in mirroring, etc., but it takes some figuring out.  I am having issues with
new LXC containers and CentOS 7, though.  You have to do a number of tweaks to
get systemd, AppArmor, and LXC working together.  I hate having to do
tweaks.

Are there any affordable competitors to Proxmox?


> significantly reduces support for high performance I/O devices such as 
> the latest 10GbE PCI-Express 3.0 NICs, which are now as cheap as $200 a piece.
> Also nonexistant support for high performance 1500-2000MB/s storage 
> devices such as M.2 format PCI Express SSDs (Samsung, Intel) and 
> support for the motherboard firmwares that enable booting from M.2.



Re: [AFMUG] Proxmox virtualization

2015-11-13 Thread Eric Kuhnke
Such a system is in one of my racks and cost less than $1,150 to build in a
1U chassis... mid- to higher-end?



On Thu, Nov 12, 2015 at 7:09 PM, Josh Reynolds  wrote:

> You're talking about specialized mid-higher end hardware. That said,
> those drivers should be backported to that 2.6.32 kernel.
>
> On Thu, Nov 12, 2015 at 8:25 PM, Eric Kuhnke 
> wrote:
> > The gains are insignificant with an openvz jail environment compared to a
> > paravirtualized (PV, not HVM) Xen environment. With OpenVZ in its current
> > incarnation you are stuck running a 2.6.32 series ancient kernel which
> > significantly reduces support for high performance I/O devices such as
> the
> > latest 10GbE PCI-Express 3.0 NICs, which are now as cheap as $200 a
> piece.
> > Also nonexistant support for high performance 1500-2000MB/s storage
> devices
> > such as M.2 format PCI Express SSDs (Samsung, Intel) and support for the
> > motherboard firmwares that enable booting from M.2.
> >
> >
> >
> > On Thu, Nov 12, 2015 at 5:18 PM, Josh Reynolds 
> wrote:
> >>
> >> The can be significant performance gains in both memory reduction and
> >> IO by using OpenVZ though. It just depends on your needs and
> >> environment.
> >>
> >> On Thu, Nov 12, 2015 at 7:09 PM, Eric Kuhnke 
> >> wrote:
> >> > Openvz is really more like a chroot jail. You can accomplish much
> better
> >> > functionality and the ability to run a wider range of guest VMs with
> xen
> >> > or
> >> > kvm.
> >> >
> >> > Keep in mind with openvz all guest OS must run the same kernel as the
> >> > host.
> >> >
> >> > Unless you need openvz for a hosting environment that will have
> hundreds
> >> > of
> >> > small VMs on a server with 128GB RAM?
> >> >
> >> > On Nov 11, 2015 3:58 PM, "Matt"  wrote:
> >> >>
> >> >> Anyone out there using Proxmox for virtualization?  Have been using
> if
> >> >> for few years running Centos Openvz containers.  Like fact that
> Openvz
> >> >> is light weight and gives very little performance penalty.  In
> Proxmox
> >> >> 4.x they have introduced the ZFS file system which I think is a great
> >> >> offering many features such as mirroring etc.  They have also
> switched
> >> >> from Openvz to LXC for containers.  Anyone used LXC much?  Is it
> >> >> stable?  Pros and cons vs Openvz?
> >
> >
>


Re: [AFMUG] Proxmox virtualization

2015-11-13 Thread Matt
> The gains are insignificant with an openvz jail environment compared to a
> paravirtualized (PV, not HVM) Xen environment. With OpenVZ in its current
> incarnation you are stuck running a 2.6.32 series ancient kernel which

That's why Proxmox moved to LXC, which is very similar to OpenVZ but built
into modern kernels.  I really like how lightweight OpenVZ and LXC are.  I
can run my DNS server and Speedtest server as separate containers and
they hardly use any resources at all.  It works very well with small,
lightweight servers like that.  The memory you assign a KVM guest, on the
other hand, seems to be gone even when it's sitting twiddling its
thumbs.  Plus, containers have very little performance penalty
regarding CPU or disk I/O.

I really like the ZFS file system that Proxmox 4 has switched to.
Built-in mirroring, etc., but it takes some figuring out.  I am having
issues with new LXC containers and CentOS 7, though.  You have to do a
number of tweaks to get systemd, AppArmor, and LXC working
together.  I hate having to do tweaks.
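
Two hedged sketches for the points above; the pool, storage name, and container ID are placeholders, and the AppArmor line is only one workaround commonly cited at the time (it loosens confinement, so it is a security trade-off):

  # ZFS: build a mirrored pool and expose it to Proxmox 4 as zfspool storage
  zpool create -f tank mirror /dev/sdb /dev/sdc
  pvesm add zfspool tank-vm --pool tank --content images,rootdir

  # CentOS 7 / systemd friction in LXC: relax AppArmor for one container (CT 101)
  echo 'lxc.aa_profile: unconfined' >> /etc/pve/lxc/101.conf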

Are there any affordable competitors to Proxmox?


> significantly reduces support for high performance I/O devices such as the
> latest 10GbE PCI-Express 3.0 NICs, which are now as cheap as $200 a piece.
> Also nonexistant support for high performance 1500-2000MB/s storage devices
> such as M.2 format PCI Express SSDs (Samsung, Intel) and support for the
> motherboard firmwares that enable booting from M.2.


Re: [AFMUG] Proxmox virtualization

2015-11-12 Thread Paul Stewart
Did you get any responses as I'm curious?  We have quite a bit of OpenVZ 
running and wondering about migration to LXC as well...

Thanks,
Paul



-Original Message-
From: Af [mailto:af-boun...@afmug.com] On Behalf Of Matt
Sent: Wednesday, November 11, 2015 6:59 PM
To: af@afmug.com
Subject: [AFMUG] Proxmox virtualization

Anyone out there using Proxmox for virtualization?  I have been using it for a few
years running CentOS OpenVZ containers.  I like the fact that OpenVZ is lightweight
and gives very little performance penalty.  In Proxmox 4.x they have introduced
the ZFS file system, which I think is great, offering many features such as
mirroring, etc.  They have also switched from OpenVZ to LXC for containers.
Anyone used LXC much?  Is it stable?  Pros and cons vs. OpenVZ?



Re: [AFMUG] Proxmox virtualization

2015-11-12 Thread Faisal Imtiaz
Haha, looks like all of us are in the same boat: everybody is looking at everybody
else to see if somebody has any input. I have it on my to-do list to do some
testing, because we use Proxmox ourselves quite a bit, but I'm very unlikely to
get to it for another few weeks.


Regards

Faisal

Sent from Mobile Device

Original message
From: Paul Stewart <p...@paulstewart.org>
Date: 11/12/2015 4:13 PM (GMT-05:00)
To: af@afmug.com
Subject: Re: [AFMUG] Proxmox virtualization
Did you get any responses as I'm curious?  We have quite a bit of OpenVZ 
running and wondering about migration to LXC as well...

Thanks,
Paul



-Original Message-
From: Af [mailto:af-boun...@afmug.com] On Behalf Of Matt
Sent: Wednesday, November 11, 2015 6:59 PM
To: af@afmug.com
Subject: [AFMUG] Proxmox virtualization

Anyone out there using Proxmox for virtualization?  I have been using it for a few
years running CentOS OpenVZ containers.  I like the fact that OpenVZ is lightweight
and gives very little performance penalty.  In Proxmox 4.x they have introduced
the ZFS file system, which I think is great, offering many features such as
mirroring, etc.  They have also switched from OpenVZ to LXC for containers.
Anyone used LXC much?  Is it stable?  Pros and cons vs. OpenVZ?



Re: [AFMUG] Proxmox virtualization

2015-11-12 Thread Eric Kuhnke
OpenVZ is really more like a chroot jail. You can get much better
functionality, and the ability to run a wider range of guest VMs, with Xen or
KVM.

Keep in mind that with OpenVZ all guest OSes must run the same kernel as the host.

Unless you need OpenVZ for a hosting environment that will have hundreds of
small VMs on a server with 128 GB of RAM?
On Nov 11, 2015 3:58 PM, "Matt"  wrote:

> Anyone out there using Proxmox for virtualization?  Have been using if
> for few years running Centos Openvz containers.  Like fact that Openvz
> is light weight and gives very little performance penalty.  In Proxmox
> 4.x they have introduced the ZFS file system which I think is a great
> offering many features such as mirroring etc.  They have also switched
> from Openvz to LXC for containers.  Anyone used LXC much?  Is it
> stable?  Pros and cons vs Openvz?
>


Re: [AFMUG] Proxmox virtualization

2015-11-12 Thread Josh Reynolds
There can be significant performance gains in both memory reduction and
I/O by using OpenVZ, though. It just depends on your needs and
environment.

On Thu, Nov 12, 2015 at 7:09 PM, Eric Kuhnke  wrote:
> Openvz is really more like a chroot jail. You can accomplish much better
> functionality and the ability to run a wider range of guest VMs with xen or
> kvm.
>
> Keep in mind with openvz all guest OS must run the same kernel as the host.
>
> Unless you need openvz for a hosting environment that will have hundreds of
> small VMs on a server with 128GB RAM?
>
> On Nov 11, 2015 3:58 PM, "Matt"  wrote:
>>
>> Anyone out there using Proxmox for virtualization?  Have been using if
>> for few years running Centos Openvz containers.  Like fact that Openvz
>> is light weight and gives very little performance penalty.  In Proxmox
>> 4.x they have introduced the ZFS file system which I think is a great
>> offering many features such as mirroring etc.  They have also switched
>> from Openvz to LXC for containers.  Anyone used LXC much?  Is it
>> stable?  Pros and cons vs Openvz?


Re: [AFMUG] Proxmox virtualization

2015-11-12 Thread Bill Prince
You could also do what Apple is doing. There was an announcement a
couple of weeks ago that they are dumping VMware and going with KVM.


I'm trying to see what that might entail for us, but there's a learning curve.

bp
<part15sbs{at}gmail{dot}com>

On 11/12/2015 1:13 PM, Paul Stewart wrote:

Did you get any responses as I'm curious?  We have quite a bit of OpenVZ 
running and wondering about migration to LXC as well...

Thanks,
Paul



-Original Message-
From: Af [mailto:af-boun...@afmug.com] On Behalf Of Matt
Sent: Wednesday, November 11, 2015 6:59 PM
To: af@afmug.com
Subject: [AFMUG] Proxmox virtualization

Anyone out there using Proxmox for virtualization?  I have been using it for a few
years running CentOS OpenVZ containers.  I like the fact that OpenVZ is lightweight
and gives very little performance penalty.  In Proxmox 4.x they have introduced
the ZFS file system, which I think is great, offering many features such as
mirroring, etc.  They have also switched from OpenVZ to LXC for containers.
Anyone used LXC much?  Is it stable?  Pros and cons vs. OpenVZ?





Re: [AFMUG] Proxmox virtualization

2015-11-12 Thread Josh Reynolds
You're talking about specialized, mid- to higher-end hardware. That said,
those drivers should be backported to that 2.6.32 kernel.

On Thu, Nov 12, 2015 at 8:25 PM, Eric Kuhnke  wrote:
> The gains are insignificant with an openvz jail environment compared to a
> paravirtualized (PV, not HVM) Xen environment. With OpenVZ in its current
> incarnation you are stuck running a 2.6.32 series ancient kernel which
> significantly reduces support for high performance I/O devices such as the
> latest 10GbE PCI-Express 3.0 NICs, which are now as cheap as $200 a piece.
> Also nonexistant support for high performance 1500-2000MB/s storage devices
> such as M.2 format PCI Express SSDs (Samsung, Intel) and support for the
> motherboard firmwares that enable booting from M.2.
>
>
>
> On Thu, Nov 12, 2015 at 5:18 PM, Josh Reynolds  wrote:
>>
>> The can be significant performance gains in both memory reduction and
>> IO by using OpenVZ though. It just depends on your needs and
>> environment.
>>
>> On Thu, Nov 12, 2015 at 7:09 PM, Eric Kuhnke 
>> wrote:
>> > Openvz is really more like a chroot jail. You can accomplish much better
>> > functionality and the ability to run a wider range of guest VMs with xen
>> > or
>> > kvm.
>> >
>> > Keep in mind with openvz all guest OS must run the same kernel as the
>> > host.
>> >
>> > Unless you need openvz for a hosting environment that will have hundreds
>> > of
>> > small VMs on a server with 128GB RAM?
>> >
>> > On Nov 11, 2015 3:58 PM, "Matt"  wrote:
>> >>
>> >> Anyone out there using Proxmox for virtualization?  Have been using if
>> >> for few years running Centos Openvz containers.  Like fact that Openvz
>> >> is light weight and gives very little performance penalty.  In Proxmox
>> >> 4.x they have introduced the ZFS file system which I think is a great
>> >> offering many features such as mirroring etc.  They have also switched
>> >> from Openvz to LXC for containers.  Anyone used LXC much?  Is it
>> >> stable?  Pros and cons vs Openvz?
>
>


Re: [AFMUG] Proxmox virtualization

2015-11-12 Thread Eric Kuhnke
The gains are insignificant with an OpenVZ jail environment compared to a
paravirtualized (PV, *not* HVM) Xen environment. With OpenVZ in its current
incarnation you are stuck running an ancient 2.6.32-series kernel, which
significantly reduces support for high-performance I/O devices such as the
latest 10GbE PCI-Express 3.0 NICs, which are now as cheap as $200 apiece.
There is also nonexistent support for high-performance 1500-2000 MB/s storage
devices such as M.2-format PCI Express SSDs (Samsung, Intel), and for the
motherboard firmware that enables booting from M.2.



On Thu, Nov 12, 2015 at 5:18 PM, Josh Reynolds  wrote:

> The can be significant performance gains in both memory reduction and
> IO by using OpenVZ though. It just depends on your needs and
> environment.
>
> On Thu, Nov 12, 2015 at 7:09 PM, Eric Kuhnke 
> wrote:
> > Openvz is really more like a chroot jail. You can accomplish much better
> > functionality and the ability to run a wider range of guest VMs with xen
> or
> > kvm.
> >
> > Keep in mind with openvz all guest OS must run the same kernel as the
> host.
> >
> > Unless you need openvz for a hosting environment that will have hundreds
> of
> > small VMs on a server with 128GB RAM?
> >
> > On Nov 11, 2015 3:58 PM, "Matt"  wrote:
> >>
> >> Anyone out there using Proxmox for virtualization?  Have been using if
> >> for few years running Centos Openvz containers.  Like fact that Openvz
> >> is light weight and gives very little performance penalty.  In Proxmox
> >> 4.x they have introduced the ZFS file system which I think is a great
> >> offering many features such as mirroring etc.  They have also switched
> >> from Openvz to LXC for containers.  Anyone used LXC much?  Is it
> >> stable?  Pros and cons vs Openvz?
>


[AFMUG] Proxmox virtualization

2015-11-11 Thread Matt
Anyone out there using Proxmox for virtualization?  I have been using it
for a few years running CentOS OpenVZ containers.  I like the fact that OpenVZ
is lightweight and gives very little performance penalty.  In Proxmox
4.x they have introduced the ZFS file system, which I think is great,
offering many features such as mirroring, etc.  They have also switched
from OpenVZ to LXC for containers.  Anyone used LXC much?  Is it
stable?  Pros and cons vs. OpenVZ?