Re: [PVE-User] Undo pvecm

2016-07-05 Thread Eneko Lacunza
On 05/07/16 at 21:53, William Gnann wrote: Hi, I have created a cluster with two nodes to concentrate the administration of my machines within a single interface. But one of the machines developed a hardware problem and went down. I ran "pvecm e 1" to bring the master node back to a quorate
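The "pvecm e 1" command mentioned above is the abbreviated form of "pvecm expected 1". A minimal sketch of the recovery steps on the surviving node, assuming a two-node PVE 4.x cluster where the second node is down (the node name "node2" is a hypothetical placeholder):

```shell
# Lower the expected vote count so the single remaining node regains quorum:
pvecm expected 1

# Verify quorum was restored (look for "Quorate: Yes" in the output):
pvecm status

# Optionally remove the failed node from the cluster configuration
# ("node2" is an assumed name; do NOT let the removed node rejoin later):
pvecm delnode node2
```

Note that "pvecm expected 1" only changes the runtime expected-vote count; it does not survive a corosync restart, which is why permanently removing the dead node is usually the follow-up step.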

Re: [PVE-User] Ceph RBD image striping

2016-07-05 Thread Eneko Lacunza
Hi Alwin, in Proxmox, Ceph client integration is done using librbd, not krbd. Stripe parameters can't be defined from the Proxmox GUI. Why are you changing the striping parameters? Cheers, Eneko. On 05/07/16 at 19:14, Alwin Antreich wrote: Hi all, how can I create an image with the ceph stripin

[PVE-User] Undo pvecm

2016-07-05 Thread William Gnann
Hi, I have created a cluster with two nodes to concentrate the administration of my machines within a single interface. But one of the machines developed a hardware problem and went down. I ran "pvecm e 1" to bring the master node back to a quorate state and excluded the node with the hardware problem.

Re: [PVE-User] PVE 4.2 and 802.3ad bonding

2016-07-05 Thread Gilberto Nunes
Sorry guys... it was a misconfiguration on my switch... I had created just one link aggregation group instead of two, and the previous interfaces file works properly. Thanks anyway, Rasmussen. 2016-07-05 16:50 GMT-03:00 Michael Rasmussen : > On Tue, 5 Jul 2016 16:30:44 -0300 > Gilberto Nunes

Re: [PVE-User] PVE 4.2 and 802.3ad bonding

2016-07-05 Thread Michael Rasmussen
On Tue, 5 Jul 2016 16:30:44 -0300 Gilberto Nunes wrote: > - PVE 4.2: > > auto eth1 > iface eth1 inet manual > bond-master bond0 > > auto eth2 > iface eth2 inet manual > bond-master bond0 > > auto eth3 > iface eth3 inet manual > bond-master bond0 > > auto bond0 >
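The /etc/network/interfaces fragment quoted above is cut off at the bond0 stanza. A hedged sketch of what a complete 802.3ad bond plus bridge typically looks like on PVE 4.x, following the slave definitions in the quote (the bridge name, hash policy, and IP address are assumptions, not taken from the thread):

```shell
# /etc/network/interfaces (sketch; eth1-eth3 slave stanzas as quoted above)
auto bond0
iface bond0 inet manual
    bond-slaves eth1 eth2 eth3
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3   # assumed; must match switch LAG hashing

# PVE normally puts the IP on a bridge over the bond, not on bond0 itself:
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10               # placeholder address
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
```

As the follow-up message shows, the LACP ports staying "inactive" was a switch-side problem (one LAG created instead of two), not a problem with this file.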

[PVE-User] PVE 4.2 and 802.3ad bonding

2016-07-05 Thread Gilberto Nunes
Hello list, I have two servers, one with Ubuntu 14 and one with PVE 4.2. On Ubuntu, I successfully created an 802.3ad bond with an HP 1920-48G switch. LACP shows the link as active on all ports attached to the Ubuntu server. On the other hand, all ports attached to the PVE server are marked as inactive! I am wondering why

[PVE-User] Ceph RBD image striping

2016-07-05 Thread Alwin Antreich
Hi all, how can I create an image with the Ceph striping feature and add the disk to a VM? Thanks in advance. I can add an image via the rbd CLI, but fail to activate it through Proxmox (timeout). I manually added the disk file to the VM config under /etc/pve/qemu-server/. In Proxmox the sto
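A minimal sketch of creating an RBD image with explicit striping via the rbd CLI, as described above (the pool name "rbd", image name "vm-100-disk-1", and the stripe values are illustrative assumptions):

```shell
# Create a 32 GiB image with striping-v2 parameters
# (--stripe-unit in bytes, --stripe-count objects per stripe):
rbd create rbd/vm-100-disk-1 --size 32768 \
    --stripe-unit 65536 --stripe-count 16

# Inspect the resulting layout:
rbd info rbd/vm-100-disk-1
```

The disk can then be referenced manually in /etc/pve/qemu-server/<vmid>.conf, as the poster did. Note the reply in this thread: Proxmox attaches RBD disks through librbd, and striping parameters cannot be set from the GUI, which is consistent with having to create the image on the CLI first.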

Re: [PVE-User] Moving disk to a new storage - VM Reboot

2016-07-05 Thread Alexandre DERUMIER
Did the migration task log finish correctly? - Original Message - From: "Kevin Lemonnier" To: "proxmoxve" Sent: Tuesday, July 5, 2016 14:11:14 Subject: Re: [PVE-User] Moving disk to a new storage - VM Reboot proxmox-ve: 4.2-56 (running kernel: 4.4.13-1-pve) pve-manager: 4.2-15 (running ver

Re: [PVE-User] GlusterFS Performance tweak

2016-07-05 Thread Gilberto Nunes
Well... I am not a Gluster expert, but I added these parameters to /etc/glusterfs/glusterd.vol in the hope of improving performance: option performance.flush-behind on option performance.strict-write-ordering on option performance.strict-o-direct on option performance.force-readdirp on option performanc
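For reference, performance options like the ones listed above are normally applied per volume with the gluster CLI rather than edited into glusterd.vol. A hedged sketch, assuming a volume named "gv0" (the volume name is a placeholder, not from the thread):

```shell
# Apply the quoted translator options to a specific volume:
gluster volume set gv0 performance.flush-behind on
gluster volume set gv0 performance.strict-write-ordering on
gluster volume set gv0 performance.strict-o-direct on
gluster volume set gv0 performance.force-readdirp on

# Confirm the options are recorded in the volume's configuration:
gluster volume info gv0
```

Options set this way take effect for new client mounts without hand-editing daemon config files, which makes it easier to roll a change back if it hurts rather than helps.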

[PVE-User] GlusterFS Performance tweak

2016-07-05 Thread Gilberto Nunes
Hello list, I have PVE 4.2 and a server acting as GlusterFS storage... I have just one VM, a KVM Ubuntu machine. The VM itself works fine. This VM is our Zimbra mail server. The VM image resides on the GlusterFS server, set up without any performance tuning, just the default options. But, no

Re: [PVE-User] Moving disk to a new storage - VM Reboot

2016-07-05 Thread Kevin Lemonnier
proxmox-ve: 4.2-56 (running kernel: 4.4.13-1-pve) pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c) pve-kernel-4.4.13-1-pve: 4.4.13-56 lvm2: 2.02.116-pve2 corosync-pve: 2.3.5-2 libqb0: 1.0-1 pve-cluster: 4.0-42 qemu-server: 4.0-83 pve-firmware: 1.1-8 libpve-common-perl: 4.0-70 libpve-access-co

Re: [PVE-User] Moving disk to a new storage - VM Reboot

2016-07-05 Thread Alexandre DERUMIER
There is no reason for the qemu process to be killed when a storage migration occurs. It could be a qemu crash, but I have never seen this. What is the qemu version? - Original Message - From: "Kevin Lemonnier" To: "proxmoxve" Sent: Monday, July 4, 2016 16:51:36 Subject: Re: [PVE-User] Moving disk
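To answer the "what is the qemu version?" question on a PVE 4.x node, a couple of quick checks (a sketch; exact package names can vary by PVE release):

```shell
# Show all PVE-related package versions, filtering for qemu packages:
pveversion -v | grep -i qemu

# Ask the QEMU/KVM binary itself for its version:
kvm --version
```

Comparing the package version against the running binary also catches the case where qemu was upgraded but running VMs were never restarted onto the new binary.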

Re: [PVE-User] P2V Windows XP OEM

2016-07-05 Thread Thomas Lamprecht
Hi, On 07/05/2016 07:53 AM, Eneko Lacunza wrote: Hi Thomas, El 04/07/16 a las 18:00, Thomas Lamprecht escribió: I have continued looking onto this, and it seems I have to use -acpitable command with an image of the original host SLIC table. But a fix in QEMU is also needed: https://bugzil