On 05/07/16 at 21:53, William Gnann wrote:
Hi,
I have created a cluster with two nodes to concentrate the administration of my
machines within a single interface. But one of the machines got some hardware
problem and went down.
I ran "pvecm e 1" to bring back the master node to quorate
Hi Alwin,
In Proxmox, Ceph client integration is done using librbd, not krbd.
Stripe parameters can't be defined from the Proxmox GUI. Why are you
changing the striping parameters?
Cheers
Eneko
On 05/07/16 at 19:14, Alwin Antreich wrote:
Hi all,
how can I create an image with the ceph striping feature and add the disk to a VM?
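For reference, striping parameters can only be given when the image is created,
e.g. via the rbd CLI; a minimal sketch (pool name, image name and stripe values
below are just placeholders):

    # create a format-2 image with explicit striping (cannot be changed later)
    rbd create rbd/vm-100-disk-1 --size 32768 \
        --image-format 2 --stripe-unit 65536 --stripe-count 4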
Hi,
I have created a cluster with two nodes to concentrate the administration of my
machines within a single interface. But one of the machines got some hardware
problem and went down.
I ran "pvecm e 1" to bring back the master node to quorate state and excluded
the node with hardware problem.
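For reference, a minimal sketch of that quorum workaround ("pvecm e 1" being the
short form of "pvecm expected 1"; the node name is a placeholder):

    # tell corosync to expect only one vote, so the surviving node becomes quorate
    pvecm expected 1
    # verify quorum
    pvecm status
    # then remove the failed node from the cluster
    pvecm delnode <nodename>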
Sorry guys... It was my misconfiguration in the switch...
I had created just one Link Aggregation instead of two.
And the previous interfaces file now works properly.
Thanks anyway.
2016-07-05 16:50 GMT-03:00 Michael Rasmussen:
> On Tue, 5 Jul 2016 16:30:44 -0300
> Gilberto Nunes
On Tue, 5 Jul 2016 16:30:44 -0300
Gilberto Nunes wrote:
> - PVE 4.2:
>
> auto eth1
> iface eth1 inet manual
> bond-master bond0
>
> auto eth2
> iface eth2 inet manual
> bond-master bond0
>
> auto eth3
> iface eth3 inet manual
> bond-master bond0
>
> auto bond0
>
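The quoted config is cut off at the bond0 stanza; a sketch of how it would
typically continue for 802.3ad in /etc/network/interfaces (miimon value and
hash policy are assumptions, not taken from the original mail):

    auto bond0
    iface bond0 inet manual
        bond-slaves none              # slaves point to the bond via bond-master
        bond-mode 802.3ad             # LACP
        bond-miimon 100
        bond-xmit-hash-policy layer2+3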
Hello list
I have two servers, one with Ubuntu 14 and one with PVE 4.2.
On Ubuntu, I successfully created an 802.3ad bond with an HP 1920-48G
switch.
LACP shows the link as active on all ports attached to the Ubuntu server.
On the other hand, all ports attached to the PVE server are marked as inactive!
I am wondering why?!?!
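A quick way to see why the PVE side stays inactive is the kernel bonding status
file (assuming the bond is named bond0):

    # shows bonding mode, LACP partner details and per-slave aggregator IDs
    cat /proc/net/bonding/bond0

The switch ports facing the PVE server also need to be in a dynamic (LACP) link
aggregation group, not a static one.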
Hi all,
how can I create an image with the ceph striping feature and add the disk to a
VM?
Thanks in advance.
I can add an image via the rbd cli, but fail to activate it through proxmox
(timeout). I manually added the disk file to
the VM config under /etc/pve/qemu-server/. In proxmox the sto
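For what it's worth, a sketch of the disk line I would expect in
/etc/pve/qemu-server/<vmid>.conf for a pre-created RBD image (storage ID
"ceph-rbd", VM ID 101, disk name and size are all hypothetical; the image has
to follow the vm-<vmid>-disk-<n> naming for PVE to pick it up):

    virtio1: ceph-rbd:vm-101-disk-2,size=32G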
Did the migration task log finish correctly?
- Original Message -
From: "Kevin Lemonnier"
To: "proxmoxve"
Sent: Tuesday, July 5, 2016 14:11:14
Subject: Re: [PVE-User] Moving disk to a new storage - VM Reboot
proxmox-ve: 4.2-56 (running kernel: 4.4.13-1-pve)
pve-manager: 4.2-15 (running ver
Well... I am not a gluster expert, but I added these parameters in
/etc/glusterfs/glusterd.vol in the hope of improving performance:
option performance.flush-behind on
option performance.strict-write-ordering on
option performance.strict-o-direct on
option performance.force-readdirp on
option performanc
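As an aside, these performance.* settings look like per-volume options rather
than glusterd.vol options; a sketch of applying them with the gluster CLI
(volume name "vmstore" is a placeholder):

    gluster volume set vmstore performance.flush-behind on
    gluster volume set vmstore performance.strict-write-ordering on
    gluster volume set vmstore performance.strict-o-direct on
    gluster volume set vmstore performance.force-readdirp on
    # check what is actually set on the volume
    gluster volume info vmstore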
Hello list
I have PVE 4.2 and a server acting as GlusterFS storage...
I have just one VM, a KVM Ubuntu machine.
The VM itself works fine.
This VM is our Zimbra Mail Server.
The VM image resides on the glusterfs server, set up without any
performance tuning, just the default options.
But, no
proxmox-ve: 4.2-56 (running kernel: 4.4.13-1-pve)
pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
pve-kernel-4.4.13-1-pve: 4.4.13-56
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-42
qemu-server: 4.0-83
pve-firmware: 1.1-8
libpve-common-perl: 4.0-70
libpve-access-co
There is no reason for the qemu process to be killed when a storage migration occurs.
It could be a qemu crash, but I have never seen this.
What is the qemu version?
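A sketch of how to check it, assuming a PVE host and VM ID 100:

    # package/binary version on the host
    kvm --version
    # version of the running instance, via the QEMU monitor
    qm monitor 100
    qm> info version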
- Original Message -
From: "Kevin Lemonnier"
To: "proxmoxve"
Sent: Monday, July 4, 2016 16:51:36
Subject: Re: [PVE-User] Moving disk
Hi,
On 07/05/2016 07:53 AM, Eneko Lacunza wrote:
Hi Thomas,
On 04/07/16 at 18:00, Thomas Lamprecht wrote:
I have continued looking into this, and it seems I have to use the
-acpitable option with an image of the original host's SLIC table.
But a fix in QEMU is also needed:
https://bugzil
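A sketch of what that could look like, assuming the host exposes its SLIC table
under /sys/firmware/acpi/tables and the VM ID is 100:

    # dump the host's SLIC table (as root)
    cat /sys/firmware/acpi/tables/SLIC > /root/slic.bin
    # pass it to QEMU through the VM config, /etc/pve/qemu-server/100.conf:
    args: -acpitable file=/root/slic.bin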