Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Alexandre DERUMIER
>>Forgot to mention that consul supports multiple clusters and/or multi-center clusters out of the box. Yes, I read the docs yesterday. It seems very interesting. Most of the work would be replacing pmxcfs with the Consul KV store. I have seen some Consul FUSE filesystem implementations, but they don't have all the pmxcfs
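As a rough illustration of what "replace pmxcfs with the Consul KV store" would involve, the sketch below reads and writes a guest config through Consul's standard HTTP KV API from Perl. The agent address, key layout and config content are assumptions made up for this example; nothing here is existing PVE tooling.

#!/usr/bin/perl
# Minimal sketch, assuming a local Consul agent on the default port 8500.
# The key layout (pve/nodes/<node>/qemu-server/<vmid>.conf) is hypothetical.
use strict;
use warnings;
use LWP::UserAgent;
use JSON qw(decode_json);
use MIME::Base64 qw(decode_base64);

my $kv = 'http://127.0.0.1:8500/v1/kv';
my $ua = LWP::UserAgent->new(timeout => 5);

# write a VM config as a KV entry
my $res = $ua->put("$kv/pve/nodes/node1/qemu-server/100.conf",
    Content => "memory: 2048\nnet0: virtio,bridge=vmbr0\n");
die "put failed: " . $res->status_line . "\n" if !$res->is_success;

# read it back; Consul returns the value base64 encoded inside a JSON array
$res = $ua->get("$kv/pve/nodes/node1/qemu-server/100.conf");
die "get failed: " . $res->status_line . "\n" if !$res->is_success;
my ($entry) = @{ decode_json($res->decoded_content) };
print decode_base64($entry->{Value});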

Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Michael Rasmussen
On Wed, 21 Sep 2016 01:45:18 +0200 Michael Rasmussen wrote: > https://github.com/hashicorp/consul Forgot to mention that Consul supports multiple clusters and/or multi-center clusters out of the box.

Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Dmitry Petuhov
I planned to try GFS2 on a couple of my clusters. It will not work without corosync at all, because corosync is needed for DLM. 21.09.2016 2:25, Alexandre DERUMIER wrote: > Another question about my first idea (replace corosync): is it really difficult to replace corosync with something else? > Sheep

Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Alexandre DERUMIER
About corosync scaling, I found a discussion about an implementation of satellite nodes: http://discuss.corosync.narkive.com/Uh97uGyd/rfc-extending-corosync-to-high-node-counts https://chrissie.fedorapeople.org/corosync-remote.txt https://github.com/chrissie-c/corosync/tree/topic-remote

Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Michael Rasmussen
On Wed, 21 Sep 2016 01:25:24 +0200 (CEST) Alexandre DERUMIER wrote: > Another question about my first idea (replace corosync): is it really difficult to replace corosync with something else? > Sheepdog storage, for example, has support for corosync and zookeeper. There is also: https://

Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Alexandre DERUMIER
Another question about my first idea (replace corosync): is it really difficult to replace corosync with something else? Sheepdog storage, for example, has support for corosync and zookeeper.

Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Michael Rasmussen
On Wed, 21 Sep 2016 00:56:06 +0200 (CEST) Alexandre DERUMIER wrote: > >>vmids are unique within a single cluster, but not across several clusters... > yes, that's why I proposed to add some kind of optional prefix to the vmid, which could be defined in each cluster. Instead of a literal prefix you

Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Alexandre DERUMIER
>>vmids are unique within a single cluster, but not across several clusters... yes, that's why I proposed to add some kind of optional prefix to the vmid, which could be defined in each cluster.

Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Alexandre DERUMIER
>>But shared storage between different clusters is a problem, because our locking mechanism only works inside a cluster. So there must be a single cluster which does all allocation for a specific storage?? But if we have a unique id (a UUID, for example), it shouldn't be a problem? No need to cro

Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Dietmar Maurer
> The VM disks also need to be unique (they use the vmid, so if the vmid is unique it's ok). vmids are unique within a single cluster, but not across several clusters...

Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Dietmar Maurer
> On September 20, 2016 at 10:45 AM Alexandre DERUMIER wrote: >>I guess we need more. For example, we can exactly identify a VM/CT/Storage/Network by prefixing it with the cluster name: cluster1/vm/100, cluster2/ct/100, cluster1/storage/local, cluster2/network/vmbr
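For illustration only, a small Perl sketch of the prefixing scheme quoted above: composing and splitting globally unique IDs of the form <cluster>/<type>/<name>. The function names are invented for this example and nothing like this exists in PVE.

use strict;
use warnings;

# hypothetical helpers for cluster-qualified resource IDs
sub make_global_id {
    my ($cluster, $type, $name) = @_;
    die "unknown resource type '$type'\n" if $type !~ /^(?:vm|ct|storage|network)$/;
    return "$cluster/$type/$name";
}

sub parse_global_id {
    my ($id) = @_;
    my ($cluster, $type, $name) = split('/', $id, 3);
    die "malformed id '$id'\n" if !defined($name);
    return ($cluster, $type, $name);
}

my $id = make_global_id('cluster1', 'vm', 100);        # "cluster1/vm/100"
my ($cluster, $type, $vmid) = parse_global_id($id);
print "vmid $vmid belongs to $cluster\n";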

Re: [pve-devel] [PATCH manager] add a warning message if the EFI disk is missing fix #1112

2016-09-20 Thread Dietmar Maurer
applied, but I wonder if there is some predefined HTML style for that - instead of hardcoding a color? > + var EFIHint = Ext.createWidget({ > + xtype: 'displayfield', // submitValue is false, so we don't get submitted > + fieldStyle: 'background-color: LightYellow;',

Re: [pve-devel] [PATCH manager 2/2] fix #1113 use a LSI controller for legacy OSes

2016-09-20 Thread Dietmar Maurer
applied

Re: [pve-devel] [PATCH manager 1/2] fix #1113: preserve LSI controller for legacy OSes

2016-09-20 Thread Dietmar Maurer
applied

[pve-devel] [PATCH manager] add a warning message if the EFI disk is missing fix #1112

2016-09-20 Thread Emmanuel Kasper
do not display an obtrusive popup on top of a modal window, but display a hint under the combo box --- www/manager6/qemu/QemuBiosEdit.js | 43 +++ 1 file changed, 39 insertions(+), 4 deletions(-) diff --git a/www/manager6/qemu/QemuBiosEdit.js b/www/manager6/q

[pve-devel] [PATCH manager 1/2] fix #1113: preserve LSI controller for legacy OSes

2016-09-20 Thread Emmanuel Kasper
The SCSI HW type handling will be done in the Wizard class, so it is removed from here. --- www/manager6/qemu/HDEdit.js | 4 1 file changed, 4 deletions(-) diff --git a/www/manager6/qemu/HDEdit.js b/www/manager6/qemu/HDEdit.js index 312b218..c86ab44 100644 --- a/www/manager6/qemu/HDEdit.js +++ b/www

[pve-devel] [PATCH manager 2/2] fix #1113 use a LSI controller for legacy OSes

2016-09-20 Thread Emmanuel Kasper
Instead of setting virtio-scsi for all newly created VMs, pass the OS-optimal SCSI controller to pveQemuCreateWizard, which will add it as a hidden parameter just before POSTing the wizard data to the API. --- www/manager6/qemu/CreateWizard.js | 5 + www/manager6/qemu/OSDefaults.js | 9 +

Re: [pve-devel] [PATCH container] restore: only restore lxc.* if root

2016-09-20 Thread Dietmar Maurer
applied

Re: [pve-devel] [PATCH container 1/2] restore: fix simple with non-volume mps

2016-09-20 Thread Dietmar Maurer
applied

Re: [pve-devel] [PATCH container 2/2] restore: add permission check

2016-09-20 Thread Dietmar Maurer
applied

[pve-devel] [PATCH container] restore: only restore lxc.* if root

2016-09-20 Thread Fabian Grünbichler
since these can only be added as root to existing containers, and might be dangerous. --- src/PVE/API2/LXC.pm | 2 +- src/PVE/LXC/Create.pm | 12 +++- 2 files changed, 12 insertions(+), 2 deletions(-) diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm index 83afd56..15ebb87 100644
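The gist of the change, as a hedged sketch rather than the actual code from src/PVE/LXC/Create.pm: when restoring a container config from a backup, raw lxc.* keys are kept only for root and dropped for everyone else.

use strict;
use warnings;

# illustration only: filter a restored container config by caller identity
sub filter_restored_config {
    my ($conf_text, $authuser) = @_;
    my @kept;
    for my $line (split(/\n/, $conf_text)) {
        # raw lxc.* settings may only be restored by root
        next if $line =~ /^\s*lxc\./ && $authuser ne 'root@pam';
        push @kept, $line;
    }
    return join("\n", @kept) . "\n";
}

my $backup = "memory: 512\nlxc.apparmor.profile: unconfined\n";
print filter_restored_config($backup, 'someuser@pve');   # lxc.* line is dropped
print filter_restored_config($backup, 'root@pam');       # lxc.* line is kept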

[pve-devel] [PATCH container 1/2] restore: fix simple with non-volume mps

2016-09-20 Thread Fabian Grünbichler
adding non-volume mps was supposed to be delayed in "simple" (storage-only) restore mode --- src/PVE/API2/LXC.pm | 1 + 1 file changed, 1 insertion(+) diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm index 9b92c13..12aaffa 100644 --- a/src/PVE/API2/LXC.pm +++ b/src/PVE/API2/LXC.pm @@ -360,6

[pve-devel] [PATCH container 2/2] restore: add permission check

2016-09-20 Thread Fabian Grünbichler
we should probably check the current user just like when bind/dev mountpoints are passed as regular parameters. --- src/PVE/API2/LXC.pm | 2 ++ 1 file changed, 2 insertions(+) diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm index 12aaffa..83afd56 100644 --- a/src/PVE/API2/LXC.pm +++ b/src/

Re: [pve-devel] intel pstate: wrong cpu frequency with performance governor

2016-09-20 Thread Alexandre DERUMIER
>>Can you please verify this? I'll try (it's on my production cluster, so it could take some time).

Re: [pve-devel] intel pstate: wrong cpu frequency with performance governor

2016-09-20 Thread Dietmar Maurer
> I'm seeing a lot fewer retransmits since I have disabled pstate. > (but maybe this is because of the bug where the frequency was stuck low, and I also have a big cluster with a lot of VMs) Can you please verify this?

Re: [pve-devel] intel pstate: wrong cpu frequency with performance governor

2016-09-20 Thread Alexandre DERUMIER
>>What is the suggestion - disable the pstate driver? I think that would increase power consumption? >>Yes, it increases power consumption, but it improves stability & latencies. Note that disabling pstate doesn't disable dynamic frequencies; we still have the classic "cpufreq" governors. With
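Whichever driver ends up active, the kernel's cpufreq sysfs files show what is actually happening per core. A short Perl sketch (standard cpufreq sysfs paths, nothing PVE-specific assumed) that can help spot a frequency stuck low:

use strict;
use warnings;

# print driver, governor and current frequency for every core
sub read_sysfs {
    my ($path) = @_;
    open(my $fh, '<', $path) or return 'n/a';
    my $val = <$fh>;
    close($fh);
    chomp($val) if defined($val);
    return $val // 'n/a';
}

for my $dir (glob('/sys/devices/system/cpu/cpu[0-9]*/cpufreq')) {
    my ($cpu) = $dir =~ m{/(cpu\d+)/};
    printf "%-6s driver=%-14s governor=%-12s cur=%s kHz\n",
        $cpu,
        read_sysfs("$dir/scaling_driver"),
        read_sysfs("$dir/scaling_governor"),
        read_sysfs("$dir/scaling_cur_freq");
}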

Re: [pve-devel] intel pstate: wrong cpu frequency with performance governor

2016-09-20 Thread Alexandre DERUMIER
>>What is the suggestion - disable the pstate driver? I think that would increase power consumption? Yes, it increases power consumption, but it improves stability & latencies. My old Xeons (pre-Sandy Bridge) and AMD CPUs are always at maximum cpu frequency. Red Hat has a special daemon

Re: [pve-devel] [PATCH cluster] allow also deleting a node by name when ringX_addr is a IP

2016-09-20 Thread Dietmar Maurer
This is not necessary (see corosync_nodelist) > diff --git a/data/PVE/CLI/pvecm.pm b/data/PVE/CLI/pvecm.pm > index b26a1ec..b72fb00 100755 > --- a/data/PVE/CLI/pvecm.pm > +++ b/data/PVE/CLI/pvecm.pm > @@ -414,9 +414,11 @@ __PACKAGE__->register_method ({ > > foreach my $tmp_node (keys %$nod

Re: [pve-devel] [PATCH cluster] allow also deleting a node by name when ringX_addr is a IP

2016-09-20 Thread Thomas Lamprecht
On 09/20/2016 10:42 AM, Fabian Grünbichler wrote: On Tue, Sep 20, 2016 at 10:17:44AM +0200, Thomas Lamprecht wrote: This is an additional (convenience) fix for the delnode param. As we always have the node name in our config - either the 'name' (preferred) or the 'ring0_addr' property of a nod

Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-20 Thread Alexandre DERUMIER
>>I guess we need more. For example, we can exactly identify a VM/CT/Storage/Network by prefixing it with the cluster name: cluster1/vm/100, cluster2/ct/100, cluster1/storage/local, cluster2/network/vmbr. But we need a way to tell if resources are considered to be equal (cluster wi

Re: [pve-devel] [PATCH cluster] allow also deleting a node by name when ringX_addr is a IP

2016-09-20 Thread Fabian Grünbichler
On Tue, Sep 20, 2016 at 10:17:44AM +0200, Thomas Lamprecht wrote: > This is an additional (convenience) fix for the delnode param. As we always have the node name in our config - either the 'name' (preferred) or the 'ring0_addr' property of a node entry in the corosync.conf holds it - allow a

[pve-devel] [PATCH cluster] allow also deleting a node by name when ringX_addr is a IP

2016-09-20 Thread Thomas Lamprecht
This is an additional (convenience) fix for the delnode param. As we always have the node name in our config - either the 'name' (preferred) or the 'ring0_addr' property of a node entry in corosync.conf holds it - also allow deleting by it if the ringX_addr is set to an IP, else this may be con
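A hedged sketch of the matching this patch describes: accepting the node name for delnode even when ring0_addr carries an IP, by checking both properties of each nodelist entry. The data layout below is invented for illustration and is not the actual pvecm code.

use strict;
use warnings;

# illustration only: locate a corosync nodelist entry by name or ring0_addr
sub find_node_entry {
    my ($nodelist, $wanted) = @_;
    for my $key (sort keys %$nodelist) {
        my $node = $nodelist->{$key};
        return $key if defined($node->{name}) && $node->{name} eq $wanted;
        return $key if defined($node->{ring0_addr}) && $node->{ring0_addr} eq $wanted;
    }
    return undef;
}

my $nodelist = {
    node1 => { name => 'pve1', ring0_addr => '192.0.2.11' },
    node2 => { name => 'pve2', ring0_addr => '192.0.2.12' },
};

# deleting by name works even though ring0_addr holds an IP
my $entry = find_node_entry($nodelist, 'pve2');
die "no such node\n" if !defined($entry);
print "would remove nodelist entry '$entry'\n";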

Re: [pve-devel] intel pstate: wrong cpu frequency with performance governor

2016-09-20 Thread Dietmar Maurer
> I think this is the normal behavior of pstate, as they are a lower limit. > But for virtualisation, I think it's really bad to have a changing frequency (clock problems, for example). What is the suggestion - disable the pstate driver? I think that would increase power consumption? > Also fo

Re: [pve-devel] [PATCHV2 pve-cluster 1/2] Fix #1093: allow also delete node by IP

2016-09-20 Thread Dietmar Maurer
applied both patches with cleanups

[pve-devel] applied: [PATCH pve-qemu-kvm] fix Bug #615 Windows guests suddenly hangs after couple times of migration

2016-09-20 Thread Fabian Grünbichler
applied with cleanup On Mon, Sep 19, 2016 at 01:59:29PM +0200, Wolfgang Link wrote: > From: "Dr. David Alan Gilbert" > Load the LAPIC state during post_load (rather than when the CPU starts). > This allows an interrupt to be delivered from the ioapic to the lapic prior to cpu loading, i

[pve-devel] applied: [PATCH kvm] various CVE fixes

2016-09-20 Thread Fabian Grünbichler
applied On Mon, Sep 19, 2016 at 09:58:14AM +0200, Fabian Grünbichler wrote: > CVE-2016-7170: vmsvga: correct bitmap and pixmap size checks > CVE-2016-7421: scsi: pvscsi: limit process IO loop to ring size > CVE-2016-7423: scsi: mptsas: use g_new0 to allocate MPTSASRequest object > --- > ...vga-co