>>Forgot to mention that consul supports multiple clusters and/or multi-center
>>clusters out of the box.
yes, I read the doc yesterday; it seems very interesting.
The most work would be to replace pmxcfs with the consul KV store. I have seen
some consul FUSE fs implementations,
but they don't have all of pmxcfs' features.
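For reference, a minimal sketch of talking to the consul KV HTTP API from
Perl, assuming a local agent on 127.0.0.1:8500 (the key name under pve/ is
made up for illustration):

#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Tiny;
use JSON::PP qw(decode_json);
use MIME::Base64 qw(decode_base64);

my $http = HTTP::Tiny->new;
my $base = 'http://127.0.0.1:8500/v1/kv';

# write a key (consul answers "true" on success)
my $res = $http->put("$base/pve/nodes/node1/status", { content => 'online' });
die "put failed: $res->{status}\n" unless $res->{success};

# read it back; consul returns a JSON array with base64-encoded values
$res = $http->get("$base/pve/nodes/node1/status");
die "get failed: $res->{status}\n" unless $res->{success};
my $entries = decode_json($res->{content});
print decode_base64($entries->[0]->{Value}), "\n";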
On Wed, 21 Sep 2016 01:45:18 +0200
Michael Rasmussen wrote:
> https://github.com/hashicorp/consul
>
Forgot to mention that consul supports multiple clusters and/or multi-center
clusters out of the box.
--
Hilsen/Regards
Michael Rasmussen
I planned to try gfs2 on a couple of my clusters. It will not work without
corosync at all, because corosync is needed for DLM.
21.09.2016 2:25, Alexandre DERUMIER wrote:
> Another question about my first idea (replace corosync),
>
> is it really difficult to replace corosync with something else?
>
> Sheepdog storage, for example, has support for corosync and zookeeper.
About corosync scaling,
I found a discussion about an implementation of satellite nodes:
http://discuss.corosync.narkive.com/Uh97uGyd/rfc-extending-corosync-to-high-node-counts
https://chrissie.fedorapeople.org/corosync-remote.txt
https://github.com/chrissie-c/corosync/tree/topic-remote
- Original Message -
On Wed, 21 Sep 2016 01:25:24 +0200 (CEST)
Alexandre DERUMIER wrote:
> Another question about my first idea (replace corosync),
>
> is it really difficult to replace corosync with something else?
>
> Sheepdog storage, for example, has support for corosync and zookeeper.
>
There is also:
https://
Another question about my first idea (replace corosync),
is it really difficult to replace corosync with something else?
Sheepdog storage, for example, has support for corosync and zookeeper.
- Original Message -
From: "datanom.net"
To: "pve-devel"
Sent: Wednesday 21 September 2016 01:08:4
On Wed, 21 Sep 2016 00:56:06 +0200 (CEST)
Alexandre DERUMIER wrote:
>>vmids are unique within a single cluster, but not across several clusters...
>
> yes, that's why I proposed to add some kind of optional prefix to the vmid,
> which could be defined in each cluster.
>
Instead of a literal prefix you
>>vmids are unique within a single cluster, but not across several clusters...
yes, that's why I proposed to add some kind of optional prefix to the vmid,
which could be defined in each cluster.
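Just to illustrate the idea, a small sketch of what such an optional
per-cluster prefix could look like (the separator and helper names are
hypothetical, not existing PVE code):

use strict;
use warnings;

# hypothetical: build a globally unique guest ID from a per-cluster prefix
sub make_global_vmid {
    my ($cluster_prefix, $vmid) = @_;
    die "bad vmid '$vmid'\n" if $vmid !~ /^\d+$/;
    return defined($cluster_prefix) ? "$cluster_prefix-$vmid" : $vmid;
}

# split it back into (prefix or undef, local vmid)
sub parse_global_vmid {
    my ($id) = @_;
    return ($1, $2) if $id =~ /^(?:([a-z][a-z0-9]*)-)?(\d+)$/;
    die "unable to parse guest ID '$id'\n";
}

print make_global_vmid('cluster1', 100), "\n"; # cluster1-100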
- Original Message -
From: "dietmar"
To: "aderumier" , "pve-devel"
Cc: "Thomas Lamprecht"
Sent: Tuesday 20 S
>>But shared storage between different clusters is a problem, because our
>>locking mechanism only works inside a cluster. So there must be a single
>>cluster which does all allocation for a specific storage??
But if we have a unique id (a uuid, for example), it shouldn't be a problem? no
need to cro
> The vm disks also need to be unique. (they use vmid, so if vmid is unique it's
> ok)
vmids are unique within a single cluster, but not across several clusters...
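If the ids were globally unique (a uuid, as proposed above), the clash would
go away. A rough sketch in plain Perl - note rand() is not cryptographically
strong, so this is illustration only, and the volume name is hypothetical:

use strict;
use warnings;

# random version-4-style UUID (illustration only)
sub gen_uuid_v4 {
    my @b = map { int(rand(256)) } 1 .. 16;
    $b[6] = ($b[6] & 0x0f) | 0x40; # version 4
    $b[8] = ($b[8] & 0x3f) | 0x80; # RFC 4122 variant
    return sprintf('%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-' .
        '%02x%02x%02x%02x%02x%02x', @b);
}

# hypothetical volume name that stays unique across clusters
print 'vm-' . gen_uuid_v4() . "-disk-1\n";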
___
> On September 20, 2016 at 10:45 AM Alexandre DERUMIER
> wrote:
>
>
> >>I guess we need more. For example, we can exactly identify
> >>a VM/CT/Storage/Network by prefixing it with the cluster name:
> >>
> >>cluster1/vm/100
> >>cluster2/ct/100
> >>cluster1/storage/local
> >>cluster2/network/vmbr
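A quick sketch of parsing identifiers of that "cluster/type/name" form in
Perl (hypothetical helper, not existing PVE code):

use strict;
use warnings;

# parse "cluster/type/name" identifiers like the examples above
sub parse_resource_id {
    my ($id) = @_;
    my ($cluster, $type, $name) = split m{/}, $id, 3;
    die "malformed resource id '$id'\n"
        if !defined($name) || $type !~ /^(?:vm|ct|storage|network)$/;
    return { cluster => $cluster, type => $type, name => $name };
}

my $res = parse_resource_id('cluster1/vm/100');
print "$res->{type} $res->{name} on cluster $res->{cluster}\n";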
applied, but I wonder if there is some predefined HTML style for that
instead of hardcoding a color?
> + var EFIHint = Ext.createWidget({
> + xtype: 'displayfield', //submitValue is false, so we don't get
> submitted
> + fieldStyle: 'background-color: LightYellow;',
___
applied
___
applied
___
do not display an obtrusive popup on top of a modal window,
but display a hint under the combo box
---
www/manager6/qemu/QemuBiosEdit.js | 43 +++
1 file changed, 39 insertions(+), 4 deletions(-)
diff --git a/www/manager6/qemu/QemuBiosEdit.js
b/www/manager6/q
The SCSI HW type handling will be done in the Wizard class, so it is removed from here.
---
www/manager6/qemu/HDEdit.js | 4 ----
1 file changed, 4 deletions(-)
diff --git a/www/manager6/qemu/HDEdit.js b/www/manager6/qemu/HDEdit.js
index 312b218..c86ab44 100644
--- a/www/manager6/qemu/HDEdit.js
+++ b/www
instead of setting virtio-scsi for all newly created VMs, pass the
OS Optimal SCSI Controller to pveQemuCreateWizard which will
add it as a hidden parameter just before POSTing the wizard data
to the API.
---
www/manager6/qemu/CreateWizard.js | 5 +
www/manager6/qemu/OSDefaults.js | 9 +
applied
___
applied
___
applied
___
since these can only be added as root to existing containers,
and might be dangerous.
---
src/PVE/API2/LXC.pm | 2 +-
src/PVE/LXC/Create.pm | 12 +++-
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 83afd56..15ebb87 100644
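The check boils down to something like the following sketch;
classify_mountpoint mirrors how pve-container tells the types apart, and
$authuser stands in for the value taken from the RPC environment:

use strict;
use warnings;

# an absolute path is a bind mount, /dev/... a device, anything else a volume
sub classify_mountpoint {
    my ($volid) = @_;
    if ($volid =~ m!^/!) {
        return $volid =~ m!^/dev/! ? 'device' : 'bind';
    }
    return 'volume';
}

my $authuser = 'root@pam'; # would come from the RPC environment
my $type = classify_mountpoint('/mnt/data');
die "only root can add bind or device mount points\n"
    if $type ne 'volume' && $authuser ne 'root@pam';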
adding non-volume mps was supposed to be delayed in "simple"
(storage-only) restore mode
---
src/PVE/API2/LXC.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 9b92c13..12aaffa 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -360,6
we should probably check the current user just like when
bind/dev mountpoints are passed as regular parameters.
---
src/PVE/API2/LXC.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 12aaffa..83afd56 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/
>>Please can you verify this?
I'll try (it's on my production cluster, so it could take some time).
- Original Message -
From: "dietmar"
To: "aderumier"
Cc: "pve-devel"
Sent: Tuesday 20 September 2016 11:30:35
Subject: Re: [pve-devel] intel pstate: wrong cpu frequency with performance
governor
> I'm seeing a lot fewer retransmits since I disabled pstate.
> (but maybe this is because of the bug where the frequency was stuck low, and
> I also have a big cluster with a lot of vms)
Please can you verify this?
___
>>What is the suggestion - disable the pstate driver? I think that would
>>increase
>>power consumption?
>>yes, it does increase power consumption, but it increases stability & latencies.
Note that disabling pstate doesn't disable dynamic frequencies;
we still have the classic "cpufreq" governors.
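For example, on a Debian-based node the driver can be turned off at boot
(assuming GRUB); with intel_pstate=disable the kernel falls back to
acpi-cpufreq and the usual governors:

# in /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=disable"

Then run update-grub and reboot; the governor can be chosen again via
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor.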
>>What is the suggestion - disable the pstate driver? I think that would
>>increase
>>power consumption?
yes, it does increase power consumption, but it increases stability & latencies.
My old Xeons (pre-Sandy Bridge) and AMD boxes are always at maximum cpu
frequency.
Red Hat has a special daemon
This is not necessary (see corosync_nodelist)
> diff --git a/data/PVE/CLI/pvecm.pm b/data/PVE/CLI/pvecm.pm
> index b26a1ec..b72fb00 100755
> --- a/data/PVE/CLI/pvecm.pm
> +++ b/data/PVE/CLI/pvecm.pm
> @@ -414,9 +414,11 @@ __PACKAGE__->register_method ({
>
> foreach my $tmp_node (keys %$nod
On 09/20/2016 10:42 AM, Fabian Grünbichler wrote:
On Tue, Sep 20, 2016 at 10:17:44AM +0200, Thomas Lamprecht wrote:
This is an additional (convenience) fix for the delnode param.
As we always have the node name in our config - either the 'name'
(preferred) or the 'ring0_addr' property of a nod
>>I guess we need more. For example, we can exactly identify
>>a VM/CT/Storage/Network by prefixing it with the cluster name:
>>
>>cluster1/vm/100
>>cluster2/ct/100
>>cluster1/storage/local
>>cluster2/network/vmbr
>>
>>But we need a way to tell if resources are considered
>>to be equal (cluster wi
On Tue, Sep 20, 2016 at 10:17:44AM +0200, Thomas Lamprecht wrote:
> This is an additional (convenience) fix for the delnode param.
> As we always have the node name in our config - either the 'name'
> (preferred) or the 'ring0_addr' property of a node entry in the
> corosync.conf holds it - allow a
This is an additional (convenience) fix for the delnode param.
As we always have the node name in our config - either the 'name'
(preferred) or the 'ring0_addr' property of a node entry in the
corosync.conf holds it - also allow deleting by it if the ringX_addr
is set to an IP, else this may be con
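The lookup this describes could look like the following sketch, assuming a
nodelist hash parsed from corosync.conf (the helper name is made up):

use strict;
use warnings;

# find a corosync.conf node entry by 'name' or by 'ring0_addr'
sub find_node_entry {
    my ($nodelist, $arg) = @_;
    foreach my $node (values %$nodelist) {
        return $node if ($node->{name} // '') eq $arg
            || ($node->{ring0_addr} // '') eq $arg;
    }
    return undef;
}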
> I think this is the normal behavior of pstate, as they are a lower limit.
>
> But for virtualisation, I think it's really bad to have changing frequencies
> (clock problems, for example).
What is the suggestion - disable the pstate driver? I think that would increase
power consumption?
> Also fo
applied both patches with cleanups
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
applied with cleanup
On Mon, Sep 19, 2016 at 01:59:29PM +0200, Wolfgang Link wrote:
> From: "Dr. David Alan Gilbert"
>
> Load the LAPIC state during post_load (rather than when the CPU
> starts).
>
> This allows an interrupt to be delivered from the ioapic to
> the lapic prior to cpu loading, i
applied
On Mon, Sep 19, 2016 at 09:58:14AM +0200, Fabian Grünbichler wrote:
> CVE-2016-7170: vmsvga: correct bitmap and pixmap size checks
> CVE-2016-7421: scsi: pvscsi: limit process IO loop to ring size
> CVE-2016-7423: scsi: mptsas: use g_new0 to allocate MPTSASRequest object
> ---
> ...vga-co