>>Is it possible that a change to a non-hotpluggable value is also migrated with
>>the VM?
A non-hotpluggable value stays in pending until restart, so when a VM is migrated
it is kept in pending.
Currently the behaviour is bad: if you add a disk without hotplug, it is
registered directly in the vm config file.
The pending changes are added to the configuration file, so the changes
would likely be migrated, but they wouldn't be applied until the normal
process for applying them runs. The benefit here is that they aren't
immediately applied to the VM's settings, the way they are without
this patch.
Hi
I have a question, please see below
- Original Message -
From: "Alexandre DERUMIER"
To: "Stanislav German-Evtushenko"
Cc:
Sent: Thursday, October 30, 2014 1:32 PM
Subject: Re: [pve-devel] qemu-server : implement pending changes v2
>>Some questions regarding this patch:
>>- What happens when we do online migration with pending changes?
Pending changes are not applied.
(Currently, if you change a non-hotpluggable value and then do a live
migration, you have a good chance of a crash.)
>>- What happens when we do online migration
> Date: Thu, 30 Oct 2014 13:40:21 +0100
> From: Alexandre Derumier
> To: pve-devel@pve.proxmox.com
> Subject: [pve-devel] qemu-server : implement pending changes v2
> Message-ID: <1414672833-17829-1-git-send-email-aderum...@odiso.com>
>
> Hi,
>
> I have redone my patch with Dietmar's recommendations,
> Alexandre, what about your IPv6 and brfilter patches? I think they can be
> integrated into pvetest?
OK, I started working on that...
_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
1) if the vm is not running, we apply pending changes directly
2) for disks, we remove the unused disk on successful hotplug
[CONF]
unused0: vm-disk-100-2.raw
[PENDING]
virtio0: vm-disk-100-2.raw
->
[CONF]
[PENDING]
virtio0: vm-disk-100-2.raw
3) for
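Step 2 above amounts to dropping any unusedN entry whose volume matches the pending disk that was just hotplugged. A minimal Python sketch (the actual implementation is Perl in PVE/QemuServer.pm; the function name and dict layout here are mine):

```python
def remove_matching_unused(conf, pending_key):
    """After a successful hotplug of a pending disk, drop any
    unusedN entry in the conf that points at the same volume."""
    volume = conf["pending"][pending_key]
    for key in [k for k in conf if k.startswith("unused")]:
        if conf[key] == volume:
            del conf[key]

conf = {"unused0": "vm-disk-100-2.raw",
        "pending": {"virtio0": "vm-disk-100-2.raw"}}
remove_matching_unused(conf, "virtio0")
# unused0 is gone; virtio0 stays in pending until it is applied to the conf
```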
1) if a previous device exists:
if the disk is different, we unplug it first
[CONF]
virtio0: vm-disk-100-1.raw
unused0: vm-disk-100-2.raw
#qm set 100 -virtio0 vm-disk-100-2.raw
[CONF]
unused0: vm-disk-100-2.raw
unused1: vm-disk-100-1.raw
[PENDING]
on vm update
we create the disk and always set it as unused after create,
and we also register it in the device pending changes
[CONF]
#qm set 110 -virtio0 local:1
[CONF]
unused0: vm-disk-100-1.raw
[PENDING]
virtio0: vm-disk-100-1.raw
on vm create
we simply register the
1) if a previous nic exists,
and any non-hotpluggable options (model, mac, queues) differ, we try to unplug it
first.
[CONF]
net0: e1000,bridge=vmbr0
[PENDING]
net0: virtio,bridge=vmbr0
->
[CONF]
[PENDING]
net0: virtio,bridge=vmbr0
else we a
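The nic check described above can be sketched as follows (a hypothetical Python illustration; the option names and parser are simplified from the Perl net string format, and the helper names are mine):

```python
# Options that cannot be changed while the nic stays plugged in.
NON_HOTPLUG_OPTS = ("model", "macaddr", "queues")

def parse_net(value):
    """'virtio,bridge=vmbr0' -> {'model': 'virtio', 'bridge': 'vmbr0'}"""
    model, *rest = value.split(",")
    opts = {"model": model}
    for item in rest:
        key, _, val = item.partition("=")
        opts[key] = val
    return opts

def needs_unplug(old, new):
    """True if any option that cannot change online differs."""
    a, b = parse_net(old), parse_net(new)
    return any(a.get(k) != b.get(k) for k in NON_HOTPLUG_OPTS)

print(needs_unplug("e1000,bridge=vmbr0", "virtio,bridge=vmbr0"))  # True: model changed
print(needs_unplug("virtio,bridge=vmbr0", "virtio,rate=10"))      # False: online-changeable
```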
if cpu hotplug is not possible,
we simply return, keeping the changes in pending
[CONF]
cores: 1
maxcpus: 16
->
[CONF]
cores: 2
maxcpus: 16
->
[CONF]
cores: 2
maxcpus: 16
[PENDING]
cores: 1
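That rule reduces to a simple branch: apply the cores change online when hotplug can do it, otherwise park it in pending. A minimal Python sketch (hypothetical names; the real logic is Perl in PVE/QemuServer.pm):

```python
def set_cores(conf, new_cores, cpu_hotplug_possible):
    """Apply a cores change online when possible, else keep it pending."""
    if cpu_hotplug_possible:
        conf["cores"] = new_cores
    else:
        conf.setdefault("pending", {})["cores"] = new_cores

conf = {"cores": 2, "maxcpus": 16}
set_cores(conf, 1, cpu_hotplug_possible=False)  # cpu unplug not supported
# conf -> {"cores": 2, "maxcpus": 16, "pending": {"cores": 1}}
```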
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 22 +-
1 file changed, 13 inserti
1) we write delete option in hash
$conf->{pending}->{del}->{$opt} = 1;
[CONF]
name: test
hotplug: 1
net0: ...
qm set -delete name,hotplug,net0
[CONF]
name: test
hotplug: 1
[PENDING]
delete: name,hotplug,net0
2) if the option can be deleted online,
we delete it from the conf and remove the pending delete
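The two steps above can be sketched like this in Python (the real code is Perl writing `$conf->{pending}->{del}->{$opt} = 1;` as shown; the function and parameter names here are mine):

```python
def queue_delete(conf, opts, deletable_online):
    """Register options for deletion; apply online-deletable ones at once."""
    for opt in opts:
        if opt in deletable_online:
            conf.pop(opt, None)                              # delete from conf now
            conf.get("pending", {}).get("del", {}).pop(opt, None)  # drop pending delete
        else:
            conf.setdefault("pending", {}).setdefault("del", {})[opt] = 1

conf = {"name": "test", "hotplug": 1, "net0": "e1000,bridge=vmbr0"}
queue_delete(conf, ["name", "hotplug", "net0"], deletable_online=set())
# nothing deletable online -> all three land in conf["pending"]["del"]
```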
0) no change here for delete
1) we write params into pending changes if the vm is running
if $running
$conf->{pending}->{$opt} = $param->{$opt};
else
$conf->{$opt} = $param->{$opt};
and we also create disks
2) we try to hotplug devices first
3) we try to update online values at the end.
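Step 1 is the core branch of the update path. A minimal Python sketch of it (hypothetical names; the original is the Perl `$conf->{pending}->{$opt} = $param->{$opt};` shown above):

```python
def vm_set_option(conf, opt, value, running):
    """Stage the change if the VM is running, else apply it directly."""
    if running:
        conf.setdefault("pending", {})[opt] = value  # hotplug may apply it later
    else:
        conf[opt] = value

conf_running = {"memory": 512}
vm_set_option(conf_running, "memory", 2048, running=True)   # staged in pending
conf_stopped = {"memory": 512}
vm_set_option(conf_stopped, "memory", 2048, running=False)  # applied directly
# conf_running -> {"memory": 512, "pending": {"memory": 2048}}
# conf_stopped -> {"memory": 2048}
```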
if disk is unused:
we delete it from conf, and also delete it from pending in case of a previous
pending add
[CONF]
unused0: vm-disk-100-1.raw
[PENDING]
virtio0: vm-disk-100-1.raw
#qm set 100 -delete unused0
[CONF]
[PENDING]
else
if vm is running
we register the disk in $conf-
fixme: needs to be implemented (we can call vm_update_api with params)
we apply pending changes on vm_stop,
but also on vm_start if the stop was unclean or done from the guest
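Applying pending at stop/start boils down to processing the delete list and then merging the remaining pending values into the conf. A hedged Python sketch (the real Perl operates on the parsed config hash; names are mine):

```python
def apply_pending(conf):
    """Fold pending changes into the conf: process the delete list,
    then merge the remaining pending values."""
    pending = conf.pop("pending", {})
    for opt in pending.pop("del", {}):
        conf.pop(opt, None)
    conf.update(pending)

conf = {"cores": 2, "net0": "e1000,bridge=vmbr0",
        "pending": {"cores": 1, "del": {"net0": 1}}}
apply_pending(conf)
# conf -> {"cores": 1}
```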
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
for disks, we set them as unused after successful unplug
[CONF]
virtio0: vm-disk-100-1.raw
net0: ...
[CONF]
virtio0: vm-disk-100-1.raw
unused0: vm-disk-100-1.raw
net0: ...
[PENDING]
delete: virtio0,net0
for all devices, after successful unplug,
we delete the device from the conf and remove the pending delete
example:
[PENDING]
virtio1:...
delete: net0,net1
$conf->{pending}->{virtio1}
$conf->{pending}->{del}->{net0}
$conf->{pending}->{del}->{net1}
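The mapping above, from [PENDING] lines to the in-memory hash, can be illustrated with a small Python parser (a sketch only; the real parsing is done by the Perl config reader, and the dict shape mirrors `$conf->{pending}`):

```python
def parse_pending(lines):
    """Map [PENDING] lines to the in-memory shape shown above:
    'delete: a,b' becomes a del sub-hash, everything else a plain key."""
    pending = {}
    for line in lines:
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "delete":
            pending["del"] = {opt.strip(): 1 for opt in value.split(",")}
        else:
            pending[key] = value
    return pending

parse_pending(["virtio1: vm-disk-100-2.raw", "delete: net0,net1"])
# -> {"virtio1": "vm-disk-100-2.raw", "del": {"net0": 1, "net1": 1}}
```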
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
example:
$conf->{pending}->{virtio1}
$conf->{pending}->{del}->{net0}
$conf->{pending}->{del}->{net1}
[PENDING]
virtio1:...
delete: net0,net1
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 33 +
1 file changed, 29 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
Hi,
I have redone my patch with Dietmar's recommendations,
main difference:
- pending deletes are now in a delete list
delete: virtio0,net0
- no more unused disks in the pending conf
I have split my big patch into many smaller ones
(I hope that nothing is missing, but I have redone tests everywhere an
>>AFAIR Alexandre talked about a separate package only containing
>>ebtables-restore and ebtables-save?
Yes, we can provide such a package, or provide them in the pve-firewall package.
In the debian ebtables package, they simply remove ebtables-restore and
ebtables-save in the rules file.