Can't we simply display the changed/new config, and return a 'diff' to show
what is changed?
Basically we do the same thing with network configuration.
What do you think?
For pending update:
[CONF]
virtio0:
[PENDING]
virtio0:
display the value virtio0: old value (pending: new value)
but in this case,
I don't know how to display:
[CONF]
virtio0:oldvalue
[PENDING]
virtio0: newvalue
delete: virtio0
virtio0: old value (pending: delete - new value) ?
This will never happen - my current implementation avoids that.
Can't we simply display the changed/new config, and return a 'diff' to show
what is changed.
mmm, I really don't like this idea. I think it'll confuse users.
I think we should display the running config.
For example if a user hot-unplugs a disk and it fails, we should keep
displaying it in
For pending update:
[CONF]
virtio0:
[PENDING]
virtio0:
display the value virtio0: old value (pending: new value)
OK
For pending add:
[CONF]
[PENDING]
virtio0:
display the value virtio0: (pending: new value)
OK
for pending delete:
[CONF]
virtio0:
[PENDING]
delete:
OK, I guess you are right.
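For reference, the three display rules agreed above (update, add, delete) could be sketched like this. This is a Python sketch of the intended display behaviour, not the actual Perl implementation; the dict shapes are illustrative:

```python
def render_pending(conf, pending, deletes):
    """Render a config with pending changes, one 'key: value' line per key."""
    lines = []
    for key in sorted(set(conf) | set(pending) | set(deletes)):
        if key in deletes:                    # pending delete
            lines.append(f"{key}: {conf[key]} (pending: delete)")
        elif key in conf and key in pending:  # pending update
            lines.append(f"{key}: {conf[key]} (pending: {pending[key]})")
        elif key in pending:                  # pending add
            lines.append(f"{key}: (pending: {pending[key]})")
        else:                                 # unchanged value
            lines.append(f"{key}: {conf[key]}")
    return lines

for line in render_pending({"virtio0": "old value"}, {"virtio0": "new value"}, set()):
    print(line)
# prints: virtio0: old value (pending: new value)
```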
Ok, great!
I think I'll wait for your patches before beginning to work on the GUI, to be sure
to have something good.
- Original Mail -
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Monday 17
Signed-off-by: Wolfgang Link wolfg...@linksystems.org
---
PVE/QemuServer.pm |7 +++
1 file changed, 7 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 02bf404..26c6c76 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2588,6 +2588,13 @@ sub
I'm currently looking to implement live migration + storage migration,
for this,
I need to call
API2::Storage::Content::create
from node1 (source) to node2 (target)
and get the created volid value.
Currently, for live migration, we always use ssh and send qm ... commands
to the target node.
So, do we need to implement a new command, something like pvestorage
create, mapped to the storage APIs?
Oh, sorry, I never noticed that we already have a pvesm command :)
- Original Mail -
From: Alexandre DERUMIER aderum...@odiso.com
To: Dietmar Maurer diet...@proxmox.com
Cc:
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
PVE/QemuServer.pm |6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0241dc0..db46691 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1782,7 +1782,7 @@ sub
Based on Alexandre's patches (qemu-server : implement pending changes v2)
Changes:
- I tried to simplify things by always writing changes into the pending
section first.
- do not parse the 'delete' option in parse_vm_config
Todo: implement hotplug disk/net
Alexandre Derumier (1):
parse_vm_config :
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
PVE/API2/Qemu.pm | 191 ++
1 file changed, 119 insertions(+), 72 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index f23452d..bfd3e2c 100644
--- a/PVE/API2/Qemu.pm
+++
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
PVE/QemuServer.pm | 47 ++-
1 file changed, 46 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2dd4558..fb3f471 100644
--- a/PVE/QemuServer.pm
+++
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
PVE/API2/Qemu.pm | 99 --
1 file changed, 97 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index a0fcd28..f23452d 100644
--- a/PVE/API2/Qemu.pm
+++
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
PVE/API2/Qemu.pm |5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index a3dbb06..a1f0f41 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -945,10 +945,9 @@ my
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
PVE/QemuServer.pm | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index db46691..a83c971 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1792,22 +1792,24 @@
I move related helper methods into PVE::QemuServer.
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
PVE/API2/Qemu.pm | 111 -
PVE/QemuServer.pm | 103 +
2 files changed, 111
From: Alexandre Derumier aderum...@odiso.com
example:
[PENDING]
virtio1:...
delete:net0,net1
$conf->{pending}->{virtio1}
$conf->{pending}->{del}->{net0}
$conf->{pending}->{del}->{net1}
Signed-off-by: Alexandre Derumier aderum...@odiso.com
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
example:
$conf->{pending}->{virtio1}
$conf->{pending}->{delete} = net0,net1
[PENDING]
virtio1: ...
delete: net0,net1
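A minimal sketch of how such a [PENDING] section maps to that structure. Python for illustration only; the real parsing is Perl code in parse_vm_config, and the helper name here is made up:

```python
def parse_pending_section(lines):
    """Parse 'key: value' lines from a [PENDING] section; the special
    'delete' key carries a comma-separated list of keys to remove."""
    pending = {}
    for raw in lines:
        line = raw.strip()
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        pending[key.strip()] = value.strip()
    # expose deletions as a set for convenient lookup
    deletes = set(filter(None, pending.get("delete", "").split(",")))
    return pending, deletes

pending, deletes = parse_pending_section(["virtio1: some-volid", "delete: net0,net1"])
# deletes == {"net0", "net1"}
```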
Signed-off-by: Alexandre Derumier aderum...@odiso.com
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
PVE/QemuServer.pm | 32 +---
1 file
I think I'll wait for your patches before beginning to work on the GUI, to be sure
to have something good.
I just sent what I have so far to the list. Please can you do a short review?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
Thanks,
I'll review them this afternoon
- Original Mail -
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Monday 17 November 2014 11:01:58
Subject: RE: [pve-devel] qemu-server : implement pending changes v2
Thanks,
I'll review them this afternoon
The question is how hard it is to implement disk/network hotplug on top of this?
applied, thanks!
-Original Message-
From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of
Wolfgang Link
Sent: Monday, 17 November 2014 09:53
To: pve-devel@pve.proxmox.com
Cc: Wolfgang Link
Subject: [pve-devel] [PATCH] Add Check: If host has enough real CPUs for
The question is how hard it is to implement disk/network hotplug on top of
this?
I'll try to make a test hotplug patch to see if it's working fine
Great, thanks!
Maybe you need some help? :)
On Mon, Nov 17, 2014 at 10:13 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:
So, do we need to implement a new command, something like pvestorage
create, mapped to the storage APIs?
Oh, sorry, I never noticed that we already have a pvesm command :)
I need to do some work with the nbd server first (to migrate unused disks and also
offline VMs)
and I'll try to submit some patches next week.
But yes, help will be welcome :)
- Original Mail -
From: Kamil Trzciński ayu...@ayufan.eu
To: Alexandre DERUMIER aderum...@odiso.com
Cc: Dietmar
Great, thanks!
Ok, I can hotplug|unplug disk,nic,cpu,tablet with some small rework of my
previous patches.
I'll send a patch at the end of the day.
The only thing I can't manage currently with the new code is drive swap (updating
an existing disk volid with another one).
- Original Mail
add check if START Parameter is set in FILE: /etc/default/pve-manager
If START=no, no VM will start when pve-manager start is called
If START!=no or not present, VMs will use the boot_at_start flag
Signed-off-by: Wolfgang Link wolfg...@linksystems.org
---
bin/init.d/pve-manager |4
1 file
This applies on top of Dietmar's patches
All seems to work fine (including disk swap)
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/API2/Qemu.pm | 44 ++-
PVE/QemuServer.pm | 86 -
pve-bridge|4 +++
3 files changed, 91 insertions(+), 43 deletions(-)
diff --git
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm | 13 +
1 file changed, 13 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b948de8..27e6957 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3524,6 +3524,19 @@ sub
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm |8
1 file changed, 8 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 27e6957..2542059 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3538,8 +3538,16 @@ sub
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/API2/Qemu.pm | 83 ++--
PVE/QemuServer.pm | 99 +++--
2 files changed, 99 insertions(+), 83 deletions(-)
diff --git a/PVE/API2/Qemu.pm
if cpu hotplug is not possible,
we simply return to keep the changes in pending
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm | 27 +--
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
Great, thanks!
Ok, I can hotplug|unplug disk,nic,cpu,tablet with some small rework of my
previous patches.
I'll send a patch at the end of the day.
The only thing I can't manage currently with the new code is drive swap (updating
an existing disk volid with another one).
What is the
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 141a21c..b948de8
100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3521,12 +3521,18 @@ sub vmconfig_hotplug_pending {
$conf = load_config($vmid); # update/reload
}
-return if !$conf->{hotplug};
+#return
is this intentional?
Yes, that's why I have added a comment.
We can change some values (disk throttle, disk backups, nic vlan, ...) even if
we don't have hotplug enabled.
- Original Mail -
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre Derumier aderum...@odiso.com,
is this intentional?
Yes, that's why I have added a comment.
We can change some values (disk throttle, disk backups, nic vlan, ...) even if we
don't have hotplug enabled.
Yes, like balloon. My idea was to do those things before this line (see
balloon).
diff --git a/pve-bridge b/pve-bridge
index d6c5eb8..caee33b 100755
--- a/pve-bridge
+++ b/pve-bridge
@@ -20,6 +20,10 @@ my $migratedfrom = $ENV{PVE_MIGRATED_FROM};
my $conf = PVE::QemuServer::load_config($vmid, $migratedfrom);
+if ($conf->{pending}->{$netid}){
+$conf =
This also looks problematic. What if someone sets hotplug=0? This also uses wrong
values if the VM is migrated?
I manage this inside deviceplug|unplug.
(I remove the pending device at the end)
sub vm_deviceplug {
return 1 if !$conf->{hotplug};
...
#delete pending device after hotplug
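The flow described here, keep a change pending when hotplug is disabled or fails, and drop the pending entry only once the device change succeeded, might be sketched like this. This is a Python sketch with the device operation stubbed out; vm_deviceplug itself is Perl and the function and key names here are illustrative:

```python
def apply_pending(conf, try_hotplug):
    """Apply pending changes; entries stay in conf['pending'] when the
    hotplug attempt is skipped or fails (they apply on next restart)."""
    pending = conf.setdefault("pending", {})
    applied = []
    for key in [k for k in pending if k != "delete"]:
        if conf.get("hotplug") != "1":
            continue                  # like: return 1 if !$conf->{hotplug};
        if not try_hotplug(key, pending[key]):
            continue                  # hotplug failed, keep change pending
        conf[key] = pending.pop(key)  # delete pending entry after hotplug
        applied.append(key)
    return applied
```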
What if someone wants:
hotplug: 0
Then you want to keep changes in [PENDING], without applying them.
Yes, exactly. That's the way my patches are working.
hot-unplug
--
[CONF]
virtio0:
hotplug:0
qm set 110 -delete virtio0
[CONF]
virtio0:
hotplug:0
[PENDING]
delete: virtio0
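The hot-unplug example above, where the deletion lands in [PENDING] because hotplug is off, could be mimicked like this. Python sketch only; the function name and dict layout are illustrative and not the actual qm implementation:

```python
def request_delete(conf, key):
    """Record 'qm set <vmid> -delete <key>'; with hotplug disabled the
    deletion only lands in the pending section, the device stays in conf."""
    pending = conf.setdefault("pending", {})
    deletes = set(filter(None, pending.get("delete", "").split(",")))
    deletes.add(key)
    pending["delete"] = ",".join(sorted(deletes))
    return conf

conf = {"virtio0": "some-volid", "hotplug": "0"}
request_delete(conf, "virtio0")
# conf still contains virtio0; conf["pending"]["delete"] == "virtio0"
```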
Note: you always use the value from [PENDING] inside pve-bridge
if ($conf->{pending}->{$netid}){
$conf = $conf->{pending};
}
So, yes, I always use the pending conf if it exists,
in case we want to hotplug it.
But, on vm start, I think the pending conf is already removed, right?
- Mail
Note: you always use the value from [PENDING] inside pve-bridge
if ($conf->{pending}->{$netid}){
$conf = $conf->{pending};
}
So, yes, I always use the pending conf if it exists, in case we want to
hotplug it.
But, on vm start, I think the pending conf is already removed, right?
Yes, right. But only if the vm is not migrated. So I guess you need to
prevent that
if $migratedfrom is set?
Oh, yes, indeed, you are right, I totally missed this case.
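The guard being discussed could look roughly like this. Python sketch; migratedfrom mirrors the $migratedfrom variable that pve-bridge reads from the PVE_MIGRATED_FROM environment, and the helper name is made up:

```python
def nic_conf_for_bridge(conf, netid, migratedfrom=None):
    """Prefer the pending nic value so a fresh start picks up the change,
    but never during an incoming migration, where the running config from
    the source node must be reproduced exactly."""
    pending = conf.get("pending", {})
    if migratedfrom is None and netid in pending:
        return pending[netid]
    return conf.get(netid)

conf = {"net0": "old-bridge", "pending": {"net0": "new-bridge"}}
# fresh start -> pending value; incoming migration -> running value
```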
- Original Mail -
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc:
Yes, right. But only if the vm is not migrated. So I guess you need to
prevent that if $migratedfrom is set?
Oh, yes, indeed, you are right, I totally missed this case.
Ok, I will try to resend the whole series including your work tomorrow.
___
Hi
I compiled the pve-kernel-2.6.32-34-pve kernel to obtain support for DRBD
8.4.5, and I get these messages:
root@PVE2:~/drbd# module-assistant auto-install drbd8
Updated infos about 1 packages
Getting source for kernel version: 2.6.32-34-pve
Kernel headers available in
applied
add check if START Parameter is set in FILE: /etc/default/pve-manager
If START=no, no VM will start when pve-manager start is called
If START!=no or not present, VMs will use the boot_at_start flag