Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Dietmar Maurer
Can't we simply display the changed/new config, and return a 'diff' to show
what has changed?

Basically we do the same thing with network configuration.

What do you think?

> For pending update:
> 
> [CONF]
> virtio0:
> [PENDING]
> virtio0:
> 
> 
> display the value "virtio0: old value (pending: new value)"
> 
> 
> 
> 
> For pending add:
> [CONF]
> [PENDING]
> virtio0:
> 
> display the value "virtio0:(pending: new value)"
> 
> 
> 
> 
> 
> for pending delete:
> 
> [CONF]
> virtio0:
> [PENDING]
> delete: virtio0,
> 
> 
> Here, I'm not sure.
> We could add a new line "pending delete: virtio0,..."
> or display it near the current value:
> "virtio0: old value (pending: delete)"
> 
> But in this case, I don't know how to display:
> 
> [CONF]
> virtio0:oldvalue
> [PENDING]
> virtio0: newvalue
> delete: virtio0
> 
> 
> "virtio0: old value  (pending: delete - new value)"  ?
> 
> 
> 
> 
> 
> 
> 
> 
> - Mail original -
> 
> De: "Dietmar Maurer" 
> À: "Alexandre DERUMIER" , pve-
> de...@pve.proxmox.com
> Envoyé: Lundi 17 Novembre 2014 08:33:40
> Objet: RE: [pve-devel] qemu-server : implement pending changes v2
> 
> Do you already have an idea how to implement the GUI?
> 
> > -Original Message-
> > From: Dietmar Maurer
> > Sent: Montag, 17. November 2014 08:25
> > To: 'Alexandre DERUMIER'
> > Subject: RE: [pve-devel] qemu-server : implement pending changes v2
> >
> > > (Do you think I can already work on gui for this ? )
> >
> > sure.
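The three display rules proposed above can be sketched as one formatter. This is a hypothetical GUI-side helper (`format_pending_value` and its arguments are illustration, not code from the patch series), assuming the pending `delete` list has already been expanded into a hash:

```perl
use strict;
use warnings;

# Format one config key according to the proposed display rules:
#   update: "key: old value (pending: new value)"
#   add:    "key: (pending: new value)"
#   delete: "key: old value (pending: delete)"
sub format_pending_value {
    my ($key, $conf, $pending, $delete_hash) = @_;

    my $old = $conf->{$key};
    my $new = $pending->{$key};

    return "$key: $old (pending: delete)" if $delete_hash->{$key} && defined($old);
    return "$key: $old (pending: $new)"   if defined($old) && defined($new);
    return "$key: (pending: $new)"        if defined($new);
    return "$key: " . ($old // '');       # no pending change
}

my $conf    = { virtio0 => 'oldvalue', net0 => 'e1000' };
my $pending = { virtio0 => 'newvalue', virtio1 => 'added' };
my $delete  = { net0 => 1 };

print format_pending_value('virtio0', $conf, $pending, $delete), "\n"; # update case
print format_pending_value('virtio1', $conf, $pending, $delete), "\n"; # add case
print format_pending_value('net0',    $conf, $pending, $delete), "\n"; # delete case
```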

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Dietmar Maurer
> > but in this case,
> >
> > I don't know how to display
> >
> > [CONF]
> > virtio0:oldvalue
> > [PENDING]
> > virtio0: newvalue
> > delete: virtio0
> >
> >
> > "virtio0: old value  (pending: delete - new value)"  ?

This will never happen - my current implementation avoids that.


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Alexandre DERUMIER
>>Can't we simply display the changed/new config, and return a 'diff' to show
>>what has changed?

Hmm, I really don't like this idea. I think it will confuse users.

I think we should display the running config.

For example, if a user hot-unplugs a disk and it fails, we should keep
displaying it in the config, so the user can retry unplugging it.




Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Dietmar Maurer
> >>Can't we simply display the changed/new config, and return a 'diff' to
> >>show what has changed?
> 
> Hmm, I really don't like this idea. I think it will confuse users.
> 
> I think we should display the running config.
> 
> For example, if a user hot-unplugs a disk and it fails, we should keep
> displaying it in the config, so the user can retry unplugging it.

OK, I guess you are right.


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Dietmar Maurer
> For pending update:
> 
> [CONF]
> virtio0:
> [PENDING]
> virtio0:
> 
> 
> display the value "virtio0: old value (pending: new value)"

OK

> For pending add:
> [CONF]
> [PENDING]
> virtio0:
> 
> display the value "virtio0:(pending: new value)"

OK

> for pending delete:
> 
> [CONF]
> virtio0:
> [PENDING]
> delete: virtio0,
> 
> 
> here,I'm not sure.
> we could add a new line "pending delete: virtio0,...
> or display near the current value
> "virtio0: old value  (pending: delete)"

"virtio0: old value (pending: delete)"
 
> but in this case,
> 
> I don't know how to display
> 
> [CONF]
> virtio0:oldvalue
> [PENDING]
> virtio0: newvalue
> delete: virtio0

This does not happen.


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Alexandre DERUMIER
>>OK, I guess you are right. 

OK, great!

I think I'll wait for your patches before beginning work on the GUI, to be
sure to have something good.




[pve-devel] [PATCH] Add check: ensure the host has enough real CPUs to start the VM, to prevent Qemu CPU emulation

2014-11-17 Thread Wolfgang Link

Signed-off-by: Wolfgang Link 
---
 PVE/QemuServer.pm |7 +++
 1 file changed, 7 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 02bf404..26c6c76 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2588,6 +2588,13 @@ sub config_to_command {
 my $cores = $conf->{cores} || 1;
 my $maxcpus = $conf->{maxcpus} if $conf->{maxcpus};
 
+my $total_cores = $sockets * $cores;
+my $allowed_cores = $cpuinfo->{cpus};
+
+die "MAX $allowed_cores Cores allowed per VM on this Node"
+   if($allowed_cores < $total_cores ) ;
+
+
 if ($maxcpus) {
push @$cmd, '-smp', "cpus=$cores,maxcpus=$maxcpus";
 } else {
-- 
1.7.10.4



[pve-devel] best way to call storage api2 between nodes ? (for livemigration with local storage)

2014-11-17 Thread Alexandre DERUMIER
I'm currently looking to implement live migration + storage migration.

For this, I need to call

API2::Storage::Content::create 

from node1 (source) to node2 (target)
and get the created volid value.

Currently, for live migration, we always use ssh and send "qm ..." commands
to the target node.

So, do we need to implement a new command, something like "pvestorage
create", mapped to the storage APIs?





Re: [pve-devel] best way to call storage api2 between nodes ? (for livemigration with local storage)

2014-11-17 Thread Alexandre DERUMIER
>>So, do we need to implement a new command, something like "pvestorage
>>create", mapped to the storage APIs?

Oh, sorry, I never noticed that we already have a "pvesm" command :)





[pve-devel] [PATCH v3 2/9] parse_vm_config: correctly handle $descr

2014-11-17 Thread Dietmar Maurer

Signed-off-by: Dietmar Maurer 
---
 PVE/QemuServer.pm |6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0241dc0..db46691 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1782,7 +1782,7 @@ sub parse_vm_config {
 my $res = {
digest => Digest::SHA::sha1_hex($raw),
snapshots => {},
-   pending => {}
+   pending => {},
 };
 
 $filename =~ m|/qemu-server/(\d+)\.conf$|
@@ -1798,10 +1798,12 @@ sub parse_vm_config {
next if $line =~ m/^\s*$/;
 
if ($line =~ m/^\[PENDING\]\s*$/i) {
+   $conf->{description} = $descr if $descr;
+   $descr = '';
$conf = $res->{pending} = {};
next;
 
-   }elsif ($line =~ m/^\[([a-z][a-z0-9_\-]+)\]\s*$/i) {
+   } elsif ($line =~ m/^\[([a-z][a-z0-9_\-]+)\]\s*$/i) {
my $snapname = $1;
$conf->{description} = $descr if $descr;
$descr = '';
-- 
1.7.10.4



[pve-devel] [PATCH v3 0/9] qemu-server : implement pending changes

2014-11-17 Thread Dietmar Maurer
Based on Alexandre's patches (qemu-server : implement pending changes v2)

Changes:

- I tried to simplify things by always writing changes into the pending
  section first.

- do not parse the 'delete' option in parse_vm_config

Todo: implement disk/net hotplug


Alexandre Derumier (1):
  parse_vm_config : parse pending changes

Dietmar Maurer (8):
  parse_vm_config: correctly handle $descr
  parse_vm_config: only allow 'delete' inside [PENDING]
  write_vm_config : write pending change
  update_vm_api: always write into pending section
  implement vmconfig_apply_pending for stopped VM
  vm_start: apply pending changes
  fix balloon consistency check (consider pending changes)
  implement trivial hotplug

 PVE/API2/Qemu.pm  |  116 ++--
 PVE/QemuServer.pm |  194 +++--
 2 files changed, 268 insertions(+), 42 deletions(-)

-- 
1.7.10.4



[pve-devel] [PATCH v3 6/9] implement vmconfig_apply_pending for stopped VM

2014-11-17 Thread Dietmar Maurer

Signed-off-by: Dietmar Maurer 
---
 PVE/API2/Qemu.pm |  191 ++
 1 file changed, 119 insertions(+), 72 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index f23452d..bfd3e2c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -37,6 +37,73 @@ my $resolve_cdrom_alias = sub {
 }
 };
 
+my $vm_is_volid_owner = sub {
+my ($storecfg, $vmid, $volid) =@_;
+
+if ($volid !~  m|^/|) {
+   my ($path, $owner);
+   eval { ($path, $owner) = PVE::Storage::path($storecfg, $volid); };
+   if ($owner && ($owner == $vmid)) {
+   return 1;
+   }
+}
+
+return undef;
+};
+
+my $test_deallocate_drive = sub {
+my ($storecfg, $vmid, $key, $drive, $force) = @_;
+
+if (!PVE::QemuServer::drive_is_cdrom($drive)) {
+   my $volid = $drive->{file};
+   if (&$vm_is_volid_owner($storecfg, $vmid, $volid)) {
+   if ($force || $key =~ m/^unused/) {
+   my $sid = PVE::Storage::parse_volume_id($volid);
+   return $sid;
+   }
+   }
+}
+
+return undef;
+};
+
+my $pending_delete_option = sub {
+my ($conf, $key) = @_;
+
+delete $conf->{pending}->{$key};
+my $pending_delete_hash = { $key => 1 };
+foreach my $opt (PVE::Tools::split_list($conf->{pending}->{delete})) {
+   $pending_delete_hash->{$opt} = 1;
+}
+$conf->{pending}->{delete} = join(',', keys %$pending_delete_hash);
+};
+
+my $pending_undelete_option = sub {
+my ($conf, $key) = @_;
+
+my $pending_delete_hash = {};
+foreach my $opt (PVE::Tools::split_list($conf->{pending}->{delete})) {
+   $pending_delete_hash->{$opt} = 1;
+}
+delete $pending_delete_hash->{$key};
+
+my @keylist = keys %$pending_delete_hash;
+if (scalar(@keylist)) {
+   $conf->{pending}->{delete} = join(',', @keylist);
+} else {
+   delete $conf->{pending}->{delete};
+}  
+};
+
+my $register_unused_drive = sub {
+my ($storecfg, $vmid, $conf, $drive) = @_;
+if (!PVE::QemuServer::drive_is_cdrom($drive)) {
+   my $volid = $drive->{file};
+   if (&$vm_is_volid_owner($storecfg, $vmid, $volid)) {
+   PVE::QemuServer::add_unused_volume($conf, $volid, $vmid);
+   }
+}
+};
 
 my $check_storage_access = sub {
my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
@@ -639,36 +706,6 @@ __PACKAGE__->register_method({
return $conf;
 }});
 
-my $vm_is_volid_owner = sub {
-my ($storecfg, $vmid, $volid) =@_;
-
-if ($volid !~  m|^/|) {
-   my ($path, $owner);
-   eval { ($path, $owner) = PVE::Storage::path($storecfg, $volid); };
-   if ($owner && ($owner == $vmid)) {
-   return 1;
-   }
-}
-
-return undef;
-};
-
-my $test_deallocate_drive = sub {
-my ($storecfg, $vmid, $key, $drive, $force) = @_;
-
-if (!PVE::QemuServer::drive_is_cdrom($drive)) {
-   my $volid = $drive->{file};
-   if (&$vm_is_volid_owner($storecfg, $vmid, $volid)) {
-   if ($force || $key =~ m/^unused/) {
-   my $sid = PVE::Storage::parse_volume_id($volid);
-   return $sid;
-   }
-   }
-}
-
-return undef;
-};
-
 my $delete_drive = sub {
 my ($conf, $storecfg, $vmid, $key, $drive, $force) = @_;
 
@@ -868,6 +905,48 @@ my $vmconfig_update_net = sub {
 die "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug($storecfg, 
$conf, $vmid, $opt, $net);
 };
 
+my $vmconfig_apply_pending = sub {
+my ($vmid, $conf, $storecfg, $running) = @_;
+
+my @delete = PVE::Tools::split_list($conf->{pending}->{delete});
+foreach my $opt (@delete) { # delete
+   die "internal error" if $opt =~ m/^unused/; 
+   $conf = PVE::QemuServer::load_config($vmid); # update/reload
+   if (!defined($conf->{$opt})) {
+   &$pending_undelete_option($conf, $opt);
+   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
+   } elsif (PVE::QemuServer::valid_drivename($opt)) {
+   &$register_unused_drive($storecfg, $vmid, $conf, 
PVE::QemuServer::parse_drive($opt, $conf->{$opt}));
+   &$pending_undelete_option($conf, $opt);
+   delete $conf->{$opt};
+   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
+   } else {
+   &$pending_undelete_option($conf, $opt);
+   delete $conf->{$opt};
+   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
+   }
+}
+  
+$conf = PVE::QemuServer::load_config($vmid); # update/reload
+   
+foreach my $opt (keys %{$conf->{pending}}) { # add/change
+   $conf = PVE::QemuServer::load_config($vmid); # update/reload
+   
+   if (defined($conf->{$opt}) && ($conf->{$opt} eq 
$conf->{pending}->{$opt})) {
+   # skip if nothing changed
+   } elsif (PVE::QemuServer::valid_drivename($opt)) {
+   &$register_unused_drive($storecfg, $vmid, $conf, 
PVE::QemuServer::parse_drive($opt, $conf->

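The delete-list bookkeeping in the patch above can be exercised on its own. This sketch mirrors the `$pending_delete_option`/`$pending_undelete_option` closures, with two assumptions: a plain `split` stands in for `PVE::Tools::split_list`, and keys are sorted so the output is deterministic (the originals use hash order):

```perl
use strict;
use warnings;

# Queue an option for deletion: drop any pending new value for it and
# record the key in the comma-separated $conf->{pending}->{delete} list.
sub pending_delete_option {
    my ($conf, $key) = @_;
    delete $conf->{pending}->{$key};
    my %del = map { $_ => 1 } split(/\s*,\s*/, $conf->{pending}->{delete} // '');
    $del{$key} = 1;
    $conf->{pending}->{delete} = join(',', sort keys %del);
}

# Remove an option from the pending delete list; drop the list entirely
# once it becomes empty.
sub pending_undelete_option {
    my ($conf, $key) = @_;
    my %del = map { $_ => 1 } split(/\s*,\s*/, $conf->{pending}->{delete} // '');
    delete $del{$key};
    my @left = sort keys %del;
    if (@left) {
        $conf->{pending}->{delete} = join(',', @left);
    } else {
        delete $conf->{pending}->{delete};
    }
}

my $conf = { pending => {} };
pending_delete_option($conf, 'net0');
pending_delete_option($conf, 'net1');
print "$conf->{pending}->{delete}\n";   # net0,net1
pending_undelete_option($conf, 'net0');
print "$conf->{pending}->{delete}\n";   # net1
```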
[pve-devel] [PATCH v3 9/9] implement trivial hotplug

2014-11-17 Thread Dietmar Maurer

Signed-off-by: Dietmar Maurer 
---
 PVE/QemuServer.pm |   47 ++-
 1 file changed, 46 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2dd4558..fb3f471 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3432,11 +3432,56 @@ sub set_migration_caps {
 vm_mon_cmd_nocheck($vmid, "migrate-set-capabilities", capabilities => 
$cap_ref);
 }
 
+sub vmconfig_hotplug_pending {
+my ($vmid, $conf, $storecfg) = @_;
+
+my $defaults = PVE::QemuServer::load_defaults();
+
+# commit values which do not have any impact on running VM first
+
+my $changes = 0;
+foreach my $opt (keys %{$conf->{pending}}) { # add/change
+   if ($opt eq 'name' || $opt eq 'hotplug' || $opt eq 'onboot' || $opt eq 
'shares') {
+   $conf->{$opt} = $conf->{pending}->{$opt};
+   delete $conf->{pending}->{$opt};
+   $changes = 1;
+   }
+}   
+
+if ($changes) {
+   update_config_nolock($vmid, $conf, 1);
+   $conf = load_config($vmid); # update/reload
+}
+
+$changes = 0;
+
+# allow manual ballooning if shares is set to zero
+
+if (defined($conf->{pending}->{balloon}) && defined($conf->{shares}) && 
($conf->{shares} == 0)) {
+   my $balloon = $conf->{pending}->{balloon} || $conf->{memory} || 
$defaults->{memory};
+   vm_mon_cmd($vmid, "balloon", value => $balloon*1024*1024);
+   $conf->{balloon} = $conf->{pending}->{balloon};
+   delete $conf->{pending}->{balloon};
+   $changes = 1;
+}
+
+if ($changes) {
+   update_config_nolock($vmid, $conf, 1);
+   $conf = load_config($vmid); # update/reload
+}
+
+return if !$conf->{hotplug};
+
+# fixme: implement disk/network hotplug here
+
+}
 
 sub vmconfig_apply_pending {
 my ($vmid, $conf, $storecfg, $running) = @_;
 
-die "implement me - vm is running" if $running; # fixme: if 
$conf->{hotplug};
+return vmconfig_hotplug_pending($vmid, $conf, $storecfg) if $running;
+
+# cold plug
 
 my @delete = PVE::Tools::split_list($conf->{pending}->{delete});
 foreach my $opt (@delete) { # delete
-- 
1.7.10.4



[pve-devel] [PATCH v3 5/9] update_vm_api: always write into pending section

2014-11-17 Thread Dietmar Maurer

Signed-off-by: Dietmar Maurer 
---
 PVE/API2/Qemu.pm |   99 --
 1 file changed, 97 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index a0fcd28..f23452d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -951,6 +951,44 @@ my $update_vm_api  = sub {
 
 &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param);
 
+my $pending_delete_option = sub {
+   my ($conf, $key) = @_;
+
+   delete $conf->{pending}->{$key};
+   my $pending_delete_hash = { $key => 1 };
+   foreach my $opt (PVE::Tools::split_list($conf->{pending}->{delete})) {
+   $pending_delete_hash->{$opt} = 1;
+   }
+   $conf->{pending}->{delete} = join(',', keys %$pending_delete_hash);
+};
+
+my $pending_undelete_option = sub {
+   my ($conf, $key) = @_;
+
+   my $pending_delete_hash = {};
+   foreach my $opt (PVE::Tools::split_list($conf->{pending}->{delete})) {
+   $pending_delete_hash->{$opt} = 1;
+   }
+   delete $pending_delete_hash->{$key};
+
+   my @keylist = keys %$pending_delete_hash;
+   if (scalar(@keylist)) {
+   $conf->{pending}->{delete} = join(',', @keylist);
+   } else {
+   delete $conf->{pending}->{delete};
+   }   
+};
+
+my $register_unused_drive = sub {
+   my ($conf, $drive) = @_;
+   if (!PVE::QemuServer::drive_is_cdrom($drive)) {
+   my $volid = $drive->{file};
+   if (&$vm_is_volid_owner($storecfg, $vmid, $volid)) {
+   PVE::QemuServer::add_unused_volume($conf, $volid, $vmid);
+   }
+   }
+};
+
 my $updatefn =  sub {
 
my $conf = PVE::QemuServer::load_config($vmid);
@@ -960,6 +998,7 @@ my $update_vm_api  = sub {
 
PVE::QemuServer::check_lock($conf) if !$skiplock;
 
+   # fixme: wrong place? howto handle pending changes? @delete ?
if ($param->{memory} || defined($param->{balloon})) {
my $maxmem = $param->{memory} || $conf->{memory} || 
$defaults->{memory};
my $balloon = defined($param->{balloon}) ?  $param->{balloon} : 
$conf->{balloon};
@@ -974,11 +1013,67 @@ my $update_vm_api  = sub {
 
print "update VM $vmid: " . join (' ', @paramarr) . "\n";
 
-   foreach my $opt (@delete) { # delete
+   # write updates to pending section
+
+   foreach my $opt (@delete) {
$conf = PVE::QemuServer::load_config($vmid); # update/reload
-   &$vmconfig_delete_option($rpcenv, $authuser, $conf, $storecfg, 
$vmid, $opt, $force);
+   if ($opt =~ m/^unused/) {
+   $rpcenv->check_vm_perm($authuser, $vmid, undef, 
['VM.Config.Disk']);
+   my $drive = PVE::QemuServer::parse_drive($opt, 
$conf->{$opt});
+   if (my $sid = &$test_deallocate_drive($storecfg, $vmid, 
$opt, $drive, $force)) {
+   $rpcenv->check($authuser, "/storage/$sid", 
['Datastore.AllocateSpace']);
+   &$delete_drive($conf, $storecfg, $vmid, $opt, $drive);
+   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
+   }
+   } elsif (PVE::QemuServer::valid_drivename($opt)) {
+   $rpcenv->check_vm_perm($authuser, $vmid, undef, 
['VM.Config.Disk']);
+   &$register_unused_drive($conf, 
PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt})) 
+   if defined($conf->{pending}->{$opt});
+   &$pending_delete_option($conf, $opt);
+   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
+   } else {
+   &$pending_delete_option($conf, $opt);
+   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
+   }
}
 
+   foreach my $opt (keys %$param) { # add/change
+   $conf = PVE::QemuServer::load_config($vmid); # update/reload
+   next if defined($conf->{pending}->{$opt}) && ($param->{$opt} eq 
$conf->{pending}->{$opt}); # skip if nothing changed
+   
+   if (PVE::QemuServer::valid_drivename($opt)) {
+   my $drive = PVE::QemuServer::parse_drive($opt, 
$param->{$opt});
+   if (PVE::QemuServer::drive_is_cdrom($drive)) { # CDROM
+   $rpcenv->check_vm_perm($authuser, $vmid, undef, 
['VM.Config.CDROM']);
+   } else {
+   $rpcenv->check_vm_perm($authuser, $vmid, undef, 
['VM.Config.Disk']);
+   }
+   &$register_unused_drive($conf, 
PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt})) 
+   if defined($conf->{pending}->{$opt});
+
+   &$create_disks($rpcenv, $authuser, $conf->{pending}, 
$storecfg, $vmid, undef, {$opt => $param->{$opt}});
+   } else {
+   $conf->{pending}->{$opt

[pve-devel] [PATCH v3 8/9] fix balloon consistency check (consider pending changes)

2014-11-17 Thread Dietmar Maurer

Signed-off-by: Dietmar Maurer 
---
 PVE/API2/Qemu.pm |5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index a3dbb06..a1f0f41 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -945,10 +945,9 @@ my $update_vm_api  = sub {
 
PVE::QemuServer::check_lock($conf) if !$skiplock;
 
-   # fixme: wrong place? howto handle pending changes? @delete ?
if ($param->{memory} || defined($param->{balloon})) {
-   my $maxmem = $param->{memory} || $conf->{memory} || 
$defaults->{memory};
-   my $balloon = defined($param->{balloon}) ?  $param->{balloon} : 
$conf->{balloon};
+   my $maxmem = $param->{memory} || $conf->{pending}->{memory} || 
$conf->{memory} || $defaults->{memory};
+   my $balloon = defined($param->{balloon}) ? $param->{balloon} : 
$conf->{pending}->{balloon} || $conf->{balloon};
 
die "balloon value too large (must be smaller than assigned 
memory)\n"
if $balloon && $balloon > $maxmem;
-- 
1.7.10.4
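The lookup order this patch establishes (request parameter first, then the pending section, then the current config, then the defaults) can be shown standalone; `check_balloon_config` is a hypothetical name for illustration:

```perl
use strict;
use warnings;

# Standalone sketch of the balloon consistency check after the patch:
# the effective maximum memory now also considers pending changes.
sub check_balloon_config {
    my ($param, $conf, $defaults) = @_;

    my $maxmem = $param->{memory} || $conf->{pending}->{memory}
        || $conf->{memory} || $defaults->{memory};
    my $balloon = defined($param->{balloon}) ? $param->{balloon}
        : $conf->{pending}->{balloon} || $conf->{balloon};

    die "balloon value too large (must be smaller than assigned memory)\n"
        if $balloon && $balloon > $maxmem;

    return $maxmem;
}

# A pending memory increase to 2048 MB makes a 1536 MB balloon valid,
# even though the current config only assigns 1024 MB.
my $conf = { memory => 1024, pending => { memory => 2048 } };
print check_balloon_config({ balloon => 1536 }, $conf, { memory => 512 }), "\n"; # 2048
```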



[pve-devel] [PATCH v3 3/9] parse_vm_config: only allow 'delete' inside [PENDING]

2014-11-17 Thread Dietmar Maurer

Signed-off-by: Dietmar Maurer 
---
 PVE/QemuServer.pm |   13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index db46691..a83c971 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1792,22 +1792,24 @@ sub parse_vm_config {
 
 my $conf = $res;
 my $descr = '';
+my $section = '';
 
 my @lines = split(/\n/, $raw);
 foreach my $line (@lines) {
next if $line =~ m/^\s*$/;
 
if ($line =~ m/^\[PENDING\]\s*$/i) {
+   $section = 'pending'; 
$conf->{description} = $descr if $descr;
$descr = '';
-   $conf = $res->{pending} = {};
+   $conf = $res->{$section} = {};
next;
 
} elsif ($line =~ m/^\[([a-z][a-z0-9_\-]+)\]\s*$/i) {
-   my $snapname = $1;
+   $section = $1;
$conf->{description} = $descr if $descr;
$descr = '';
-   $conf = $res->{snapshots}->{$snapname} = {};
+   $conf = $res->{snapshots}->{$section} = {};
next;
}
 
@@ -1824,9 +1826,8 @@ sub parse_vm_config {
my $key = $1;
my $value = $2;
$conf->{$key} = $value;
-   } elsif ($line =~ m/^(delete):\s*(.*\S)\s*$/) {
-   my $key = $1;
-   my $value = $2;
+   } elsif (($section eq 'pending') && ($line =~ 
m/^delete:\s*(.*\S)\s*$/)) {
+   my $value = $1;
foreach my $opt (split(/,/, $value)) {
$conf->{del}->{$opt} = 1;
}
-- 
1.7.10.4



[pve-devel] [PATCH v3 7/9] vm_start: apply pending changes

2014-11-17 Thread Dietmar Maurer
I moved the related helper methods into PVE::QemuServer.

Signed-off-by: Dietmar Maurer 
---
 PVE/API2/Qemu.pm  |  111 -
 PVE/QemuServer.pm |  103 +
 2 files changed, 111 insertions(+), 103 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index bfd3e2c..a3dbb06 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -37,26 +37,12 @@ my $resolve_cdrom_alias = sub {
 }
 };
 
-my $vm_is_volid_owner = sub {
-my ($storecfg, $vmid, $volid) =@_;
-
-if ($volid !~  m|^/|) {
-   my ($path, $owner);
-   eval { ($path, $owner) = PVE::Storage::path($storecfg, $volid); };
-   if ($owner && ($owner == $vmid)) {
-   return 1;
-   }
-}
-
-return undef;
-};
-
 my $test_deallocate_drive = sub {
 my ($storecfg, $vmid, $key, $drive, $force) = @_;
 
 if (!PVE::QemuServer::drive_is_cdrom($drive)) {
my $volid = $drive->{file};
-   if (&$vm_is_volid_owner($storecfg, $vmid, $volid)) {
+   if ( PVE::QemuServer::vm_is_volid_owner($storecfg, $vmid, $volid)) {
if ($force || $key =~ m/^unused/) {
my $sid = PVE::Storage::parse_volume_id($volid);
return $sid;
@@ -67,44 +53,6 @@ my $test_deallocate_drive = sub {
 return undef;
 };
 
-my $pending_delete_option = sub {
-my ($conf, $key) = @_;
-
-delete $conf->{pending}->{$key};
-my $pending_delete_hash = { $key => 1 };
-foreach my $opt (PVE::Tools::split_list($conf->{pending}->{delete})) {
-   $pending_delete_hash->{$opt} = 1;
-}
-$conf->{pending}->{delete} = join(',', keys %$pending_delete_hash);
-};
-
-my $pending_undelete_option = sub {
-my ($conf, $key) = @_;
-
-my $pending_delete_hash = {};
-foreach my $opt (PVE::Tools::split_list($conf->{pending}->{delete})) {
-   $pending_delete_hash->{$opt} = 1;
-}
-delete $pending_delete_hash->{$key};
-
-my @keylist = keys %$pending_delete_hash;
-if (scalar(@keylist)) {
-   $conf->{pending}->{delete} = join(',', @keylist);
-} else {
-   delete $conf->{pending}->{delete};
-}  
-};
-
-my $register_unused_drive = sub {
-my ($storecfg, $vmid, $conf, $drive) = @_;
-if (!PVE::QemuServer::drive_is_cdrom($drive)) {
-   my $volid = $drive->{file};
-   if (&$vm_is_volid_owner($storecfg, $vmid, $volid)) {
-   PVE::QemuServer::add_unused_volume($conf, $volid, $vmid);
-   }
-}
-};
-
 my $check_storage_access = sub {
my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
 
@@ -712,7 +660,7 @@ my $delete_drive = sub {
 if (!PVE::QemuServer::drive_is_cdrom($drive)) {
my $volid = $drive->{file};
 
-   if (&$vm_is_volid_owner($storecfg, $vmid, $volid)) {
+   if (PVE::QemuServer::vm_is_volid_owner($storecfg, $vmid, $volid)) {
if ($force || $key =~ m/^unused/) {
eval {
# check if the disk is really unused
@@ -905,48 +853,6 @@ my $vmconfig_update_net = sub {
 die "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug($storecfg, 
$conf, $vmid, $opt, $net);
 };
 
-my $vmconfig_apply_pending = sub {
-my ($vmid, $conf, $storecfg, $running) = @_;
-
-my @delete = PVE::Tools::split_list($conf->{pending}->{delete});
-foreach my $opt (@delete) { # delete
-   die "internal error" if $opt =~ m/^unused/; 
-   $conf = PVE::QemuServer::load_config($vmid); # update/reload
-   if (!defined($conf->{$opt})) {
-   &$pending_undelete_option($conf, $opt);
-   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
-   } elsif (PVE::QemuServer::valid_drivename($opt)) {
-   &$register_unused_drive($storecfg, $vmid, $conf, 
PVE::QemuServer::parse_drive($opt, $conf->{$opt}));
-   &$pending_undelete_option($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
-   } else {
-   &$pending_undelete_option($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
-   }
-}
-  
-$conf = PVE::QemuServer::load_config($vmid); # update/reload
-   
-foreach my $opt (keys %{$conf->{pending}}) { # add/change
-   $conf = PVE::QemuServer::load_config($vmid); # update/reload
-   
-   if (defined($conf->{$opt}) && ($conf->{$opt} eq 
$conf->{pending}->{$opt})) {
-   # skip if nothing changed
-   } elsif (PVE::QemuServer::valid_drivename($opt)) {
-   &$register_unused_drive($storecfg, $vmid, $conf, 
PVE::QemuServer::parse_drive($opt, $conf->{$opt})) 
-   if defined($conf->{$opt});
-   $conf->{$opt} = $conf->{pending}->{$opt};
-   } else {
-   $conf->{$opt} = $conf->{pending}->{$opt};
-   }
-
-   delete $conf->{pending}->{$opt};
-   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
-}

[pve-devel] [PATCH v3 1/9] parse_vm_config : parse pending changes

2014-11-17 Thread Dietmar Maurer
From: Alexandre Derumier 

example:

[PENDING]
virtio1:...
delete:net0,net1

$conf->{pending}->{virtio1}
$conf->{pending}->{del}->{net0}
$conf->{pending}->{del}->{net1}

Signed-off-by: Alexandre Derumier 
Signed-off-by: Dietmar Maurer 
---
 PVE/QemuServer.pm |   13 -
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 02bf404..0241dc0 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1782,6 +1782,7 @@ sub parse_vm_config {
 my $res = {
digest => Digest::SHA::sha1_hex($raw),
snapshots => {},
+   pending => {}
 };
 
 $filename =~ m|/qemu-server/(\d+)\.conf$|
@@ -1796,7 +1797,11 @@ sub parse_vm_config {
 foreach my $line (@lines) {
next if $line =~ m/^\s*$/;
 
-   if ($line =~ m/^\[([a-z][a-z0-9_\-]+)\]\s*$/i) {
+   if ($line =~ m/^\[PENDING\]\s*$/i) {
+   $conf = $res->{pending} = {};
+   next;
+
+   }elsif ($line =~ m/^\[([a-z][a-z0-9_\-]+)\]\s*$/i) {
my $snapname = $1;
$conf->{description} = $descr if $descr;
$descr = '';
@@ -1817,6 +1822,12 @@ sub parse_vm_config {
my $key = $1;
my $value = $2;
$conf->{$key} = $value;
+   } elsif ($line =~ m/^(delete):\s*(.*\S)\s*$/) {
+   my $key = $1;
+   my $value = $2;
+   foreach my $opt (split(/,/, $value)) {
+   $conf->{del}->{$opt} = 1;
+   }
} elsif ($line =~ m/^([a-z][a-z_]*\d*):\s*(\S+)\s*$/) {
my $key = $1;
my $value = $2;
-- 
1.7.10.4
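The parsing behaviour this patch adds can be reduced to a few lines. A simplified sketch (it ignores snapshot sections, descriptions and the stricter key handling of the real `parse_vm_config`):

```perl
use strict;
use warnings;

# Parse a config string: keys after a [PENDING] header go into
# $res->{pending}, and a 'delete:' list is expanded into ->{del}.
sub parse_pending {
    my ($raw) = @_;
    my $res  = { pending => {} };
    my $conf = $res;
    foreach my $line (split(/\n/, $raw)) {
        next if $line =~ m/^\s*$/;
        if ($line =~ m/^\[PENDING\]\s*$/i) {
            $conf = $res->{pending};    # switch target section
            next;
        }
        if ($line =~ m/^delete:\s*(.*\S)\s*$/) {
            $conf->{del}->{$_} = 1 foreach split(/,/, $1);
        } elsif ($line =~ m/^([a-z][a-z_]*\d*):\s*(\S+)\s*$/) {
            $conf->{$1} = $2;
        }
    }
    return $res;
}

my $res = parse_pending(<<'EOF');
memory: 512

[PENDING]
virtio1: newvalue
delete: net0,net1
EOF

print "$res->{memory}\n";                                    # 512
print "$res->{pending}->{virtio1}\n";                        # newvalue
print join(',', sort keys %{$res->{pending}->{del}}), "\n";  # net0,net1
```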



[pve-devel] [PATCH v3 4/9] write_vm_config : write pending change

2014-11-17 Thread Dietmar Maurer
example:

$conf->{pending}->{virtio1}
$conf->{pending}->{delete} = "net0,net1"

[PENDING]
virtio1: ...
delete: net0,net1

Signed-off-by: Alexandre Derumier 
Signed-off-by: Dietmar Maurer 
---
 PVE/QemuServer.pm |   32 +---
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a83c971..26b0efc 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1826,10 +1826,12 @@ sub parse_vm_config {
my $key = $1;
my $value = $2;
$conf->{$key} = $value;
-   } elsif (($section eq 'pending') && ($line =~ 
m/^delete:\s*(.*\S)\s*$/)) {
+   } elsif ($line =~ m/^delete:\s*(.*\S)\s*$/) {
my $value = $1;
-   foreach my $opt (split(/,/, $value)) {
-   $conf->{del}->{$opt} = 1;
+   if ($section eq 'pending') {
+   $conf->{delete} = $value; # we parse this later
+   } else {
+   warn "vm $vmid - property 'delete' is only allowed in 
[PENDING]\n";
}
} elsif ($line =~ m/^([a-z][a-z_]*\d*):\s*(\S+)\s*$/) {
my $key = $1;
@@ -1893,12 +1895,18 @@ sub write_vm_config {
 my $used_volids = {};
 
 my $cleanup_config = sub {
-   my ($cref, $snapname) = @_;
+   my ($cref, $pending, $snapname) = @_;
 
foreach my $key (keys %$cref) {
next if $key eq 'digest' || $key eq 'description' || $key eq 
'snapshots' ||
-   $key eq 'snapstate';
+   $key eq 'snapstate' || $key eq 'pending';
my $value = $cref->{$key};
+   if ($key eq 'delete') {
+   die "property 'delete' is only allowed in [PENDING]\n"
+   if !$pending;
+   # fixme: check syntax?
+   next;
+   }
eval { $value = check_type($key, $value); };
die "unable to parse value of '$key' - $@" if $@;
 
@@ -1912,8 +1920,12 @@ sub write_vm_config {
 };
 
 &$cleanup_config($conf);
+
+&$cleanup_config($conf->{pending}, 1);
+
 foreach my $snapname (keys %{$conf->{snapshots}}) {
-   &$cleanup_config($conf->{snapshots}->{$snapname}, $snapname);
+   die "internal error" if $snapname eq 'pending';
+   &$cleanup_config($conf->{snapshots}->{$snapname}, undef, $snapname);
 }
 
 # remove 'unusedX' settings if we re-add a volume
@@ -1936,13 +1948,19 @@ sub write_vm_config {
}
 
foreach my $key (sort keys %$conf) {
-   next if $key eq 'digest' || $key eq 'description' || $key eq 
'snapshots';
+   next if $key eq 'digest' || $key eq 'description' || $key eq 
'pending' || $key eq 'snapshots';
$raw .= "$key: $conf->{$key}\n";
}
return $raw;
 };
 
 my $raw = &$generate_raw_config($conf);
+
+if (scalar(keys %{$conf->{pending}})){
+   $raw .= "\n[PENDING]\n";
+   $raw .= &$generate_raw_config($conf->{pending});
+}
+ 
 foreach my $snapname (sort keys %{$conf->{snapshots}}) {
$raw .= "\n[$snapname]\n";
$raw .= &$generate_raw_config($conf->{snapshots}->{$snapname});
-- 
1.7.10.4
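The writer side can be sketched the same way (a standalone illustration with invented values, mirroring the generate_raw_config/[PENDING] logic above, not the real write_vm_config):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Invented in-memory config with one pending option and a pending delete.
my $conf = {
    memory  => 512,
    pending => {
        virtio1 => 'local:100/vm-100-disk-2.raw',
        delete  => 'net0,net1',
    },
};

# Emit "key: value" lines for one section, skipping the pending subhash.
my $generate_raw_config = sub {
    my ($c) = @_;
    my $raw = '';
    foreach my $key (sort keys %$c) {
        next if $key eq 'pending';
        $raw .= "$key: $c->{$key}\n";
    }
    return $raw;
};

my $raw = $generate_raw_config->($conf);
if (scalar(keys %{$conf->{pending}})) {
    $raw .= "\n[PENDING]\n";
    $raw .= $generate_raw_config->($conf->{pending});
}
print $raw;
```

Running this prints the top-level options, then a `[PENDING]` header followed by the pending options, with `delete` serialized as a plain comma-separated line, which round-trips through the parser from the previous patch.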

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Dietmar Maurer
> I think I'll wait for your patches before beginning to work on the GUI, to be
> sure to have something good.

I just sent what I have so far to the list. Please can you do a short review?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Alexandre DERUMIER
Thanks,

I'll review them this afternoon
- Mail original - 

De: "Dietmar Maurer"  
À: "Alexandre DERUMIER"  
Cc: pve-devel@pve.proxmox.com 
Envoyé: Lundi 17 Novembre 2014 11:01:58 
Objet: RE: [pve-devel] qemu-server : implement pending changes v2 

> I think I'll wait for your patches before beginning to work on the GUI, to be 
> sure to have something good. 

I just sent what I have so far to the list. Please can you do a short review? 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Dietmar Maurer
> Thanks,
> 
> I'll review them this afternoon

The question is how hard it is to implement disk/network hotplug on top of this?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] Add Check: If host has enough real CPUs for starting VM, to prevent Qemu CPU emulation!

2014-11-17 Thread Dietmar Maurer
applied, thanks!

> -Original Message-
> From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of
> Wolfgang Link
> Sent: Montag, 17. November 2014 09:53
> To: pve-devel@pve.proxmox.com
> Cc: Wolfgang Link
> Subject: [pve-devel] [PATCH] Add Check: If host has enough real CPUs for
> starting VM, to prevent Qemu CPU emulation!
> 
> 
> Signed-off-by: Wolfgang Link 
> ---
>  PVE/QemuServer.pm |7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 02bf404..26c6c76
> 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -2588,6 +2588,13 @@ sub config_to_command {
>  my $cores = $conf->{cores} || 1;
>  my $maxcpus = $conf->{maxcpus} if $conf->{maxcpus};
> 
> +my $total_cores = $sockets * $cores;
> +my $allowed_cores = $cpuinfo->{cpus};
> +
> +die "MAX $allowed_cores Cores allowed per VM on this Node"
> + if($allowed_cores < $total_cores ) ;
> +
> +
>  if ($maxcpus) {
>   push @$cmd, '-smp', "cpus=$cores,maxcpus=$maxcpus";
>  } else {
> --
> 1.7.10.4
> 
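The core of the applied check can be exercised in isolation (the host CPU count and VM settings below are invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Invented host/VM values; the guard mirrors the hunk above.
my $cpuinfo = { cpus => 8 };                 # host CPU count
my $conf    = { sockets => 2, cores => 6 };  # would need 12 vCPUs

my $sockets = $conf->{sockets} || 1;
my $cores   = $conf->{cores}   || 1;

my $total_cores   = $sockets * $cores;
my $allowed_cores = $cpuinfo->{cpus};

eval {
    die "MAX $allowed_cores Cores allowed per VM on this Node\n"
        if $allowed_cores < $total_cores;
};
print "start refused: $@" if $@;             # 12 > 8, so this prints
```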


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Alexandre DERUMIER
>>The question is how hard it is to implement disk/network hotplug on top of 
>>this?

I'll try to make a test hotplug patch to see if it's working fine

- Mail original - 

De: "Dietmar Maurer"  
À: "Alexandre DERUMIER"  
Cc: pve-devel@pve.proxmox.com 
Envoyé: Lundi 17 Novembre 2014 11:16:30 
Objet: RE: [pve-devel] qemu-server : implement pending changes v2 

> Thanks, 
> 
> I'll review them this afternoon 

The question is how hard it is to implement disk/network hotplug on top of 
this? 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Dietmar Maurer
> >>The question is how hard it is to implement disk/network hotplug on top of
> this?
> 
> I'll try to make a test hotplug patch to see if it's working fine

Great, thanks!
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] best way to call storage api2 between nodes ? (for livemigration with local storage)

2014-11-17 Thread Kamil Trzciński
Maybe you need some help? :)

On Mon, Nov 17, 2014 at 10:13 AM, Alexandre DERUMIER 
wrote:

> >>So, does it need to implement a new command, something like "pvestorage
> create" mapped to the storage apis ?
>
> Oh, sorry, I never noticed that we already have a "pvesm" command :)
>
>
>
> - Mail original -
>
> De: "Alexandre DERUMIER" 
> À: "Dietmar Maurer" 
> Cc: pve-devel@pve.proxmox.com
> Envoyé: Lundi 17 Novembre 2014 10:08:23
> Objet: [pve-devel] best way to call storage api2 between nodes ? (for
> livemigration with local storage)
>
> I'm currently looking to implement livemigration + storage migration,
>
> for this,
>
> I need to call
>
> API2::Storage::Content::create
>
> from node1 (source) to node2 (target)
> and get the created volid value.
>
> Currently, for live migration, we always use ssh and send a "qm ..."
> commands to target node.
>
> So, does it need to implement a new command, something like "pvestorage
> create" mapped to the storage apis ?
>
>
>
>



-- 
Kamil Trzciński

ayu...@ayufan.eu
www.ayufan.eu
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] best way to call storage api2 between nodes ? (for livemigration with local storage)

2014-11-17 Thread Alexandre DERUMIER
I need to do some work on the NBD server first (to migrate unused disks and also 
offline VMs),

and I'll try to submit some patches next week.


But yes, help will be welcome :)


- Mail original - 

De: "Kamil Trzciński"  
À: "Alexandre DERUMIER"  
Cc: "Dietmar Maurer" , pve-devel@pve.proxmox.com 
Envoyé: Lundi 17 Novembre 2014 12:29:25 
Objet: Re: [pve-devel] best way to call storage api2 between nodes ? (for 
livemigration with local storage) 


Maybe you need any help? :) 


On Mon, Nov 17, 2014 at 10:13 AM, Alexandre DERUMIER < aderum...@odiso.com > 
wrote: 


>>So, does it need to implement a new command, something like "pvestorage 
>>create" mapped to the storage apis ? 

Oh, sorry, I never noticed that we already have a "pvesm" command :) 



- Mail original - 

De: "Alexandre DERUMIER" < aderum...@odiso.com > 
À: "Dietmar Maurer" < diet...@proxmox.com > 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Lundi 17 Novembre 2014 10:08:23 
Objet: [pve-devel] best way to call storage api2 between nodes ? (for 
livemigration with local storage) 



I'm currently looking to implement livemigration + storage migration, 

for this, 

I need to call 

API2::Storage::Content::create 

from node1 (source) to node2 (target) 
and get the created volid value. 

Currently, for live migration, we always use ssh and send "qm ..." commands 
to the target node. 

So, does it need to implement a new command, something like "pvestorage 
create" mapped to the storage apis ? 









-- 

Kamil Trzciński 

ayu...@ayufan.eu 
www.ayufan.eu 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Alexandre DERUMIER
>>Great, thanks! 

Ok, I can hotplug|unplug disk, nic, cpu and tablet with some small rework of my 
previous patches.

I'll send patch at the end of the day.


The only thing I can't currently manage with the new code is drive swap (updating 
an existing disk volid with another one).



- Mail original - 

De: "Dietmar Maurer"  
À: "Alexandre DERUMIER"  
Cc: pve-devel@pve.proxmox.com 
Envoyé: Lundi 17 Novembre 2014 12:21:50 
Objet: RE: [pve-devel] qemu-server : implement pending changes v2 

> >>The question is how hard it is to implement disk/network hotplug on top of 
> this? 
> 
> I'll try to make a test hotplug patch to see if it's working fine 

Great, thanks! 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] Bug#579:

2014-11-17 Thread Wolfgang Link
add a check whether the START parameter is set in /etc/default/pve-manager
If START="no", no VM will be started when "pve-manager start" is called
If START is not "no" or not present, VMs will use the boot_at_start flag

Signed-off-by: Wolfgang Link 
---
 bin/init.d/pve-manager |4 
 1 file changed, 4 insertions(+)

diff --git a/bin/init.d/pve-manager b/bin/init.d/pve-manager
index e635f03..441e9d8 100755
--- a/bin/init.d/pve-manager
+++ b/bin/init.d/pve-manager
@@ -20,6 +20,10 @@ test -f $PVESH || exit 0
 case "$1" in
start)
echo "Starting VMs and Containers"
+   [ -r /etc/default/pve-manager ] && . /etc/default/pve-manager
+   if [ "$START" = "no" ];then
+   exit 0
+   fi
pvesh --nooutput create /nodes/localhost/startall 
;;
stop)
-- 
1.7.10.4
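A matching /etc/default/pve-manager could then look like this (hypothetical example; the file is optional):

```shell
# /etc/default/pve-manager (assumed example)
# Set START="no" to skip autostarting VMs/containers when the init
# script runs; any other value, or an absent file, keeps the normal
# boot_at_start behaviour.
START="no"
```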

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] pending changes : add hotplug|unplug support

2014-11-17 Thread Alexandre Derumier
This applies on top of Dietmar's patches.

All seems to work fine (including disk swap).

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 1/6] vm_deviceplug|unplug : implement pending change

2014-11-17 Thread Alexandre Derumier
also clean up indentation

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm |  127 -
 1 file changed, 86 insertions(+), 41 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e43a228..141a21c 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2994,63 +2994,81 @@ sub vm_devices_list {
 }
 
 sub vm_deviceplug {
-my ($storecfg, $conf, $vmid, $deviceid, $device) = @_;
+my ($storecfg, $conf, $vmid, $deviceid, $device, $optvalue) = @_;
 
-return 1 if !check_running($vmid);
+if (!check_running($vmid)){
+   if($conf->{pending}->{$deviceid}){
+   $conf->{$deviceid} = $optvalue;
+   delete $conf->{pending}->{$deviceid};
+   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
+   }
+}
 
 my $q35 = machine_type_is_q35($conf);
 
 if ($deviceid eq 'tablet') {
-   qemu_deviceadd($vmid, print_tabletdevice_full($conf));
+
+   eval { qemu_deviceadd($vmid, 
print_tabletdevice_full($conf->{pending}))};
+
+   if($conf->{pending}->{delete} =~ m/tablet/) {
+   delete $conf->{$deviceid};
+   vmconfig_undelete_pending_option($conf, $deviceid);
+   } else {
+   $conf->{$deviceid} = $conf->{pending}->{$deviceid};
+   delete $conf->{pending}->{$deviceid};
+   }
+
+   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
return 1;
 }
 
 return 1 if !$conf->{hotplug};
 
+return 1 if ($deviceid =~ m/^(ide|sata)(\d+)$/);
+
 my $devices_list = vm_devices_list($vmid);
 return 1 if defined($devices_list->{$deviceid});
 
 qemu_bridgeadd($storecfg, $conf, $vmid, $deviceid); #add bridge if we need 
it for the device
 
 if ($deviceid =~ m/^(virtio)(\d+)$/) {
-return undef if !qemu_driveadd($storecfg, $vmid, $device);
-my $devicefull = print_drivedevice_full($storecfg, $conf, $vmid, 
$device);
-qemu_deviceadd($vmid, $devicefull);
-if(!qemu_deviceaddverify($vmid, $deviceid)) {
-   qemu_drivedel($vmid, $deviceid);
-   return undef;
-}
+   return undef if !qemu_driveadd($storecfg, $vmid, $device);
+   my $devicefull = print_drivedevice_full($storecfg, $conf->{pending}, 
$vmid, $device);
+   qemu_deviceadd($vmid, $devicefull);
+   if(!qemu_deviceaddverify($vmid, $deviceid)) {
+   qemu_drivedel($vmid, $deviceid);
+   return undef;
+   }
 }
 
 if ($deviceid =~ m/^(scsihw)(\d+)$/) {
-my $scsihw = defined($conf->{scsihw}) ? $conf->{scsihw} : "lsi";
-my $pciaddr = print_pci_addr($deviceid);
-my $devicefull = "$scsihw,id=$deviceid$pciaddr";
-qemu_deviceadd($vmid, $devicefull);
-return undef if(!qemu_deviceaddverify($vmid, $deviceid));
+   my $scsihw = defined($conf->{scsihw}) ? $conf->{scsihw} : "lsi";
+   my $pciaddr = print_pci_addr($deviceid);
+   my $devicefull = "$scsihw,id=$deviceid$pciaddr";
+   qemu_deviceadd($vmid, $devicefull);
+   return undef if(!qemu_deviceaddverify($vmid, $deviceid));
 }
 
 if ($deviceid =~ m/^(scsi)(\d+)$/) {
-return undef if !qemu_findorcreatescsihw($storecfg,$conf, $vmid, 
$device);
-return undef if !qemu_driveadd($storecfg, $vmid, $device);
-my $devicefull = print_drivedevice_full($storecfg, $conf, $vmid, 
$device);
-if(!qemu_deviceadd($vmid, $devicefull)) {
-   qemu_drivedel($vmid, $deviceid);
-   return undef;
-}
+   return undef if !qemu_findorcreatescsihw($storecfg,$conf, $vmid, 
$device);
+   return undef if !qemu_driveadd($storecfg, $vmid, $device);
+   my $devicefull = print_drivedevice_full($storecfg, $conf->{pending}, 
$vmid, $device);
+   if(!qemu_deviceadd($vmid, $devicefull)) {
+   qemu_drivedel($vmid, $deviceid);
+   return undef;
+   }
 }
 
 if ($deviceid =~ m/^(net)(\d+)$/) {
-return undef if !qemu_netdevadd($vmid, $conf, $device, $deviceid);
-my $netdevicefull = print_netdevice_full($vmid, $conf, $device, 
$deviceid);
-qemu_deviceadd($vmid, $netdevicefull);
-if(!qemu_deviceaddverify($vmid, $deviceid)) {
-   qemu_netdevdel($vmid, $deviceid);
-   return undef;
-}
+   return undef if !qemu_netdevadd($vmid, $conf->{pending}, $device, 
$deviceid);
+   my $netdevicefull = print_netdevice_full($vmid, $conf->{pending}, 
$device, $deviceid);
+   qemu_deviceadd($vmid, $netdevicefull);
+   if(!qemu_deviceaddverify($vmid, $deviceid)) {
+   qemu_netdevdel($vmid, $deviceid);
+   return undef;
+   }
 }
 
-
 if (!$q35 && $deviceid =~ m/^(pci\.)(\d+)$/) {
my $bridgeid = $2;
my $pciaddr = print_pci_addr($deviceid);
@@ -3059,30 +3077,46 @@ sub vm_deviceplug {
return undef if !qemu_deviceaddverify($vmid, $deviceid);
 }
 
+#delete pending device after hotplug
+if($conf->{pending}->{$device

[pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier 
---
 PVE/API2/Qemu.pm  |   44 ++-
 PVE/QemuServer.pm |   86 -
 pve-bridge|4 +++
 3 files changed, 91 insertions(+), 43 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index a1f0f41..b87389f 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -813,46 +813,6 @@ my $vmconfig_update_disk = sub {
 }
 };
 
-my $vmconfig_update_net = sub {
-my ($rpcenv, $authuser, $conf, $storecfg, $vmid, $opt, $value) = @_;
-
-if ($conf->{$opt} && PVE::QemuServer::check_running($vmid)) {
-   my $oldnet = PVE::QemuServer::parse_net($conf->{$opt});
-   my $newnet = PVE::QemuServer::parse_net($value);
-
-   if($oldnet->{model} ne $newnet->{model}){
-   #if model change, we try to hot-unplug
-die "error hot-unplug $opt for update" if 
!PVE::QemuServer::vm_deviceunplug($vmid, $conf, $opt);
-   }else{
-
-   if($newnet->{bridge} && $oldnet->{bridge}){
-   my $iface = "tap".$vmid."i".$1 if $opt =~ m/net(\d+)/;
-
-   if($newnet->{rate} ne $oldnet->{rate}){
-   PVE::Network::tap_rate_limit($iface, $newnet->{rate});
-   }
-
-   if(($newnet->{bridge} ne $oldnet->{bridge}) || ($newnet->{tag} 
ne $oldnet->{tag}) || ($newnet->{firewall} ne $oldnet->{firewall})){
-   PVE::Network::tap_unplug($iface);
-   PVE::Network::tap_plug($iface, $newnet->{bridge}, 
$newnet->{tag}, $newnet->{firewall});
-   }
-
-   }else{
-   #if bridge/nat mode change, we try to hot-unplug
-   die "error hot-unplug $opt for update" if 
!PVE::QemuServer::vm_deviceunplug($vmid, $conf, $opt);
-   }
-   }
-
-}
-$conf->{$opt} = $value;
-PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
-$conf = PVE::QemuServer::load_config($vmid); # update/reload
-
-my $net = PVE::QemuServer::parse_net($conf->{$opt});
-
-die "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug($storecfg, 
$conf, $vmid, $opt, $net);
-};
-
 # POST/PUT {vmid}/config implementation
 #
 # The original API used PUT (idempotent) an we assumed that all operations
@@ -1040,8 +1000,8 @@ my $update_vm_api  = sub {
 
} elsif ($opt =~ m/^net(\d+)$/) { #nics
 
-   &$vmconfig_update_net($rpcenv, $authuser, $conf, $storecfg, 
$vmid,
- $opt, $param->{$opt});
+  ## &$vmconfig_update_net($rpcenv, $authuser, $conf, 
$storecfg, $vmid,
+   ##$opt, $param->{$opt});
 
} else {
 
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 141a21c..b948de8 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3521,12 +3521,18 @@ sub vmconfig_hotplug_pending {
$conf = load_config($vmid); # update/reload
 }
 
-return if !$conf->{hotplug};
+#return if !$conf->{hotplug};  #some changes can't be done also without 
hotplug
 
 # fixme: implement disk/network hotplug here
+foreach my $opt (keys %{$conf->{pending}}) {
+   if ($opt =~ m/^net(\d+)$/) { 
+   vmconfig_update_net($storecfg, $conf, $vmid, $opt);
+   }
+}
 
 }
 
+
 sub vmconfig_apply_pending {
 my ($vmid, $conf, $storecfg, $running) = @_;
 
@@ -3573,6 +3579,84 @@ sub vmconfig_apply_pending {
 }
 }
 
+my $safe_num_ne = sub {
+my ($a, $b) = @_;
+
+return 0 if !defined($a) && !defined($b);
+return 1 if !defined($a);
+return 1 if !defined($b);
+
+return $a != $b;
+};
+
+my $safe_string_ne = sub {
+my ($a, $b) = @_;
+
+return 0 if !defined($a) && !defined($b);
+return 1 if !defined($a);
+return 1 if !defined($b);
+
+return $a ne $b;
+};
+
+sub vmconfig_update_net {
+my ($storecfg, $conf, $vmid, $opt) = @_;
+
+if ($conf->{$opt}) {
+my $running = PVE::QemuServer::check_running($vmid);
+
+my $oldnet = PVE::QemuServer::parse_net($conf->{$opt});
+my $newnet = PVE::QemuServer::parse_net($conf->{pending}->{$opt});
+
+if(&$safe_string_ne($oldnet->{model}, $newnet->{model}) ||
+   &$safe_string_ne($oldnet->{macaddr}, $newnet->{macaddr}) ||
+   &$safe_num_ne($oldnet->{queues}, $newnet->{queues})){
+#for non online change, we try to hot-unplug
+if(!PVE::QemuServer::vm_deviceunplug($vmid, $conf, $opt)){
+warn "error hot-unplug $opt for update";
+return;
+}
+}else{
+
+if($newnet->{bridge} && $oldnet->{bridge}){
+my $iface = "tap".$vmid."i".$1 if $opt =~ m/net(\d+)/;
+
+if(&$safe_num_ne($oldnet->{rate}, $newnet->{rate})){
+PVE::Network::tap_rate_limit($iface, $newnet->{rate});
+}
+
+if(&$safe_string_ne($oldnet->{bridge}, $newnet->{bridge}) ||
+   &$safe_

[pve-devel] [PATCH 3/6] vmconfig_hotplug_pending : implement unplug

2014-11-17 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm |   13 +
 1 file changed, 13 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b948de8..27e6957 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3524,6 +3524,19 @@ sub vmconfig_hotplug_pending {
 #return if !$conf->{hotplug};  #some changes can't be done also without 
hotplug
 
 # fixme: implement disk/network hotplug here
+
+#unplug first
+my @delete = PVE::Tools::split_list($conf->{pending}->{delete});
+foreach my $opt (@delete) { 
+   #unplug
+   if ($opt eq 'tablet') {
+   warn "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug(undef, 
$conf, $vmid, $opt);
+   } else {
+   warn "error hot-unplug $opt" if 
!PVE::QemuServer::vm_deviceunplug($vmid, $conf, $opt);
+   }
+}
+
+#hotplug
 foreach my $opt (keys %{$conf->{pending}}) {
if ($opt =~ m/^net(\d+)$/) { 
vmconfig_update_net($storecfg, $conf, $vmid, $opt);
-- 
1.7.10.4
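The apply order this patch establishes can be sketched standalone (invented values; `PVE::Tools::split_list`-style splitting on comma/semicolon/whitespace is assumed here):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical pending state: delete net0 first, then hotplug net1.
my $conf = {
    net0    => 'virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0',
    pending => {
        delete => 'net0',
        net1   => 'virtio=AA:BB:CC:DD:EE:00,bridge=vmbr1',
    },
};

my @log;

# unplug first
foreach my $opt (split /[,;\s]+/, ($conf->{pending}->{delete} // '')) {
    push @log, "unplug $opt";
    delete $conf->{$opt};
}

# then hotplug the remaining pending options
foreach my $opt (sort keys %{$conf->{pending}}) {
    next if $opt eq 'delete';
    push @log, "hotplug $opt";
}

print "$_\n" for @log;   # unplug net0, then hotplug net1
```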

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 4/6] vmconfig_hotplug_pending : add tablet hotplug

2014-11-17 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm |8 
 1 file changed, 8 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 27e6957..2542059 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3538,8 +3538,16 @@ sub vmconfig_hotplug_pending {
 
 #hotplug
 foreach my $opt (keys %{$conf->{pending}}) {
+
if ($opt =~ m/^net(\d+)$/) { 
vmconfig_update_net($storecfg, $conf, $vmid, $opt);
+   }elsif ($opt eq 'tablet'){
+
+   if($conf->{pending}->{$opt} == 1){
+   PVE::QemuServer::vm_deviceplug(undef, $conf, $vmid, $opt, 
$conf->{pending}->{$opt});
+   } elsif($conf->{pending}->{$opt} == 0){
+   PVE::QemuServer::vm_deviceunplug($vmid, $conf, $opt);
+   }
}
 }
 
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 6/6] vmconfig_hotplug_pending : add update_disk

2014-11-17 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier 
---
 PVE/API2/Qemu.pm  |   83 ++--
 PVE/QemuServer.pm |   99 +++--
 2 files changed, 99 insertions(+), 83 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b87389f..9107543 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -733,85 +733,6 @@ my $safe_num_ne = sub {
 return $a != $b;
 };
 
-my $vmconfig_update_disk = sub {
-my ($rpcenv, $authuser, $conf, $storecfg, $vmid, $opt, $value, $force) = 
@_;
-
-my $drive = PVE::QemuServer::parse_drive($opt, $value);
-
-if (PVE::QemuServer::drive_is_cdrom($drive)) { #cdrom
-   $rpcenv->check_vm_perm($authuser, $vmid, undef, ['VM.Config.CDROM']);
-} else {
-   $rpcenv->check_vm_perm($authuser, $vmid, undef, ['VM.Config.Disk']);
-}
-
-if ($conf->{$opt}) {
-
-   if (my $old_drive = PVE::QemuServer::parse_drive($opt, $conf->{$opt}))  
{
-
-   my $media = $drive->{media} || 'disk';
-   my $oldmedia = $old_drive->{media} || 'disk';
-   die "unable to change media type\n" if $media ne $oldmedia;
-
-   if (!PVE::QemuServer::drive_is_cdrom($old_drive) &&
-   ($drive->{file} ne $old_drive->{file})) {  # delete old disks
-
-   &$vmconfig_delete_option($rpcenv, $authuser, $conf, $storecfg, 
$vmid, $opt, $force);
-   $conf = PVE::QemuServer::load_config($vmid); # update/reload
-   }
-
-if(&$safe_num_ne($drive->{mbps}, $old_drive->{mbps}) ||
-   &$safe_num_ne($drive->{mbps_rd}, $old_drive->{mbps_rd}) ||
-   &$safe_num_ne($drive->{mbps_wr}, $old_drive->{mbps_wr}) ||
-   &$safe_num_ne($drive->{iops}, $old_drive->{iops}) ||
-   &$safe_num_ne($drive->{iops_rd}, $old_drive->{iops_rd}) ||
-   &$safe_num_ne($drive->{iops_wr}, $old_drive->{iops_wr}) ||
-   &$safe_num_ne($drive->{mbps_max}, $old_drive->{mbps_max}) ||
-   &$safe_num_ne($drive->{mbps_rd_max}, $old_drive->{mbps_rd_max}) 
||
-   &$safe_num_ne($drive->{mbps_wr_max}, $old_drive->{mbps_wr_max}) 
||
-   &$safe_num_ne($drive->{iops_max}, $old_drive->{iops_max}) ||
-   &$safe_num_ne($drive->{iops_rd_max}, $old_drive->{iops_rd_max}) 
||
-   &$safe_num_ne($drive->{iops_wr_max}, 
$old_drive->{iops_wr_max})) {
-   PVE::QemuServer::qemu_block_set_io_throttle($vmid,"drive-$opt",
-  ($drive->{mbps} || 
0)*1024*1024,
-  ($drive->{mbps_rd} 
|| 0)*1024*1024,
-  ($drive->{mbps_wr} 
|| 0)*1024*1024,
-  $drive->{iops} || 0,
-  $drive->{iops_rd} || 
0,
-  $drive->{iops_wr} || 
0,
-  ($drive->{mbps_max} 
|| 0)*1024*1024,
-  
($drive->{mbps_rd_max} || 0)*1024*1024,
-  
($drive->{mbps_wr_max} || 0)*1024*1024,
-  $drive->{iops_max} 
|| 0,
-  
$drive->{iops_rd_max} || 0,
-  
$drive->{iops_wr_max} || 0)
-  if !PVE::QemuServer::drive_is_cdrom($drive);
-}
-   }
-}
-
-&$create_disks($rpcenv, $authuser, $conf, $storecfg, $vmid, undef, {$opt 
=> $value});
-PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
-
-$conf = PVE::QemuServer::load_config($vmid); # update/reload
-$drive = PVE::QemuServer::parse_drive($opt, $conf->{$opt});
-
-if (PVE::QemuServer::drive_is_cdrom($drive)) { # cdrom
-
-   if (PVE::QemuServer::check_running($vmid)) {
-   if ($drive->{file} eq 'none') {
-   PVE::QemuServer::vm_mon_cmd($vmid, "eject",force => 
JSON::true,device => "drive-$opt");
-   } else {
-   my $path = PVE::QemuServer::get_iso_path($storecfg, $vmid, 
$drive->{file});
-   PVE::QemuServer::vm_mon_cmd($vmid, "eject",force => 
JSON::true,device => "drive-$opt"); #force eject if locked
-   PVE::QemuServer::vm_mon_cmd($vmid, "change",device => 
"drive-$opt",target => "$path") if $path;
-   }
-   }
-
-} else { # hotplug new disks
-
-   die "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug($storecfg, 
$conf, $vmid, $opt, $drive);
-}
-};
 
 # POST/PUT {vmid}/config implementation
 #
@@ -995,8 +916,8 @@ my $update_vm_api  = sub {
 
if (PVE::QemuServer::valid_drivename($opt)) {
 
-   &$vmconfig_update_disk($rpcenv, $authu

[pve-devel] [PATCH 5/6] vmconfig_hotplug_pending : add cpu hotplug

2014-11-17 Thread Alexandre Derumier
If CPU hotplug is not possible,
we simply return to keep the changes pending.

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm |   27 +--
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2542059..067bc1c 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3272,24 +3272,28 @@ sub qemu_netdevdel {
 sub qemu_cpu_hotplug {
 my ($vmid, $conf, $cores) = @_;
 
-die "new cores config is not defined" if !$cores;
-die "you can't add more cores than maxcpus"
-   if $conf->{maxcpus} && ($cores > $conf->{maxcpus});
+return if !$cores;
+return if !$conf->{maxcpus};
 return if !check_running($vmid);
 
+return if $cores > $conf->{maxcpus};
+
 my $currentcores = $conf->{cores} if $conf->{cores};
-die "current cores is not defined" if !$currentcores;
-die "maxcpus is not defined" if !$conf->{maxcpus};
-raise_param_exc({ 'cores' => "online cpu unplug is not yet possible" })
-   if($cores < $currentcores);
+return if !$currentcores;
+
+return if($cores < $currentcores); # unplug is not yet possible
 
 my $currentrunningcores = vm_mon_cmd($vmid, "query-cpus");
-raise_param_exc({ 'cores' => "cores number if running vm is different than 
configuration" })
-   if scalar (@{$currentrunningcores}) != $currentcores;
+return if scalar (@{$currentrunningcores}) != $currentcores;
 
 for(my $i = $currentcores; $i < $cores; $i++) {
vm_mon_cmd($vmid, "cpu-add", id => int($i));
 }
+
+$conf->{cores} = $cores;
+delete $conf->{pending}->{cores};
+PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
+
 }
 
 sub qemu_block_set_io_throttle {
@@ -3548,7 +3552,10 @@ sub vmconfig_hotplug_pending {
} elsif($conf->{pending}->{$opt} == 0){
PVE::QemuServer::vm_deviceunplug($vmid, $conf, $opt);
}
-   }
+   } elsif($opt eq 'cores'){
+   PVE::QemuServer::qemu_cpu_hotplug($vmid, $conf, 
$conf->{pending}->{$opt});
+}
+
 }
 
 }
-- 
1.7.10.4
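The pending-friendly guard style of this hunk can be sketched in isolation (simplified stand-in; the real code also queries QEMU via query-cpus and issues cpu-add):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Simplified stand-in for qemu_cpu_hotplug: every unmet precondition
# returns instead of dying, so the change simply stays in [PENDING].
sub try_cpu_hotplug {
    my ($conf, $cores, $running_cores) = @_;
    return 0 if !$cores || !$conf->{maxcpus};
    return 0 if $cores > $conf->{maxcpus};
    return 0 if !$conf->{cores};
    return 0 if $cores < $conf->{cores};          # unplug not yet possible
    return 0 if $running_cores != $conf->{cores};
    # ...here the real code sends one "cpu-add" per missing core...
    $conf->{cores} = $cores;
    delete $conf->{pending}->{cores};
    return 1;
}

my $conf = { cores => 2, maxcpus => 4, pending => { cores => 3 } };
if (try_cpu_hotplug($conf, $conf->{pending}->{cores}, 2)) {
    print "applied, cores now $conf->{cores}\n";  # applied, cores now 3
} else {
    print "kept pending\n";
}
```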

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Dietmar Maurer
> >>Great, thanks!
> 
> Ok, I can hotplug|unplug disk,nic,cpu,tablet with some small rework of my
> previous patches.
> 
> I'll send patch at the end of the day.
> 
> 
> The only thing I can't manage currently with new code, is drive swap. (update 
> an
> existing disk volid by another one).

What is the problem with that?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : implement pending changes v2

2014-11-17 Thread Alexandre DERUMIER
>>What is the problem with that?

No problem, I have fixed my code. (It was using &drive_delete previously, but I 
can simply use device_unplug directly)


- Mail original - 

De: "Dietmar Maurer"  
À: "Alexandre DERUMIER"  
Cc: pve-devel@pve.proxmox.com 
Envoyé: Lundi 17 Novembre 2014 16:45:35 
Objet: RE: [pve-devel] qemu-server : implement pending changes v2 

> >>Great, thanks! 
> 
> Ok, I can hotplug|unplug disk,nic,cpu,tablet with some small rework of my 
> previous patches. 
> 
> I'll send patch at the end of the day. 
> 
> 
> The only thing I can't manage currently with new code, is drive swap. (update 
> an 
> existing disk volid by another one). 

What is the problem with that? 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Dietmar Maurer
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 141a21c..b948de8
> 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -3521,12 +3521,18 @@ sub vmconfig_hotplug_pending {
>   $conf = load_config($vmid); # update/reload
>  }
> 
> -return if !$conf->{hotplug};
> +#return if !$conf->{hotplug};  #some changes can't be done also
> + without hotplug

is this intentional?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Alexandre DERUMIER
>>is this intentional?

Yes, that's why I added a comment.

We can change some values (disk throttle, disk backups, nic vlan, ...) even if 
we don't have hotplug enabled.



- Mail original - 

De: "Dietmar Maurer"  
À: "Alexandre Derumier" , pve-devel@pve.proxmox.com 
Envoyé: Lundi 17 Novembre 2014 16:50:42 
Objet: RE: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net 

> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 141a21c..b948de8 
> 100644 
> --- a/PVE/QemuServer.pm 
> +++ b/PVE/QemuServer.pm 
> @@ -3521,12 +3521,18 @@ sub vmconfig_hotplug_pending { 
> $conf = load_config($vmid); # update/reload 
> } 
> 
> - return if !$conf->{hotplug}; 
> + #return if !$conf->{hotplug}; #some changes can't be done also 
> + without hotplug 

is this intentional? 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Dietmar Maurer
> >>is this intentional?
> 
> Yes, that's why I added a comment.
> 
> We can change some values (disk throttle, disk backups, NIC VLAN, ...) even
> if we
> don't have hotplug enabled.

Yes, like balloon. My idea was to do those things before this line (see 
balloon). 


Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Dietmar Maurer
> diff --git a/pve-bridge b/pve-bridge
> index d6c5eb8..caee33b 100755
> --- a/pve-bridge
> +++ b/pve-bridge
> @@ -20,6 +20,10 @@ my $migratedfrom = $ENV{PVE_MIGRATED_FROM};
> 
>  my $conf = PVE::QemuServer::load_config($vmid, $migratedfrom);
> 
> +if ($conf->{pending}->{$netid}){
> +$conf = $conf->{pending};
> +}
> +
>  die "unable to get network config '$netid'\n"
>  if !$conf->{$netid};
> 

This also looks problematic. What if someone sets hotplug=0? Doesn't this also
use wrong values if the VM is migrated?



Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Alexandre DERUMIER
>>This also looks problematic. What if someone sets hotplug=0? Doesn't this
>>also use wrong values
>>if the VM is migrated?

I manage this inside deviceplug|unplug.
(I remove the pending device at the end)


sub vm_deviceplug {

    return 1 if !$conf->{hotplug};
    ...
    # delete pending device after hotplug
    if ($conf->{pending}->{$deviceid}) {
        $conf->{$deviceid} = $optvalue;
        delete $conf->{pending}->{$deviceid};
        PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
    }
}
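The flow described above — hotplug the device first, then promote the pending value into the live config — can be sketched in Python (a minimal model of the Perl logic; the dict-based config and the helper name are illustrative, not actual PVE code):

```python
def apply_pending_after_hotplug(conf, deviceid):
    """Promote a pending device value into the live config once the
    hotplug succeeded, mirroring the end of vm_deviceplug()."""
    # Hotplug disabled: leave the change queued under 'pending'.
    if not conf.get("hotplug"):
        return False
    pending = conf.setdefault("pending", {})
    if deviceid in pending:
        # Move the value from [PENDING] into [CONF].
        conf[deviceid] = pending.pop(deviceid)
    return True

conf = {"hotplug": 1, "pending": {"virtio0": "local:1"}}
apply_pending_after_hotplug(conf, "virtio0")
assert conf["virtio0"] == "local:1"
assert "virtio0" not in conf["pending"]
```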



Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Dietmar Maurer
> >>This also looks problematic. What if someone sets hotplug=0? Doesn't this
> >>also use wrong values if the VM is migrated?
> 
> I manage this inside deviceplug|unplug.
> (I remove the pending device at the end)
> 
> 
> sub vm_deviceplug {
> 
>return 1 if !$conf->{hotplug};
> ...
>#delete pending device after hotplug
> if($conf->{pending}->{$deviceid}){
> $conf->{$deviceid} = $optvalue;
> delete $conf->{pending}->{$deviceid};
> PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
> }
> }

What if someone wants:

hotplug: 0

Then you want to keep changes in [PENDING], without applying them.



Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Dietmar Maurer
> > >>This also looks problematic. What if someone sets hotplug=0? Doesn't
> > >>this also use wrong values if the VM is migrated?
> >
> > I manage this inside deviceplug|unplug.
> > (I remove the pending device at the end)
> >
> >
> > sub vm_deviceplug {
> > 
> >return 1 if !$conf->{hotplug};
> > ...
> >#delete pending device after hotplug
> > if($conf->{pending}->{$deviceid}){
> > $conf->{$deviceid} = $optvalue;
> > delete $conf->{pending}->{$deviceid};
> > PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
> > }
> > }
> 
> What if someone wants:
> 
> hotplug: 0
> 
> Then you want to keep changes in [PENDING], without applying them.

Note: you always use the value from [PENDING] inside pve-bridge



Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Alexandre DERUMIER
>>What if someone wants:
>>
>>hotplug: 0
>>
>>Then you want to keep changes in [PENDING], without applying them.

Yes, exactly. That's how my patches work.

hot-unplug
--
[CONF]
virtio0:
hotplug:0


qm set 110 -delete virtio0

[CONF]
virtio0:
hotplug:0
[PENDING]
delete: virtio0



hotplug

[CONF]
hotplug:0

qm set 110 -virtio0 local:1

[CONF]
hotplug:0
[PENDING]
virtio0: 
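The two examples above boil down to one rule: with hotplug:0, both sets and deletes land in [PENDING] instead of the live section. A minimal Python model, assuming the section layout from the examples (the `queue_change` helper is hypothetical):

```python
def queue_change(conf, key, value=None, delete=False):
    """Record a config change; with hotplug:0 it stays pending."""
    pending = conf.setdefault("pending", {})
    if delete:
        # 'delete: virtio0' style entry in [PENDING]
        deleted = set(pending.get("delete", "").split(",")) - {""}
        deleted.add(key)
        pending["delete"] = ",".join(sorted(deleted))
    else:
        pending[key] = value

conf = {"hotplug": 0, "virtio0": "local:1"}
queue_change(conf, "virtio0", delete=True)   # qm set 110 -delete virtio0
assert conf["pending"]["delete"] == "virtio0"
queue_change(conf, "virtio1", "local:2")     # qm set 110 -virtio1 local:2
assert conf["pending"]["virtio1"] == "local:2"
assert conf["virtio0"] == "local:1"          # live value untouched
```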







Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Alexandre DERUMIER
>>Note: you always use the value from [PENDING] inside pve-bridge

if ($conf->{pending}->{$netid}) {
    $conf = $conf->{pending};
}

So, yes, I always use the pending conf if it exists,
in case we want to hotplug it.

But on VM start, I think the pending conf is already removed, right?





Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Dietmar Maurer
> >>Note: you always use the value from [PENDING] inside pve-bridge
> 
> if ($conf->{pending}->{$netid}){
> $conf = $conf->{pending};
> }
> 
> So, yes, I always use the pending conf if it exists, in case we want to
> hotplug it.
> 
> But, on vm start, I think the pending conf is already removed right ?

Yes, right. But only if the VM is not migrated. So I guess you need to prevent
that if $migratedfrom is set?

if ($conf->{pending}->{$netid} && !defined($migratedfrom)) {
    $conf = $conf->{pending};
}
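That guard can be sketched as follows (a Python model of the pve-bridge selection logic; function and variable names are illustrative):

```python
def select_net_conf(conf, netid, migratedfrom=None):
    """Pick the config section pve-bridge should use for netid:
    prefer a pending value, but never during an incoming migration."""
    pending = conf.get("pending", {})
    if netid in pending and migratedfrom is None:
        return pending
    return conf

conf = {"net0": "old", "pending": {"net0": "new"}}
assert select_net_conf(conf, "net0")["net0"] == "new"
# During an incoming migration the VM keeps running with the old config:
assert select_net_conf(conf, "net0", migratedfrom="node1")["net0"] == "old"
```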


Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Alexandre DERUMIER
>>Yes, right. But only if the vm is not migrated. So I guess you need to 
>>prevent that 
>>if $migratedfrom is set? 

Oh, yes, indeed, you are right, I totally missed this case.



Re: [pve-devel] [PATCH 2/6] vmconfig_hotplug_pending : add update_net

2014-11-17 Thread Dietmar Maurer

> >>Yes, right. But only if the vm is not migrated. So I guess you need to
> >>prevent that if $migratedfrom is set?
> 
> Oh, yes, indeed, you are right, I totally missed this case.

OK, I will try to resend the whole series including your work tomorrow.


[pve-devel] Compiling the latest pve-kernel version gives a warning message about openvswitch

2014-11-17 Thread Cesar Peschiera

Hi

I compiled the "pve-kernel-2.6.32-34-pve" kernel to get support for DRBD
8.4.5, and I got these messages:

root@PVE2:~/drbd# module-assistant auto-install drbd8

Updated infos about 1 packages
Getting source for kernel version: 2.6.32-34-pve
Kernel headers available in /usr/src/linux-headers-2.6.32-34-pve
Creating symlink...
apt-get install build-essential
Reading package lists... Done
Building dependency tree
Reading state information... Done
build-essential is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Done!
unpack
Extracting the package tarball, /usr/src/drbd8.tar.gz, please wait...
"/usr/share/modass/overrides/drbd8-module-source" build KVERS=2.6.32-34-pve
KSRC=/usr/src/linux kdist_image
find: `/usr/src/modules/drbd8*': No such file or directory
Done with /usr/src/drbd8-module-2.6.32-34-pve_8.4.5-1_amd64.deb .
dpkg -Ei /usr/src/drbd8-module-2.6.32-34-pve_8.4.5-1_amd64.deb
Selecting previously unselected package drbd8-module-2.6.32-34-pve.
(Reading database ... 68205 files and directories currently installed.)
Unpacking drbd8-module-2.6.32-34-pve (from
.../drbd8-module-2.6.32-34-pve_8.4.5-1_amd64.deb) ...
Setting up drbd8-module-2.6.32-34-pve (2:8.4.5-1) ...
WARNING: /lib/modules/2.6.32-34-pve/kernel/net/openvswitch/brcompat.ko needs
unknown symbol ovs_dp_ioctl_hook

But the final openvswitch "WARNING" message worries me; I don't know whether
it is important or not.

Moreover, this "WARNING" message did not appear when I compiled the
"pve-kernel-2.6.32-32-pve" kernel.

Is this a problem?
What is happening with my compilation and openvswitch?

Thanks
Cesar



Re: [pve-devel] [PATCH] Bug#579:

2014-11-17 Thread Dietmar Maurer
applied

> add a check whether the START parameter is set in /etc/default/pve-manager
> If START="no", no VM will start when 'pve-manager start' is called
> If START is not "no", or is not present, VMs will use the boot_at_start flag
