Re: [pve-devel] add cloudinit support to proxmox ?

2015-05-06 Thread Alexandre DERUMIER
looks promising! 

I'll try to build a prototype; it doesn't seem too complex.
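
As a very rough illustration of the idea (a sketch only: the generate_cloudinit_iso helper, the vmid.conf fields used here and the temporary paths are assumptions, not existing PVE code), a NoCloud-style seed ISO could be built per VM roughly like this:

    # untested sketch; assumes PVE::Tools is available
    sub generate_cloudinit_iso {
        my ($vmid, $conf, $isofile) = @_;

        my $dir = "/tmp/cloudinit-$vmid";
        mkdir $dir if ! -d $dir;

        # the NoCloud datasource expects 'meta-data' and 'user-data' files
        PVE::Tools::file_set_contents("$dir/meta-data",
            "instance-id: pve-$vmid\nlocal-hostname: $conf->{hostname}\n");
        PVE::Tools::file_set_contents("$dir/user-data",
            "#cloud-config\ntimezone: $conf->{timezone}\n");

        # volume id 'cidata' is what cloud-init looks for on first boot
        PVE::Tools::run_command(['genisoimage', '-quiet', '-output', $isofile,
            '-volid', 'cidata', '-joliet', '-rock',
            "$dir/user-data", "$dir/meta-data"]);
    }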

----- Original Message -----
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com, pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, 6 May 2015 06:02:38
Subject: Re: [pve-devel] add cloudinit support to proxmox ?

> could be useful to set up hostname, timezone and IP address for the client.
> (we need to add some fields in vmid.conf, and generate a virtual iso or floppy
> for the first boot of the vm.)
>
> What do you think about it?

looks promising! 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-05-06 Thread Alexandre DERUMIER
Seems to be ok!


BTW,
I wonder if we could replace the qmp "snapshot-drive" command (from
internal-snapshot-async.patch)

with the upstream qemu qmp method

blockdev-snapshot-internal-sync

http://git.qemu.org/?p=qemu.git;a=blob_plain;f=qmp-commands.hx;hb=HEAD

One advantage is that multiple drives can be snapshotted in a transaction.

Historically we implemented snapshot-drive because no such method existed at
the time.

I think we just need to keep savevm_start and savevm_stop from
internal-snapshot-async.patch, to be able to save the vmstate to an external
file/volume.
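
To illustrate the transaction advantage, a rough, untested sketch of how several drives might be snapshotted atomically through the existing vm_mon_cmd wrapper (the @deviceids list and $snapname here are assumptions for the example):

    # build one 'blockdev-snapshot-internal-sync' action per drive
    my @actions = map { +{
        type => 'blockdev-snapshot-internal-sync',
        data => { device => $_, name => $snapname },
    } } @deviceids;

    # the QMP 'transaction' command applies all actions atomically
    vm_mon_cmd($vmid, 'transaction', actions => \@actions);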




blockdev-snapshot-internal-sync
-------------------------------

Synchronously take an internal snapshot of a block device when the format of
image used supports it.  If the name is an empty string, or a snapshot with
name already exists, the operation will fail.

Arguments:

- device: device name to snapshot (json-string)
- name: name of the new snapshot (json-string)

Example:

-> { "execute": "blockdev-snapshot-internal-sync",
                "arguments": { "device": "ide-hd0",
                               "name": "snapshot0"
                             }
   }
<- { "return": {} }

EQMP

    {
        .name       = "blockdev-snapshot-delete-internal-sync",
        .args_type  = "device:B,id:s?,name:s?",
        .mhandler.cmd_new =
                      qmp_marshal_input_blockdev_snapshot_delete_internal_sync,
    },

SQMP
blockdev-snapshot-delete-internal-sync
--------------------------------------

Synchronously delete an internal snapshot of a block device when the format of
image used supports it.  The snapshot is identified by name or id or both.  One
of name or id is required.  If the snapshot is not found, the operation will
fail.

Arguments:

- device: device name (json-string)
- id: ID of the snapshot (json-string, optional)
- name: name of the snapshot (json-string, optional)

Example:

-> { "execute": "blockdev-snapshot-delete-internal-sync",
                "arguments": { "device": "ide-hd0",
                               "name": "snapshot0"
                             }
   }
<- { "return": {
                   "id": "1",
                   "name": "snapshot0",
                   "vm-state-size": 0,
                   "date-sec": 112,
                   "date-nsec": 10,
                   "vm-clock-sec": 100,
                   "vm-clock-nsec": 20
     }
   }


----- Original Message -----
From: Wolfgang Link w.l...@proxmox.com
To: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, 6 May 2015 09:57:34
Subject: [pve-devel] [PATCH] remove running from Storage and check it in
QemuServer

It is better to check whether a VM is running in QemuServer than in Storage.
For the Storage layer there is no difference whether it is running or not.

Signed-off-by: Wolfgang Link w.l...@proxmox.com 
--- 
PVE/QemuServer.pm | 29 +++-- 
1 file changed, 23 insertions(+), 6 deletions(-) 

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm 
index 708b208..9a4e2ee 100644 
--- a/PVE/QemuServer.pm 
+++ b/PVE/QemuServer.pm 
@@ -31,6 +31,8 @@ use PVE::QMPClient; 
use PVE::RPCEnvironment; 
use Time::HiRes qw(gettimeofday); 

+my $qemu_snap_storage = {rbd => 1, sheepdog => 1}; 
+ 
my $cpuinfo = PVE::ProcFSTools::read_cpuinfo(); 

# Note about locking: we use flock on the config file protect 
@@ -3777,12 +3779,11 @@ sub qemu_volume_snapshot { 

my $running = check_running($vmid); 

- return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, $running); 
- 
- return if !$running; 
- 
- vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap); 
- 
+ if ($running && do_snapshots_with_qemu($storecfg, $volid)){ 
+ vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap); 
+ } else { 
+ PVE::Storage::volume_snapshot($storecfg, $volid, $snap); 
+ } 
} 

sub qemu_volume_snapshot_delete { 
@@ -5772,6 +5773,22 @@ my $savevm_wait = sub { 
} 
}; 

+sub do_snapshots_with_qemu { 
+ my ($storecfg, $volid) = @_; 
+ 
+ my $storage_name = PVE::Storage::parse_volume_id($volid); 
+ 
+ if ($qemu_snap_storage->{$storecfg->{ids}->{$storage_name}->{type}} ){ 
+ return 1; 
+ } 
+ 
+ if ($volid =~ m/\.(qcow2|qed)$/){ 
+ return 1; 
+ } 
+ 
+ return undef; 
+} 
+ 
sub snapshot_create { 
my ($vmid, $snapname, $save_vmstate, $comment) = @_; 

-- 
2.1.4 


___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH_V2] remove running from Storage and check it in QemuServer

2015-05-06 Thread Wolfgang Link
It is better to check whether a VM is running in QemuServer than in Storage.
For the Storage layer there is no difference whether it is running or not.

Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/QemuServer.pm | 29 +++--
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 708b208..9a4e2ee 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -31,6 +31,8 @@ use PVE::QMPClient;
 use PVE::RPCEnvironment;
 use Time::HiRes qw(gettimeofday);
 
+my $qemu_snap_storage = {rbd => 1, sheepdog => 1};
+
 my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
 
 # Note about locking: we use flock on the config file protect
@@ -3777,12 +3779,11 @@ sub qemu_volume_snapshot {
 
 my $running = check_running($vmid);
 
-return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, $running);
-
-return if !$running;
-
-vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap);
-
+if ($running && do_snapshots_with_qemu($storecfg, $volid)){
+   vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap);
+} else {
+   PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
+}
 }
 
 sub qemu_volume_snapshot_delete {
@@ -5772,6 +5773,22 @@ my $savevm_wait = sub {
 }
 };
 
+sub do_snapshots_with_qemu {
+my ($storecfg, $volid) = @_;
+
+my $storage_name = PVE::Storage::parse_volume_id($volid);
+
+if ($qemu_snap_storage->{$storecfg->{ids}->{$storage_name}->{type}} ){
+   return 1;
+}
+
+if ($volid =~ m/\.(qcow2|qed)$/){
+   return 1;
+}
+
+return undef;
+}
+
 sub snapshot_create {
 my ($vmid, $snapname, $save_vmstate, $comment) = @_;
 
-- 
2.1.4
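
For symmetry, the same pattern could presumably be applied to snapshot deletion as well; a rough, untested sketch (not part of this patch, and assuming the existing "delete-drive-snapshot" monitor command and the current PVE::Storage::volume_snapshot_delete signature stay unchanged):

    sub qemu_volume_snapshot_delete {
        my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_;

        my $running = check_running($vmid);

        # delete via qemu only when the VM runs and the storage supports it
        if ($running && do_snapshots_with_qemu($storecfg, $volid)) {
            vm_mon_cmd($vmid, "delete-drive-snapshot",
                device => $deviceid, name => $snap);
        } else {
            PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snap, $running);
        }
    }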


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-05-06 Thread Dietmar Maurer
I still see calls to that function using the $running parameter:

PVE/Storage/RBDPlugin.pm:$class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);
PVE/Storage/SheepdogPlugin.pm:$class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);
PVE/Storage/ZFSPlugin.pm:$class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);
PVE/Storage/ZFSPoolPlugin.pm:$class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);

?
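
If so, those call sites would presumably need the same signature change; an illustrative sketch only, shown here for the RBDPlugin.pm call reported above:

    -    $class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);
    +    $class->volume_snapshot($scfg, $storeid, $newname, $snap);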

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] Add manager5 directory in case of missing development symlink

2015-05-06 Thread Emmanuel Kasper
---
 Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Makefile b/Makefile
index 429264b..a09393c 100644
--- a/Makefile
+++ b/Makefile
@@ -102,6 +102,8 @@ install: country.dat vznet.conf vzdump.conf vzdump-hook-script.pl pve-apt.conf p
install -m 0644 copyright ${DOCDIR}
install -m 0644 debian/changelog.Debian ${DOCDIR}
install -m 0644 country.dat ${DESTDIR}/usr/share/${PACKAGE}
+   # temporary: set ExtJS 5 migration devel directory
+   install -d ${DESTDIR}/usr/share/${PACKAGE}/manager5
set -e && for i in ${SUBDIRS}; do ${MAKE} -C $$i $@; done
 
 .PHONY: distclean
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-05-06 Thread Wolfgang Link
It is better to check whether a VM is running in QemuServer than in Storage.
For the Storage layer there is no difference whether it is running or not.

Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/Storage.pm   | 4 ++--
 PVE/Storage/ISCSIDirectPlugin.pm | 2 +-
 PVE/Storage/LVMPlugin.pm | 2 +-
 PVE/Storage/Plugin.pm| 4 +---
 PVE/Storage/RBDPlugin.pm | 4 +---
 PVE/Storage/SheepdogPlugin.pm| 4 +---
 PVE/Storage/ZFSPoolPlugin.pm | 2 +-
 7 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index b542ee6..92c7d14 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -162,13 +162,13 @@ sub volume_rollback_is_possible {
 }
 
 sub volume_snapshot {
-my ($cfg, $volid, $snap, $running) = @_;
+my ($cfg, $volid, $snap) = @_;
 
 my ($storeid, $volname) = parse_volume_id($volid, 1);
 if ($storeid) {
 my $scfg = storage_config($cfg, $storeid);
 my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
-return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap, $running);
+return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap);
 } elsif ($volid =~ m|^(/.+)$| && -e $volid) {
 die "snapshot file/device '$volid' is not possible\n";
 } else {
diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index c957ade..763c482 100644
--- a/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/PVE/Storage/ISCSIDirectPlugin.pm
@@ -205,7 +205,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+my ($class, $scfg, $storeid, $volname, $snap) = @_;
 die "volume snapshot is not possible on iscsi device";
 }
 
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 1688bb5..19eb78c 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -456,7 +456,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
 die "lvm snapshot is not implemented";
 }
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 5b72b07..f119068 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -641,12 +641,10 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
 die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
 
-return 1 if $running;
-
 my $path = $class->filesystem_path($scfg, $volname);
 
 my $cmd = ['/usr/bin/qemu-img', 'snapshot','-c', $snap, $path];
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 2c45a68..878fa16 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -510,9 +510,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
-
-return 1 if $running;
+my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
 my ($vtype, $name, $vmid) = $class->parse_volname($volname);
 
diff --git a/PVE/Storage/SheepdogPlugin.pm b/PVE/Storage/SheepdogPlugin.pm
index 3e2c126..e358f9e 100644
--- a/PVE/Storage/SheepdogPlugin.pm
+++ b/PVE/Storage/SheepdogPlugin.pm
@@ -389,9 +389,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
-
-return 1 if $running;
+my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
 my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
$class->parse_volname($volname);
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 39fc348..1064869 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -415,7 +415,7 @@ sub volume_size_info {
 }
 
 sub volume_snapshot {
-my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
 $class->zfs_request($scfg, undef, 'snapshot', "$scfg->{pool}/$volname\@$snap");
 }
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-05-06 Thread Wolfgang Link
It is better to check whether a VM is running in QemuServer than in Storage.
For the Storage layer there is no difference whether it is running or not.

Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/QemuServer.pm | 29 +++--
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 708b208..9a4e2ee 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -31,6 +31,8 @@ use PVE::QMPClient;
 use PVE::RPCEnvironment;
 use Time::HiRes qw(gettimeofday);
 
+my $qemu_snap_storage = {rbd => 1, sheepdog => 1};
+
 my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
 
 # Note about locking: we use flock on the config file protect
@@ -3777,12 +3779,11 @@ sub qemu_volume_snapshot {
 
 my $running = check_running($vmid);
 
-return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, $running);
-
-return if !$running;
-
-vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap);
-
+if ($running && do_snapshots_with_qemu($storecfg, $volid)){
+   vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap);
+} else {
+   PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
+}
 }
 
 sub qemu_volume_snapshot_delete {
@@ -5772,6 +5773,22 @@ my $savevm_wait = sub {
 }
 };
 
+sub do_snapshots_with_qemu {
+my ($storecfg, $volid) = @_;
+
+my $storage_name = PVE::Storage::parse_volume_id($volid);
+
+if ($qemu_snap_storage->{$storecfg->{ids}->{$storage_name}->{type}} ){
+   return 1;
+}
+
+if ($volid =~ m/\.(qcow2|qed)$/){
+   return 1;
+}
+
+return undef;
+}
+
 sub snapshot_create {
 my ($vmid, $snapname, $save_vmstate, $comment) = @_;
 
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] STRANGE: No Emergency - Informations in Proxmox-Doku

2015-05-06 Thread Alexandre DERUMIER
a) How you can block the autostart of pve-firewall in rescue mode

In /etc/default/pve-firewall, set:

START_FIREWALL=no


b) How you can block the autostart of all containers

Add a file /etc/default/pve-manager containing:

START=no


c) How you can change the kernel to an old version

Old kernels are not removed, so you just need to edit your
/boot/grub/grub.cfg
and reorder the kernel entries; the first one is chosen by default.

----- Original Message -----
From: Detlef Bracker brac...@1awww.com
To: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, 6 May 2015 12:57:54
Subject: [pve-devel] STRANGE: No Emergency - Informations in Proxmox-Doku

Dear, 

A kernel change can leave your host in an unusable state, and the documentation
contains no correct information (or it is missing entirely) about these
important things:

a) How you can block the autostart of pve-firewall in rescue mode.
As a workaround we removed the startup links in /etc/rc*.*/*pve-firewall.
b) How you can block the autostart of all containers (please
understand, when you have mounted the LVM, /etc/pve/openvz
does not exist)!!!
A workaround would be to remove the /vz/root/* dirs, but this is not
a good idea, because they will be recreated automatically at
different times!
c) How you can change the kernel to an old version! Everything we have
tested with grub does not work! We searched via Google
and found nothing that helped!

I can tell you we had problems for one week on one host and half a day on
the other host!

So anybody can run into this on a production system! And then you begin to
search for a resolution!

Regards 

Detlef 


___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] ext5migrate: remove Ext.grid.feature.Chunking hack

2015-05-06 Thread Emmanuel Kasper
With ExtJS 4, we introduced an override of Ext.grid.feature.Chunking
to fix scrolling problems in the grid when there is a high number
of nodes. Ext.grid.feature.Chunking has been removed from ExtJS
in version 5, so either the problem is fixed on the ExtJS side,
or we will have to find a different workaround.
---
 www/manager5/grid/ResourceGrid.js | 15 ---
 1 file changed, 15 deletions(-)

diff --git a/www/manager5/grid/ResourceGrid.js 
b/www/manager5/grid/ResourceGrid.js
index 7a9baa5..0449573 100644
--- a/www/manager5/grid/ResourceGrid.js
+++ b/www/manager5/grid/ResourceGrid.js
@@ -1,18 +1,3 @@
-// fixme: remove this fix
-// this hack is required for ExtJS 4.0.0
-Ext.override(Ext.grid.feature.Chunking, {
-attachEvents: function() {
-var grid = this.view.up('gridpanel'),
-scroller = grid.down('gridscroller[dock=right]');
-if (scroller === null ) {
-grid.on("afterlayout", this.attachEvents, this);
-   return;
-}
-scroller.el.on('scroll', this.onBodyScroll, this, {buffer: 300});
-},
-rowHeight: PVE.Utils.gridLineHeigh()
-});
-
 Ext.define('PVE.grid.ResourceGrid', {
 extend: 'Ext.grid.GridPanel',
 alias: ['widget.pveResourceGrid'],
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] Add required url routing for ext5 migration directory

2015-05-06 Thread Emmanuel Kasper
---
 PVE/ExtJSIndex5.pm | 4 
 1 file changed, 4 insertions(+)

diff --git a/PVE/ExtJSIndex5.pm b/PVE/ExtJSIndex5.pm
index 1fd6dbc..a3da17b 100644
--- a/PVE/ExtJSIndex5.pm
+++ b/PVE/ExtJSIndex5.pm
@@ -39,6 +39,10 @@ _EOD
 <script type="text/javascript" src="/pve2/manager5/form/LanguageSelector.js"></script>
 <script type="text/javascript" src="/pve2/manager5/form/KVComboBox.js"></script>
 <script type="text/javascript" src="/pve2/manager5/window/LoginWindow.js"></script>
+<script type="text/javascript" src="/pve2/manager5/panel/StatusPanel.js"></script>
+<script type="text/javascript" src="/pve2/manager5/panel/ConfigPanel.js"></script>
+<script type="text/javascript" src="/pve2/manager5/dc/Config.js"></script>
+<script type="text/javascript" src="/pve2/manager5/grid/ResourceGrid.js"></script>
 <script type="text/javascript" src="/pve2/ext5/packages/ext-locale/build/ext-locale-${lang}.js"></script>
 _EOD
 
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] STRANGE: No Emergency - Informations in Proxmox-Doku

2015-05-06 Thread Detlef Bracker
Dear,

A kernel change can leave your host in an unusable state, and the documentation
contains no correct information (or it is missing entirely) about these
important things:

a) How you can block the autostart of pve-firewall in rescue mode.
As a workaround we removed the startup links in /etc/rc*.*/*pve-firewall.
b) How you can block the autostart of all containers (please
understand, when you have mounted the LVM, /etc/pve/openvz
does not exist)!!!
A workaround would be to remove the /vz/root/* dirs, but this is not
a good idea, because they will be recreated automatically at
different times!
c) How you can change the kernel to an old version! Everything we have
tested with grub does not work! We searched via Google
and found nothing that helped!

I can tell you we had problems for one week on one host and half a day on
the other host!

So anybody can run into this on a production system! And then you begin to
search for a resolution!

Regards

Detlef



signature.asc
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-05-06 Thread Dietmar Maurer
> I wonder if we could replace the qmp "snapshot-drive" command (from
> internal-snapshot-async.patch)
>
> with the upstream qemu qmp method
>
> blockdev-snapshot-internal-sync
>
> http://git.qemu.org/?p=qemu.git;a=blob_plain;f=qmp-commands.hx;hb=HEAD
>
> One advantage is that multiple drives can be snapshotted in a transaction.
>
> Historically we implemented snapshot-drive because no such method existed at
> the time.
>
> I think we just need to keep savevm_start and savevm_stop from
> internal-snapshot-async.patch, to be able to save the vmstate to an external
> file/volume.

So we still need the code to save the state asynchronously (process_savevm_co),
and also the code to load it (load_state_from_blockdev). So this would just
remove a few lines of very simple code in qmp_snapshot_drive? And we still need
to handle external snapshots, so we do not gain much from the
"multiple drives can be snapshotted in a transaction"
feature (we just need to add more special cases, which makes the code more complex)?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel