Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-05-06 Thread Alexandre DERUMIER
>>So we still need the code to save the state asynchronously
>>"process_savevm_co", and also the code to load it "load_state_from_blockdev".
>>So this would just remove a few lines of very simple code in qmp_snapshot_drive?

yes, indeed. (I don't have the code in mind; if it's easy to maintain, no problem.)


>>And we still need to handle external snapshots, so we do not gain much from
>>that "multiple drives can be snapshotted in a transaction"
>>feature (we just need to add more special cases, which makes the code more
>>complex)?

yes, for transactions all drives need to support it
(so it doesn't work with zfs for example, or when mixing rbd and zfs).
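
(For reference, grouping the per-drive snapshots would go through the upstream
QMP "transaction" command; a minimal sketch, with hypothetical device names:

-> { "execute": "transaction",
     "arguments": { "actions": [
       { "type": "blockdev-snapshot-internal-sync",
         "data": { "device": "drive-virtio0", "name": "snap1" } },
       { "type": "blockdev-snapshot-internal-sync",
         "data": { "device": "drive-virtio1", "name": "snap1" } } ] } }
<- { "return": {} }

Every action has to succeed for the transaction to commit, which is exactly why
mixing in a drive whose snapshot is taken outside qemu breaks the scheme.)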




----- Original Message -----
From: "dietmar"
To: "aderumier", "Wolfgang Link"
Cc: "pve-devel"
Sent: Thursday, May 7, 2015 06:28:14
Subject: Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

> I wonder if we could replace the qmp "snapshot-drive" (from
> internal-snapshot-async.patch)
> 
> with the upstream qemu qmp method
> 
> blockdev-snapshot-internal-sync
> 
> http://git.qemu.org/?p=qemu.git;a=blob_plain;f=qmp-commands.hx;hb=HEAD
> 
> One advantage is that multiple drives can be snapshotted in a transaction.
> 
> Historically we implemented snapshot-drive because no upstream method existed
> at the time.
> 
> I think we just need to keep savevm_start, savevm_stop from
> internal-snapshot-async.patch,
> to be able to save the vmstate to an external file/volume.

So we still need the code to save the state asynchronously "process_savevm_co",
and also the code to load it "load_state_from_blockdev". So this would just
remove a few lines of very simple code in qmp_snapshot_drive? And we still need
to handle external snapshots, so we do not gain much from that
"multiple drives can be snapshotted in a transaction"
feature (we just need to add more special cases, which makes the code more complex)?


Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-05-06 Thread Dietmar Maurer
> I wonder if we could replace  qmp  "snapshot-drive" (from
> internal-snapshot-async.patch)
> 
> by upstream qemu qmp method
> 
> blockdev-snapshot-internal-sync
> 
> http://git.qemu.org/?p=qemu.git;a=blob_plain;f=qmp-commands.hx;hb=HEAD
> 
> One advantage is that multiple drive can be snapshotted in a transaction.
> 
> from history we have done this snapshot-drive because no method existed at
> this time.
> 
> I think we just need to keep savevm_start, savevm_stop from
> internal-snapshot-async.patch,
> to be able to save the vmstate to external file/volume.

So we still need the code to save the state asynchronously "process_savevm_co",
and also the code to load it "load_state_from_blockdev". So this would just
remove a few lines of very simple code in qmp_snapshot_drive? And we still need
to handle external snapshots, so we do not gain much from that
"multiple drives can be snapshotted in a transaction"
feature (we just need to add more special cases, which makes the code more complex)?



Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-05-06 Thread Dietmar Maurer
I still see calls to that function using the $running parameter:

PVE/Storage/RBDPlugin.pm:    $class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);
PVE/Storage/SheepdogPlugin.pm:    $class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);
PVE/Storage/ZFSPlugin.pm:    $class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);
PVE/Storage/ZFSPoolPlugin.pm:    $class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);

?



Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-05-06 Thread Alexandre DERUMIER
seems to be ok!


BTW,
I wonder if we could replace the qmp "snapshot-drive" (from
internal-snapshot-async.patch)

with the upstream qemu qmp method

blockdev-snapshot-internal-sync

http://git.qemu.org/?p=qemu.git;a=blob_plain;f=qmp-commands.hx;hb=HEAD

One advantage is that multiple drives can be snapshotted in a transaction.

Historically we implemented snapshot-drive because no upstream method existed
at the time.

I think we just need to keep savevm_start, savevm_stop from
internal-snapshot-async.patch,
to be able to save the vmstate to an external file/volume.
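
(For context, those PVE-specific commands bracket the vmstate write around the
per-drive snapshots. A rough sketch of the live-snapshot flow — the exact QMP
command names and arguments here are recalled from the PVE qemu patches, so
treat them as assumptions:

-> { "execute": "savevm-start",
     "arguments": { "statefile": "/path/to/vmstate-file" } }
       ... per-drive snapshot commands go here ...
-> { "execute": "savevm-end" }
)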




blockdev-snapshot-internal-sync
-------------------------------

Synchronously take an internal snapshot of a block device when the format of
the image used supports it.  If the name is an empty string, or a snapshot with
that name already exists, the operation will fail.

Arguments:

- "device": device name to snapshot (json-string)
- "name": name of the new snapshot (json-string)

Example:

-> { "execute": "blockdev-snapshot-internal-sync",
"arguments": { "device": "ide-hd0",
   "name": "snapshot0" }
   }
<- { "return": {} }

EQMP

    {
        .name       = "blockdev-snapshot-delete-internal-sync",
        .args_type  = "device:B,id:s?,name:s?",
        .mhandler.cmd_new =
          qmp_marshal_input_blockdev_snapshot_delete_internal_sync,
    },

SQMP
blockdev-snapshot-delete-internal-sync
--------------------------------------

Synchronously delete an internal snapshot of a block device when the format of
the image used supports it.  The snapshot is identified by name or id, or both.
One of name or id is required.  If the snapshot is not found, the operation
will fail.

Arguments:

- "device": device name (json-string)
- "id": ID of the snapshot (json-string, optional)
- "name": name of the snapshot (json-string, optional)

Example:

-> { "execute": "blockdev-snapshot-delete-internal-sync",
"arguments": { "device": "ide-hd0",
   "name": "snapshot0" }
   }
<- { "return": {
   "id": "1",
   "name": "snapshot0",
   "vm-state-size": 0,
       "date-sec": 112,
   "date-nsec": 10,
   "vm-clock-sec": 100,
   "vm-clock-nsec": 20
 }
   }


----- Original Message -----
From: "Wolfgang Link"
To: "pve-devel"
Sent: Wednesday, May 6, 2015 09:57:34
Subject: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

It is better to check whether a VM is running in QemuServer than in Storage.
For the Storage there is no difference whether it is running or not.

Signed-off-by: Wolfgang Link  
--- 
PVE/QemuServer.pm | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-) 

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 708b208..9a4e2ee 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -31,6 +31,8 @@ use PVE::QMPClient;
 use PVE::RPCEnvironment;
 use Time::HiRes qw(gettimeofday);
 
+my $qemu_snap_storage = {rbd => 1, sheepdog => 1};
+
 my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
 
 # Note about locking: we use flock on the config file protect
@@ -3777,12 +3779,11 @@ sub qemu_volume_snapshot {
 
     my $running = check_running($vmid);
 
-    return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, $running);
-
-    return if !$running;
-
-    vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap);
-
+    if ($running && do_snapshots_with_qemu($storecfg, $volid)){
+        vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap);
+    } else {
+        PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
+    }
 }
 
 sub qemu_volume_snapshot_delete {
@@ -5772,6 +5773,22 @@ my $savevm_wait = sub {
     }
 };
 
+sub do_snapshots_with_qemu {
+    my ($storecfg, $volid) = @_;
+
+    my $storage_name = PVE::Storage::parse_volume_id($volid);
+
+    if ($qemu_snap_storage->{$storecfg->{ids}->{$storage_name}->{type}} ){
+        return 1;
+    }
+
+    if ($volid =~ m/\.(qcow2|qed)$/){
+        return 1;
+    }
+
+    return undef;
+}
+
 sub snapshot_create {
     my ($vmid, $snapname, $save_vmstate, $comment) = @_;
 
-- 
2.1.4




[pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-05-06 Thread Wolfgang Link
It is better to check whether a VM is running in QemuServer than in Storage.
For the Storage there is no difference whether it is running or not.

Signed-off-by: Wolfgang Link 
---
 PVE/QemuServer.pm | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 708b208..9a4e2ee 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -31,6 +31,8 @@ use PVE::QMPClient;
 use PVE::RPCEnvironment;
 use Time::HiRes qw(gettimeofday);
 
+my $qemu_snap_storage = {rbd => 1, sheepdog => 1};
+
 my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
 
 # Note about locking: we use flock on the config file protect
@@ -3777,12 +3779,11 @@ sub qemu_volume_snapshot {
 
     my $running = check_running($vmid);
 
-    return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, $running);
-
-    return if !$running;
-
-    vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap);
-
+    if ($running && do_snapshots_with_qemu($storecfg, $volid)){
+        vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap);
+    } else {
+        PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
+    }
 }
 
 sub qemu_volume_snapshot_delete {
@@ -5772,6 +5773,22 @@ my $savevm_wait = sub {
     }
 };
 
+sub do_snapshots_with_qemu {
+    my ($storecfg, $volid) = @_;
+
+    my $storage_name = PVE::Storage::parse_volume_id($volid);
+
+    if ($qemu_snap_storage->{$storecfg->{ids}->{$storage_name}->{type}} ){
+        return 1;
+    }
+
+    if ($volid =~ m/\.(qcow2|qed)$/){
+        return 1;
+    }
+
+    return undef;
+}
+
 sub snapshot_create {
     my ($vmid, $snapname, $save_vmstate, $comment) = @_;
 
-- 
2.1.4




[pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-05-06 Thread Wolfgang Link
It is better to check whether a VM is running in QemuServer than in Storage.
For the Storage there is no difference whether it is running or not.

Signed-off-by: Wolfgang Link 
---
 PVE/Storage.pm   | 4 ++--
 PVE/Storage/ISCSIDirectPlugin.pm | 2 +-
 PVE/Storage/LVMPlugin.pm | 2 +-
 PVE/Storage/Plugin.pm| 4 +---
 PVE/Storage/RBDPlugin.pm | 4 +---
 PVE/Storage/SheepdogPlugin.pm| 4 +---
 PVE/Storage/ZFSPoolPlugin.pm | 2 +-
 7 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index b542ee6..92c7d14 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -162,13 +162,13 @@ sub volume_rollback_is_possible {
 }
 
 sub volume_snapshot {
-    my ($cfg, $volid, $snap, $running) = @_;
+    my ($cfg, $volid, $snap) = @_;
 
     my ($storeid, $volname) = parse_volume_id($volid, 1);
     if ($storeid) {
         my $scfg = storage_config($cfg, $storeid);
         my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
-        return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap, $running);
+        return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap);
     } elsif ($volid =~ m|^(/.+)$| && -e $volid) {
         die "snapshot file/device '$volid' is not possible\n";
     } else {
diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index c957ade..763c482 100644
--- a/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/PVE/Storage/ISCSIDirectPlugin.pm
@@ -205,7 +205,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
     die "volume snapshot is not possible on iscsi device";
 }
 
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 1688bb5..19eb78c 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -456,7 +456,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     die "lvm snapshot is not implemented";
 }
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 5b72b07..f119068 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -641,12 +641,10 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
 
-    return 1 if $running;
-
     my $path = $class->filesystem_path($scfg, $volname);
 
     my $cmd = ['/usr/bin/qemu-img', 'snapshot','-c', $snap, $path];
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 2c45a68..878fa16 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -510,9 +510,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
-
-    return 1 if $running;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     my ($vtype, $name, $vmid) = $class->parse_volname($volname);
 
diff --git a/PVE/Storage/SheepdogPlugin.pm b/PVE/Storage/SheepdogPlugin.pm
index 3e2c126..e358f9e 100644
--- a/PVE/Storage/SheepdogPlugin.pm
+++ b/PVE/Storage/SheepdogPlugin.pm
@@ -389,9 +389,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
-
-    return 1 if $running;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
        $class->parse_volname($volname);
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 39fc348..1064869 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -415,7 +415,7 @@ sub volume_size_info {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     $class->zfs_request($scfg, undef, 'snapshot', "$scfg->{pool}/$volname\@$snap");
 }
-- 
2.1.4




Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-04-30 Thread Alexandre DERUMIER
>>We need to use qemu "snapshot-drive" if the VM is running?

For qcow2 files, yes.

I found a reference here:

https://www.suse.com/documentation/sles11/book_kvm/data/cha_qemu_guest_inst_qemu-img.html

"WARNING: Do not create or delete virtual machine snapshots with the qemu-img
snapshot command while the virtual machine is running. Otherwise, you can
damage the disk image with the state of the virtual machine saved."
----- Original Message -----
From: "dietmar"
To: "aderumier"
Cc: "pve-devel"
Sent: Thursday, April 30, 2015 13:11:35
Subject: Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

> > So, with this code, we always call qemu-img snapshot for qcow2, which is wrong
> > if the vm is running.
> > (for rbd, zfs, ... no problem).
> > 
> > I think that's why we passed the running flag to pve-storage.
> 
> Ah, yes - the logic is wrong, but fixable.

Or is it the other way around? We need to use qemu "snapshot-drive" if the VM
is running?
confused :-/


Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-04-30 Thread Dietmar Maurer
> > So, with this code, we always call qemu-img snapshot for qcow2, which is wrong
> > if the vm is running.
> > (for rbd, zfs, ... no problem).
> > 
> > I think that's why we passed the running flag to pve-storage.
> 
> Ah, yes - the logic is wrong, but fixable.

Or is it the other way around? We need to use qemu "snapshot-drive" if the VM
is running?
confused :-/



Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-04-30 Thread Dietmar Maurer
> So, with this code, we always call qemu-img snapshot for qcow2, which is wrong
> if the vm is running.
> (for rbd, zfs, ... no problem).
> 
> I think that's why we passed the running flag to pve-storage.

Ah, yes - the logic is wrong, but fixable.



Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-04-30 Thread Alexandre DERUMIER
>>From what I see this case is already handled inside
>>storage_support_snapshop()?

 PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);
 return if !$running;
 vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap);

and

storage_support_snapshop {
...
 if ($volid =~ m/\.(qcow2|qed)$/){
+    $ret = 1;
+ }

So, with this code, we always call qemu-img snapshot for qcow2, which is wrong
if the vm is running.
(for rbd, zfs, ... no problem).

I think that's why we passed the running flag to pve-storage.



----- Original Message -----
From: "dietmar"
To: "aderumier", "Wolfgang Link"
Cc: "pve-devel"
Sent: Thursday, April 30, 2015 12:28:36
Subject: Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

> +    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);
> 
> This seems to be wrong.
> 
> we can't do a qcow2 snapshot with qemu-img if the VM is running, we need to
> use QMP in this case.

From what I see this case is already handled inside storage_support_snapshop()?


Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-04-30 Thread Dietmar Maurer
> +    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);
> 
> This seems to be wrong.
> 
> we can't do a qcow2 snapshot with qemu-img if the VM is running, we need to
> use QMP in this case.

From what I see this case is already handled inside storage_support_snapshop()?



Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-04-30 Thread Dietmar Maurer
> Couldn't we reuse volume_has_feature from storage plugins ?

We can, but this is misleading. It is not a storage feature, it is a feature
of qemu.



Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-04-30 Thread Alexandre DERUMIER
Also,

+    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);

This seems to be wrong.

we can't do a qcow2 snapshot with qemu-img if the VM is running, we need to
use QMP in this case.



----- Original Message -----
From: "Wolfgang Link"
To: "pve-devel"
Sent: Thursday, April 30, 2015 09:47:03
Subject: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

It is better to check whether a VM is running in QemuServer than in Storage.
For the Storage there is no difference whether it is running or not.

Signed-off-by: Wolfgang Link  
--- 
PVE/QemuServer.pm | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-) 

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 708b208..39aff42 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -31,6 +31,8 @@ use PVE::QMPClient;
 use PVE::RPCEnvironment;
 use Time::HiRes qw(gettimeofday);
 
+my $snap_storage = {zfspool => 1, rbd => 1, zfs => 1, sheepdog => 1};
+
 my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
 
 # Note about locking: we use flock on the config file protect
@@ -3777,7 +3779,7 @@ sub qemu_volume_snapshot {
 
     my $running = check_running($vmid);
 
-    return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, $running);
+    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);
 
     return if !$running;
 
@@ -5772,6 +5774,23 @@ my $savevm_wait = sub {
     }
 };
 
+sub storage_support_snapshot {
+    my ($volid, $storecfg) = @_;
+
+    my $storage_name = PVE::Storage::parse_volume_id($volid);
+
+    my $ret = undef;
+    if ($snap_storage->{$storecfg->{ids}->{$storage_name}->{type}} ){
+        $ret = 1;
+    }
+
+    if ($volid =~ m/\.(qcow2|qed)$/){
+        $ret = 1;
+    }
+
+    return $ret;
+}
+
 sub snapshot_create {
     my ($vmid, $snapname, $save_vmstate, $comment) = @_;
 
-- 
2.1.4




Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-04-30 Thread Alexandre DERUMIER
Hi,

does it make sense to define

+my $snap_storage = {zfspool => 1, rbd => 1, zfs => 1, sheepdog => 1};
...
sub storage_support_snapshot()

in QemuServer?

Couldn't we reuse volume_has_feature from the storage plugins?
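
(For illustration, a rough sketch of what that reuse could look like; the
volume_has_feature signature here is recalled from PVE::Storage and should be
treated as an assumption:

    # ask the storage layer whether this volume can be snapshotted
    my $can_snap = PVE::Storage::volume_has_feature($storecfg, 'snapshot',
                                                    $volid, undef, $running);

The feature matrix would then live in each storage plugin instead of a hash
in QemuServer.pm.)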







----- Original Message -----
From: "Wolfgang Link"
To: "pve-devel"
Sent: Thursday, April 30, 2015 09:47:03
Subject: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

It is better to check whether a VM is running in QemuServer than in Storage.
For the Storage there is no difference whether it is running or not.

Signed-off-by: Wolfgang Link  
--- 
PVE/QemuServer.pm | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-) 

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 708b208..39aff42 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -31,6 +31,8 @@ use PVE::QMPClient;
 use PVE::RPCEnvironment;
 use Time::HiRes qw(gettimeofday);
 
+my $snap_storage = {zfspool => 1, rbd => 1, zfs => 1, sheepdog => 1};
+
 my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
 
 # Note about locking: we use flock on the config file protect
@@ -3777,7 +3779,7 @@ sub qemu_volume_snapshot {
 
     my $running = check_running($vmid);
 
-    return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, $running);
+    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);
 
     return if !$running;
 
@@ -5772,6 +5774,23 @@ my $savevm_wait = sub {
     }
 };
 
+sub storage_support_snapshot {
+    my ($volid, $storecfg) = @_;
+
+    my $storage_name = PVE::Storage::parse_volume_id($volid);
+
+    my $ret = undef;
+    if ($snap_storage->{$storecfg->{ids}->{$storage_name}->{type}} ){
+        $ret = 1;
+    }
+
+    if ($volid =~ m/\.(qcow2|qed)$/){
+        $ret = 1;
+    }
+
+    return $ret;
+}
+
 sub snapshot_create {
     my ($vmid, $snapname, $save_vmstate, $comment) = @_;
 
-- 
2.1.4




Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-04-30 Thread Dietmar Maurer

>  # Note about locking: we use flock on the config file protect
> @@ -3777,7 +3779,7 @@ sub qemu_volume_snapshot {
>  
>      my $running = check_running($vmid);
>  
> -    return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, $running);
> +    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);

s/storage_support_snapshop/storage_support_snapshot/

But the name is misleading. The question is whether 'qemu' does the snapshot,
or whether we need to do it ourselves. Maybe something like:
do_snapshots_with_qemu()

>  
>      return if !$running;
>  
> @@ -5772,6 +5774,23 @@ my $savevm_wait = sub {
>      }
>  };
>  
> +sub storage_support_snapshot {
> +    my ($volid, $storecfg) = @_;

I would reorder the arguments:

    my ($storecfg, $volid) = @_;

> +
> +    my $storage_name = PVE::Storage::parse_volume_id($volid);
> +
> +    my $ret = undef;
what for?
> +    if ($snap_storage->{$storecfg->{ids}->{$storage_name}->{type}} ){
> +        $ret = 1;
return 1;
> +    }
> +
> +    if ($volid =~ m/\.(qcow2|qed)$/){
> +        $ret = 1;
return 1;
> +    }
> +
> +    return $ret;
return undef;
> +}
> +
>  sub snapshot_create {
>      my ($vmid, $snapname, $save_vmstate, $comment) = @_;
>  
> -- 
> 2.1.4
> 
> 



[pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-04-30 Thread Wolfgang Link
It is better to check whether a VM is running in QemuServer than in Storage.
For the Storage there is no difference whether it is running or not.

Signed-off-by: Wolfgang Link 
---
 PVE/QemuServer.pm | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 708b208..39aff42 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -31,6 +31,8 @@ use PVE::QMPClient;
 use PVE::RPCEnvironment;
 use Time::HiRes qw(gettimeofday);
 
+my $snap_storage = {zfspool => 1, rbd => 1, zfs => 1, sheepdog => 1};
+
 my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
 
 # Note about locking: we use flock on the config file protect
@@ -3777,7 +3779,7 @@ sub qemu_volume_snapshot {
 
     my $running = check_running($vmid);
 
-    return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, $running);
+    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);
 
     return if !$running;
 
@@ -5772,6 +5774,23 @@ my $savevm_wait = sub {
     }
 };
 
+sub storage_support_snapshot {
+    my ($volid, $storecfg) = @_;
+
+    my $storage_name = PVE::Storage::parse_volume_id($volid);
+
+    my $ret = undef;
+    if ($snap_storage->{$storecfg->{ids}->{$storage_name}->{type}} ){
+        $ret = 1;
+    }
+
+    if ($volid =~ m/\.(qcow2|qed)$/){
+        $ret = 1;
+    }
+
+    return $ret;
+}
+
 sub snapshot_create {
     my ($vmid, $snapname, $save_vmstate, $comment) = @_;
 
-- 
2.1.4




[pve-devel] [PATCH] remove running from Storage and check it in QemuServer

2015-04-30 Thread Wolfgang Link
It is better to check whether a VM is running in QemuServer than in Storage.
For the Storage there is no difference whether it is running or not.

Signed-off-by: Wolfgang Link 
---
 PVE/Storage.pm   | 4 ++--
 PVE/Storage/ISCSIDirectPlugin.pm | 2 +-
 PVE/Storage/LVMPlugin.pm | 2 +-
 PVE/Storage/Plugin.pm| 4 +---
 PVE/Storage/RBDPlugin.pm | 4 +---
 PVE/Storage/SheepdogPlugin.pm| 4 +---
 PVE/Storage/ZFSPoolPlugin.pm | 2 +-
 7 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index b542ee6..92c7d14 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -162,13 +162,13 @@ sub volume_rollback_is_possible {
 }
 
 sub volume_snapshot {
-    my ($cfg, $volid, $snap, $running) = @_;
+    my ($cfg, $volid, $snap) = @_;
 
     my ($storeid, $volname) = parse_volume_id($volid, 1);
     if ($storeid) {
         my $scfg = storage_config($cfg, $storeid);
         my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
-        return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap, $running);
+        return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap);
     } elsif ($volid =~ m|^(/.+)$| && -e $volid) {
         die "snapshot file/device '$volid' is not possible\n";
     } else {
diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index c957ade..763c482 100644
--- a/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/PVE/Storage/ISCSIDirectPlugin.pm
@@ -205,7 +205,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
     die "volume snapshot is not possible on iscsi device";
 }
 
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 1688bb5..19eb78c 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -456,7 +456,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     die "lvm snapshot is not implemented";
 }
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 5b72b07..f119068 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -641,12 +641,10 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
 
-    return 1 if $running;
-
     my $path = $class->filesystem_path($scfg, $volname);
 
     my $cmd = ['/usr/bin/qemu-img', 'snapshot','-c', $snap, $path];
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 2c45a68..878fa16 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -510,9 +510,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
-
-    return 1 if $running;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     my ($vtype, $name, $vmid) = $class->parse_volname($volname);
 
diff --git a/PVE/Storage/SheepdogPlugin.pm b/PVE/Storage/SheepdogPlugin.pm
index 3e2c126..e358f9e 100644
--- a/PVE/Storage/SheepdogPlugin.pm
+++ b/PVE/Storage/SheepdogPlugin.pm
@@ -389,9 +389,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
-
-    return 1 if $running;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
        $class->parse_volname($volname);
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 39fc348..1064869 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -415,7 +415,7 @@ sub volume_size_info {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     $class->zfs_request($scfg, undef, 'snapshot', "$scfg->{pool}/$volname\@$snap");
 }
-- 
2.1.4

