Re: [pve-devel] [PATCH] qmp drive_add : remove backslashes from $drive string

2015-03-03 Thread Dietmar Maurer

  monhost => {
      description => "Monitors daemon ips.",
      type => 'string', format => 'pve-storage-monhost-list'
  },

why?



[pve-devel] [PATCH] fix rpcinfo path

2015-03-03 Thread Wolfgang Link
change rpcinfo path for jessie: the rpcbind package now installs it
at /usr/sbin/rpcinfo instead of /usr/bin/rpcinfo

Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/Storage/NFSPlugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
index 9ba68a7..79a7730 100644
--- a/PVE/Storage/NFSPlugin.pm
+++ b/PVE/Storage/NFSPlugin.pm
@@ -168,7 +168,7 @@ sub check_connection {
 my $server = $scfg->{server};
 
 # test connection to portmapper
-my $cmd = ['/usr/bin/rpcinfo', '-p', $server];
+my $cmd = ['/usr/sbin/rpcinfo', '-p', $server];
 
 eval {
 run_command($cmd, timeout => 2, outfunc => sub {}, errfunc => sub {});
-- 
2.1.4




Re: [pve-devel] [PATCH] support QinQ / vlan stacking

2015-03-03 Thread Dietmar Maurer
 Sure, but a VM attached to a bridge should not see tagged frames by
 default. It should only see untagged frames until we allow it to see
 tagged frames from different VLANs.

Why? AFAIK this has been the default behavior from the beginning.



Re: [pve-devel] [PATCH] support QinQ / vlan stacking

2015-03-03 Thread Stefan Priebe - Profihost AG

On 03.03.2015 at 13:42, Dietmar Maurer wrote:
 Sure, but a VM attached to a bridge should not see tagged frames by
 default. It should only see untagged frames until we allow it to see
 tagged frames from different VLANs.
 
 Why? AFAIK this has been the default behavior from the beginning.

I think it's better to be more secure by default. Also, I know of no
switch or vendor that does it this way.

Normally you won't see tagged traffic on a port by default, so most
users are not used to this behaviour, no?

Stefan



Re: [pve-devel] [PATCH] support QinQ / vlan stacking

2015-03-03 Thread Stefan Priebe - Profihost AG
@dietmar
I think this is a big problem and I never noticed it. Currently a guest
attached to the bridge sees all frames. I thought it sees only untagged
frames.

This means I cannot isolate a guest to only untagged frames. What's your
opinion?

Stefan

On 02.03.2015 at 23:10, Andrew Thrift wrote:
 Hi Stefan,
 
 Yes that is correct.  The tap interface of a VM attached only to vmbr0
 with no vlan specified will see all frames.  
 Individual guests that are connected to vmbr0 with a vlan specified will
 only see frames tagged with that vlan-id.
 
  The use case is that we often attach virtual routers to the parent
  bridge, e.g. vmbr0, so they are able to access multiple client vlans.
  
  This is no different than the previous standard config; attaching
  directly to vmbr0 would allow you to see all frames.
 
 
 Regards,
 
 
 
 Andrew
 
 
 On Mon, Mar 2, 2015 at 8:53 PM, Stefan Priebe - Profihost AG
  s.pri...@profihost.ag wrote:
 
 Hi Andrew,
 
  sorry I lost this mail... Please see reply inline.
 
  On 24.02.2015 at 04:45, Andrew Thrift wrote:
   We have found you need to nest the bridges to get QinQ to work in all
   scenarios. E.g. the above patch will work for MOST scenarios, but if
   you attach a vlan-aware VM to the parent vmbr0 bridge it will cause
   traffic to the VMs to stop, or they will not be able to see the
   tagged frames.
 
  The patch we use only has one other minor change:
 
  -activate_bridge_vlan_slave($bridgevlan, $iface, $tag);
  +activate_bridge_vlan_slave($bridgevlan, $bridge, $tag);
 
  So you mean that if you attach a 2nd VM where the VM itself handles
  the tagged frames instead of the tap device, it won't work? I'm not
  sure if this should work at all.
  
  Because this means somebody can grab tagged traffic inside his VM
  without your knowledge.
 
 Stefan
 
 
   On Sat, Feb 14, 2015 at 9:41 PM, Stefan Priebe s.pri...@profihost.ag wrote:
 
 
   Signed-off-by: Stefan Priebe s.pri...@profihost.ag
  ---
   data/PVE/Network.pm |2 +-
   1 file changed, 1 insertion(+), 1 deletion(-)
 
  diff --git a/data/PVE/Network.pm b/data/PVE/Network.pm
  index 00639f6..97f4033 100644
  --- a/data/PVE/Network.pm
  +++ b/data/PVE/Network.pm
  @@ -323,7 +323,7 @@ sub activate_bridge_vlan {
 
   my @ifaces = ();
    my $dir = "/sys/class/net/$bridge/brif";
   -PVE::Tools::dir_glob_foreach($dir, '((eth|bond)\d+)', sub {
   +PVE::Tools::dir_glob_foreach($dir, '((eth|bond)\d+(\.\d+)?)', sub {
    push @ifaces, $_[0];
    });
 
  --
  1.7.10.4
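
To make the effect of the pattern change concrete, here is a small
standalone check with made-up interface names (the anchoring mimics
what dir_glob_foreach does internally; this is an illustration, not
code from the patch):

#!/usr/bin/perl
use strict;
use warnings;

# example entries as they might appear in /sys/class/net/$bridge/brif
my @ifaces = qw(eth0 bond0 eth0.100 bond1.2005 tap101i0);

my $old = qr/^((eth|bond)\d+)$/;          # pattern before the patch
my $new = qr/^((eth|bond)\d+(\.\d+)?)$/;  # pattern after the patch

for my $iface (@ifaces) {
    printf "%-12s old:%-6s new:%s\n", $iface,
        ($iface =~ $old ? 'match' : '-'),
        ($iface =~ $new ? 'match' : '-');
}

# only the new pattern also matches vlan sub-interfaces like eth0.100,
# which is what the QinQ setup relies on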
 


Re: [pve-devel] Feature: migrateall

2015-03-03 Thread Alexandre DERUMIER
This is what I want, but migrateall as implemented in 3.4 migrates
all VMs regardless of whether the VM is running or not. After running
migrateall a node will be completely free of any VMs whatsoever,
since the feature effectively moves away every deployed VM.

Oh OK, well, the feature was about migrating ALL vms ;)

but I think it's easy to add an option like state:(running|stopped|both)
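
A rough sketch of how such a filter could look (toy data and a made-up
'state' option; the real migrateall loop and status query live elsewhere):

#!/usr/bin/perl
use strict;
use warnings;

# toy stand-ins: in the real code the VM list and status would come
# from the cluster resources API
my %vms = (100 => 'running', 101 => 'stopped', 102 => 'running');
my $state = shift // 'both';    # running|stopped|both

foreach my $vmid (sort { $a <=> $b } keys %vms) {
    next if $state eq 'running' && $vms{$vmid} ne 'running';
    next if $state eq 'stopped' && $vms{$vmid} ne 'stopped';
    print "would migrate VM $vmid ($vms{$vmid})\n";
}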




- Original Message -
From: datanom.net m...@datanom.net
To: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, 4 March 2015 08:12:48
Subject: Re: [pve-devel] Feature: migrateall

On Wed, 4 Mar 2015 05:57:46 +0100 (CET) 
Alexandre DERUMIER aderum...@odiso.com wrote: 

 
 Well, you can migrate all vms (running or stopped).
 
This is what I want, but migrateall as implemented in 3.4 migrates
all VMs regardless of whether the VM is running or not. After running
migrateall a node will be completely free of any VMs whatsoever,
since the feature effectively moves away every deployed VM.

-- 
Hilsen/Regards 
Michael Rasmussen 

Get my public GnuPG keys: 
michael at rasmussen dot cc 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E 
mir at datanom dot net 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C 
mir at miras dot org 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 
-- 
/usr/games/fortune -es says: 
Coding is easy; All you do is sit staring at a terminal until the drops 
of blood form on your forehead. 



Re: [pve-devel] Feature: migrateall

2015-03-03 Thread Michael Rasmussen
On Wed, 4 Mar 2015 05:57:46 +0100 (CET)
Alexandre DERUMIER aderum...@odiso.com wrote:

 
 Well, you can migrate all vms (running or stopped).
 
This is what I want, but migrateall as implemented in 3.4 migrates
all VMs regardless of whether the VM is running or not. After running
migrateall a node will be completely free of any VMs whatsoever,
since the feature effectively moves away every deployed VM.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Coding is easy;  All you do is sit staring at a terminal until the drops
of blood form on your forehead.




Re: [pve-devel] the vmid.fw can't delete issue

2015-03-03 Thread Alexandre DERUMIER
Yes, this is a bug. I guess we should delete the fw config when we delete the
VM?

I think we forgot this ;)

Also, I think it would be great, when cloning a template|vm, to clone
the fw config too.


- Original Message -
From: dietmar diet...@proxmox.com
To: lyt_yudi lyt_y...@icloud.com, pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, 4 March 2015 06:15:19
Subject: Re: [pve-devel] the vmid.fw can't delete issue

 
 from the GUI and API you can delete a VM, but the vmid.fw in 
 /etc/pve/firewall is not deleted 
 can this be fixed? 
 
 Bug id 603: https://bugzilla.proxmox.com/show_bug.cgi?id=603 


Yes, this is a bug. I guess we should delete the fw config when we delete the 
VM? 



Re: [pve-devel] systemd notify/watchdog

2015-03-03 Thread Alexandre DERUMIER
 which could be great for some services, like pvestatd. (which sometimes can 
 freeze with slow/bad storage stats) 

Unfortunately, a restart does not help in that case? 

Maybe not in that case, but I have already seen pvestatd hanging many
times, and a restart was doing the job.



I already played around with that feature, but AFAIK it needs
/dev/watchdog? Or does it work without opening /dev/watchdog? 

I think it's different from the hardware watchdog (for the case of a
system hang).

Here it's more of a software watchdog: services tell systemd that they
are still running (for pvestatd, between each loop, for example),
and if systemd doesn't receive the notify, it restarts the service.


http://0pointer.de/blog/projects/watchdog.html


- Original Message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com, pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, 4 March 2015 06:39:53
Subject: Re: [pve-devel] systemd notify/watchdog

 systemd has a watchdog feature 
 
 http://man7.org/linux/man-pages/man3/sd_watchdog_enabled.3.html 
 
 which could be great for some services, like pvestatd. (which sometimes can 
 freeze with slow/bad storage stats) 

Unfortunately, a restart does not help in that case? 

 The idea is that the application needs to send a notify at regular
 intervals to systemd, 

I already played around with that feature, but AFAIK it needs
/dev/watchdog? Or does it work without opening /dev/watchdog? 

/dev/watchdog can be used only once, and I have other plans for it (ha soft 
fence). 


Re: [pve-devel] the vmid.fw can't delete issue

2015-03-03 Thread lyt_yudi

 On 4 Mar 2015, at 15:20, Alexandre DERUMIER aderum...@odiso.com wrote:
 
 Also, I think it would be great, when cloning a template|vm, to clone
 the fw config too.

Oh, I don't think so; maybe the cloned VM doesn't belong to the same
user (customer).



[pve-devel] the vmid.fw can't delete issue

2015-03-03 Thread lyt_yudi
hi, all

from the GUI and API you can delete a VM, but the vmid.fw in 
/etc/pve/firewall is not deleted
can this be fixed?

Bug id 603: https://bugzilla.proxmox.com/show_bug.cgi?id=603

Thanks, Best Regards.
 

lyt_yudi
lyt_y...@icloud.com







[pve-devel] Feature: migrateall

2015-03-03 Thread Michael Rasmussen
Hi all,

IMHO the new feature migrateall is useless. What is the point of
migrating away stopped VMs and CTs?

To be useful to me, the point of this feature should be to migrate
away all running VMs and CTs for the purpose of doing maintenance
work on a node.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Conscious is when you are aware of something and conscience is when you
wish you weren't.




Re: [pve-devel] pve kernel 2.6.32

2015-03-03 Thread Dietmar Maurer
 Is pve-kernel-2.6.32-37-pve_2.6.32-148_amd64.deb from testing based on
 a newer kernel from Redhat than pve-kernel-2.6.32-37-pve from

No. For details see:

https://wiki.openvz.org/Download/kernel/rhel6-testing/042stab105.6



[pve-devel] systemd notify/watchdog

2015-03-03 Thread Alexandre DERUMIER
systemd has a watchdog feature 

http://man7.org/linux/man-pages/man3/sd_watchdog_enabled.3.html

which could be great for some services, like pvestatd. (which sometimes can 
freeze with slow/bad storage stats)


The idea is that the application needs to send a notify at regular
intervals to systemd,

http://www.freedesktop.org/software/systemd/man/sd_notify.html
https://lists.debian.org/debian-ctte/2013/12/msg00230.html



If systemd doesn't receive the notify after some time, it can restart the service.
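
For illustration, a minimal sketch (not PVE code) of a daemon loop
speaking the sd_notify(3) protocol, assuming a unit with Type=notify
and WatchdogSec= set:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::UNIX;
use Socket qw(SOCK_DGRAM);

# send one sd_notify(3) message; a leading '@' in $NOTIFY_SOCKET means
# an abstract socket and maps to a leading NUL byte
sub sd_notify {
    my ($msg) = @_;
    my $path = $ENV{NOTIFY_SOCKET} or return;  # not started by systemd
    $path =~ s/^@/\0/;
    my $sock = IO::Socket::UNIX->new(Type => SOCK_DGRAM, Peer => $path)
        or return;
    $sock->send($msg);
}

sd_notify("READY=1");            # startup complete
while (1) {
    # ... do the periodic work here (the part that can hang) ...
    sd_notify("WATCHDOG=1");     # tell systemd we are still alive
    sleep 10;                    # keep well below WatchdogSec
}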


Re: [pve-devel] the vmid.fw can't delete issue

2015-03-03 Thread Dietmar Maurer
 
   from the GUI and API you can delete a VM, but the vmid.fw in
   /etc/pve/firewall is not deleted
   can this be fixed?
 
   Bug id 603: https://bugzilla.proxmox.com/show_bug.cgi?id=603


Yes, this is a bug. I guess we should delete the fw config when we delete the
VM?
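
A minimal sketch of such a cleanup (hypothetical helper; only the
/etc/pve/firewall/<vmid>.fw path is from this thread, where exactly it
would hook into the VM destroy code is an assumption):

# drop the per-VM firewall config when the VM is destroyed
sub remove_vm_fwconf {
    my ($vmid) = @_;
    my $fwconf = "/etc/pve/firewall/$vmid.fw";
    if (-f $fwconf) {
        unlink($fwconf) or warn "could not remove '$fwconf': $!\n";
    }
}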



Re: [pve-devel] systemd notify/watchdog

2015-03-03 Thread Dietmar Maurer
 which could be great for some services, like pvestatd. (which sometimes can
 freeze with slow/bad storage stats)

If you want, I can upload the jessie packages today - those already include
.service definition files.

Then you can test those features directly ;-)



Re: [pve-devel] the vmid.fw can't delete issue

2015-03-03 Thread lyt_yudi

 On 4 Mar 2015, at 13:15, Dietmar Maurer diet...@proxmox.com wrote:
 
 Yes, this is a bug. I guess we should delete the fw config when we delete the
 VM?

thanks very much!



Re: [pve-devel] [IB#1034892] Guest disk data security

2015-03-03 Thread Dietmar Maurer
First, thanks for the patch!

Would you mind reformatting the patch using 'git format-patch'?

As described in http://pve.proxmox.com/wiki/Developer_Documentation

Note: with a correct --signoff 


 attached please find updated patches that fix the LV cleaning routines
 for us.
 
 We use cstream instead of dd to see the cleaning progress and to limit
 the I/O load. We activate the LV before cleaning (without this,
 dd/cstream would fail and create a plain file if the LV device is
 inactive on the node, e.g. after the VM was shut down).
 
 Please verify and consider fixing upstream.



Re: [pve-devel] systemd notify/watchdog

2015-03-03 Thread Dietmar Maurer

 systemd has a watchdog feature 
 
 http://man7.org/linux/man-pages/man3/sd_watchdog_enabled.3.html
 
 which could be great for some services, like pvestatd. (which sometimes can
 freeze with slow/bad storage stats)

Unfortunately, a restart does not help in that case?

 The idea is that the application needs to send a notify at regular
 intervals to systemd,

I already played around with that feature, but AFAIK it needs
/dev/watchdog? Or does it work without opening /dev/watchdog?

/dev/watchdog can be used only once, and I have other plans for it (ha soft
fence).



[pve-devel] [IB#1034892] Guest disk data security

2015-03-03 Thread IB Development Team
Hi,

Following the topic

http://forum.proxmox.com/threads/20137-Guest-disk-data-security?p=102704#post102704

attached please find updated patches that fix the LV cleaning routines
for us.

We use cstream instead of dd to see the cleaning progress and to limit
the I/O load. We activate the LV before cleaning (without this,
dd/cstream would fail and create a plain file if the LV device is
inactive on the node, e.g. after the VM was shut down).

Please verify and consider fixing upstream.

-- 
Regards,
Pawel Boguslawski

IB Development Team
https://dev.ib.pl/



--- /usr/share/perl5/PVE/Storage/LVMPlugin.pm	2014-10-25 09:48:23.0 +0200
+++ /usr/share/perl5/PVE/Storage/LVMPlugin.pm-ib	2014-11-13 12:42:57.0 +0100
@@ -193,6 +193,10 @@
 	description => "Zero-out data when removing LVs.",
 	type => 'boolean',
 	},
+	saferemove_throughput => {
+	description => "Wipe throughput (cstream -t parameter value).",
+	type => 'string',
+	},
 };
 }
 
@@ -203,6 +207,7 @@
 	shared => { optional => 1 },
 	disable => { optional => 1 },
 saferemove => { optional => 1 },
+saferemove_throughput => { optional => 1 },
 	content => { optional => 1 },
 base => { fixed => 1, optional => 1 },
 };
@@ -290,31 +295,49 @@
 my ($class, $storeid, $scfg, $volname, $isBase) = @_;
 
 my $vg = $scfg->{vgname};
-
+
 # we need to zero out LVM data for security reasons
 # and to allow thin provisioning
 
 my $zero_out_worker = sub {
-	print "zero-out data on image $volname\n";
-	my $cmd = ['dd', "if=/dev/zero", "of=/dev/$vg/del-$volname", "bs=1M"];
-	eval { run_command($cmd, errmsg => "zero out failed"); };
+	print "zero-out data on image $volname (/dev/$vg/del-$volname)\n";
+
+	# wipe throughput up to 10MB/s by default; may be overwritten with saferemove_throughput
+	my $throughput = '-10485760';
+	if ($scfg->{saferemove_throughput}) {
+		$throughput = $scfg->{saferemove_throughput};
+	}
+
+	my $cmd = [
+		'/usr/bin/cstream',
+		'-i', '/dev/zero',
+		'-o', "/dev/$vg/del-$volname",
+		'-T', '10',
+		'-v', '1',
+		'-b', '1048576',
+		'-t', $throughput
+	];
+	eval { run_command($cmd, errmsg => "zero out finished (note: 'No space left on device' is ok here)"); };
 	warn $@ if $@;
 
 	$class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
 	my $cmd = ['/sbin/lvremove', '-f', "$vg/del-$volname"];
 	run_command($cmd, errmsg => "lvremove '$vg/del-$volname' error");
 	});
-	print "successfully removed volume $volname\n";
+	print "successfully removed volume $volname ($vg/del-$volname)\n";
 };
 
+my $cmd = ['/sbin/lvchange', '-aly', "$vg/$volname"];
+run_command($cmd, errmsg => "can't activate LV '$vg/$volname' to zero-out its data");
+
 if ($scfg->{saferemove}) {
 	# avoid long running task, so we only rename here
-	my $cmd = ['/sbin/lvrename', $vg, $volname, "del-$volname"];
+	$cmd = ['/sbin/lvrename', $vg, $volname, "del-$volname"];
 	run_command($cmd, errmsg => "lvrename '$vg/$volname' error");
 	return $zero_out_worker;
 } else {
 	my $tmpvg = $scfg->{vgname};
-	my $cmd = ['/sbin/lvremove', '-f', "$tmpvg/$volname"];
+	$cmd = ['/sbin/lvremove', '-f', "$tmpvg/$volname"];
 	run_command($cmd, errmsg => "lvremove '$tmpvg/$volname' error");
 }
 
--- /usr/share/perl5/PVE/Storage.pm	2014-09-10 14:21:47.0 +0200
+++ /usr/share/perl5/PVE/Storage.pm-ib	2014-10-25 16:58:51.226463323 +0200
@@ -577,7 +577,7 @@
 		}
 	}
 	}
-	my $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase);
+	$cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase);
 });
 
 return if !$cleanup_worker;


Re: [pve-devel] [PATCH] support QinQ / vlan stacking

2015-03-03 Thread Stefan Priebe - Profihost AG

On 03.03.2015 at 12:38, Dietmar Maurer wrote:
 On March 3, 2015 at 9:48 AM Stefan Priebe - Profihost AG
 s.pri...@profihost.ag wrote:


 @dietmar
 I think this is a big problem and I never noticed it. Currently a guest
 attached to the bridge sees all frames. I thought it sees only untagged
 frames.

 This means I cannot isolate a guest to only untagged frames. What's your
 opinion?
 
 The purpose of vlans is to filter tagged frames (not untagged frames) ...
 Maybe you can ask (or write a feature request) on the kernel/network list?
 Maybe OVS supports that?

Sure, but a VM attached to a bridge should not see tagged frames by
default. It should only see untagged frames until we allow it to see
tagged frames from different VLANs.

Currently you cannot forbid listening to tagged traffic inside a VM.
This shouldn't be the default.

Stefan


Re: [pve-devel] [PATCH] support QinQ / vlan stacking

2015-03-03 Thread Dietmar Maurer
 On March 3, 2015 at 9:48 AM Stefan Priebe - Profihost AG
 s.pri...@profihost.ag wrote:
 
 
 @dietmar
  I think this is a big problem and I never noticed it. Currently a guest
  attached to the bridge sees all frames. I thought it sees only untagged
  frames.
  
  This means I cannot isolate a guest to only untagged frames. What's your
  opinion?

The purpose of vlans is to filter tagged frames (not untagged frames) ...
Maybe you can ask (or write a feature request) on the kernel/network list?
Maybe OVS supports that?



[pve-devel] [PATCH_V2] Bug Fix 602

2015-03-03 Thread Wolfgang Link
now zfs will wait up to 5 sec if the error msg is 'dataset is busy'

Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/Storage/ZFSPoolPlugin.pm |   28 ++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 5cbd1b2..0f666b0 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -166,7 +166,16 @@ sub zfs_request {
 $msg .= "$line\n";
 };
 
-run_command($cmd, outfunc => $output, timeout => $timeout);
+if ($method eq "destroy") {
+
+   eval { run_command($cmd, errmsg => 1, outfunc => $output, timeout => $timeout); };
+
+   if (my $err = $@) {
+   return "ERROR $err";
+   }
+} else {
+   run_command($cmd, outfunc => $output, timeout => $timeout);
+}
 
 return $msg;
 }
@@ -291,7 +300,22 @@ sub zfs_create_zvol {
 sub zfs_delete_zvol {
 my ($class, $scfg, $zvol) = @_;
 
-$class->zfs_request($scfg, undef, 'destroy', '-r', "$scfg->{pool}/$zvol");
+my $ret = $class->zfs_request($scfg, undef, 'destroy', '-r', "$scfg->{pool}/$zvol");
+
+if ($ret =~ m/^ERROR (.*)/) {
+
+   if ($ret =~ m/.*dataset is busy.*/) {
+
+   for (my $i = 0; $ret && $i < 5; $i++) {
+   sleep(1);
+   $ret = $class->zfs_request($scfg, undef, 'destroy', '-r', "$scfg->{pool}/$zvol");
+   }
+
+   die $ret if $ret;
+   } else {
+   die $ret;
+   }
+}
 }
 
 sub zfs_list_zvol {
-- 
1.7.10.4
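
For clarity, the same retry idea as a standalone helper (illustration
only, not part of the patch):

# retry a code block up to $tries times while it keeps failing with
# 'dataset is busy'; rethrow any other error immediately
sub retry_while_busy {
    my ($tries, $code) = @_;
    my $err;
    for (1 .. $tries) {
        eval { $code->() };
        $err = $@;
        return if !$err;                        # success
        die $err if $err !~ /dataset is busy/;  # fail fast on other errors
        sleep(1);
    }
    die $err;  # still busy after all retries
}

# usage (hypothetical): retry_while_busy(5, sub { zfs_destroy($zvol) });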

