Re: [pve-devel] GUI for DHCP

2015-02-13 Thread Dietmar Maurer
 Another, cleaner way could be to configure a DHCP relay on my router (which
 is the gateway for each VLAN) that forwards DHCP requests to the Proxmox
 hosts (define all Proxmox hosts in the DHCP relay).
 (In this case, all Proxmox hosts need to be able to reply to all DHCP
 requests of all VMs.)
 
 But the problem is that this works only with one cluster.

It also seems that IPv6 SLAAC solves the problem. With RDNSS you can get the
IPv6 address, gateway, and DNS server. And this does not have any problems
with VLANs, because it is handled by the router.

Is that correct?
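
For reference, the router-side configuration for SLAAC plus RDNSS is only a
few lines, e.g. with radvd; a minimal sketch (interface name, prefix and DNS
address are placeholders, not taken from this thread):

    interface vmbr0v100 {
        AdvSendAdvert on;
        prefix 2001:db8:100::/64 {
            AdvOnLink on;
            AdvAutonomous on;
        };
        RDNSS 2001:db8:100::53 {
        };
    };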



[pve-devel] ZFS tool for asynchronous replica

2015-02-13 Thread Wolfgang Link

Does anybody know an open source tool for asynchronous replication with ZFS?
I found this one, but it seems to be no longer active:
http://www.bolthole.com/solaris/zrep/
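
For reference, the mechanism such a tool automates is an incremental snapshot
send/receive; a minimal sketch (dataset and host names are placeholders):

    zfs snapshot tank/vm-100-disk-1@rep2
    zfs send -i tank/vm-100-disk-1@rep1 tank/vm-100-disk-1@rep2 \
        | ssh backup-host zfs receive -F tank/vm-100-disk-1

The tool's job is mainly to rotate the snapshots and keep track of which one
was last received on the other side.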




Re: [pve-devel] GUI for DHCP

2015-02-13 Thread Alexandre DERUMIER
It also seems that IPv6 SLAAC solves the problem. With RDNSS you can get the
IPv6 address, gateway, and DNS server. And this does not have any problems
with VLANs, because it is handled by the router.

Is that correct?

Yes, with IPv6 it should work fine without a DHCP server.

(But I need it for IPv4 ;)

Another idea: I don't know if it's possible to configure the network through
the qemu agent?


- Original message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: datanom.net m...@datanom.net, pve-devel pve-devel@pve.proxmox.com
Sent: Friday, 13 February 2015 09:44:35
Subject: Re: [pve-devel] GUI for DHCP

 Another, cleaner way could be to configure a DHCP relay on my router (which
 is the gateway for each VLAN) that forwards DHCP requests to the Proxmox
 hosts (define all Proxmox hosts in the DHCP relay).
 (In this case, all Proxmox hosts need to be able to reply to all DHCP
 requests of all VMs.)
 
 But the problem is that this works only with one cluster.

It also seems that IPv6 SLAAC solves the problem. With RDNSS you can get the
IPv6 address, gateway, and DNS server. And this does not have any problems
with VLANs, because it is handled by the router.

Is that correct?



Re: [pve-devel] GUI for DHCP

2015-02-13 Thread Daniel Hunsaker
 I don't know if it's technically possible to use the same IP on each
 host per VLAN (for the DHCP responses to that host's VMs),

Sounds like AnyCast to me...

On Fri Feb 13 2015 at 3:16:30 AM Dietmar Maurer diet...@proxmox.com wrote:

  It also seems that IPv6 SLAAC solves the problem. With RDNSS you can get
  the IPv6 address, gateway, and DNS server. And this does not have any
  problems with VLANs, because it is handled by the router.
  
  Is that correct?
 
  Yes, with IPv6 it should work fine without a DHCP server.
 
  (But I need it for IPv4 ;)
 
  Another idea: I don't know if it's possible to configure the network
  through the qemu agent?

 AFAIK that is currently not possible.

 I also wonder if we can use zeroconf somehow? (IPv4 link-local addresses:
 169.254.0.0/16)




Re: [pve-devel] GUI for DHCP

2015-02-13 Thread Dietmar Maurer
 It also seems that IPv6 SLAAC solves the problem. With RDNSS you can get the
 IPv6 address, gateway, and DNS server. And this does not have any problems
 with VLANs, because it is handled by the router.
 
 Is that correct?
 
 Yes, with IPv6 it should work fine without a DHCP server.
 
 (But I need it for IPv4 ;)
 
 Another idea: I don't know if it's possible to configure the network through
 the qemu agent?

AFAIK that is currently not possible.

I also wonder if we can use zeroconf somehow? (IPv4 link-local addresses:
169.254.0.0/16)



Re: [pve-devel] GUI for DHCP

2015-02-13 Thread Alexandre DERUMIER
Or maybe: create a dummy interface for the DHCP server IP, and plug it into
the bridge.

I don't know whether it could communicate with the VM tap interfaces, and
whether the traffic would stay off the physical Ethernet interface.


- Original message -
From: aderumier aderum...@odiso.com
To: dietmar diet...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, 13 February 2015 09:23:46
Subject: Re: [pve-devel] GUI for DHCP

 Any idea about this?
 
Sorry, no idea.

Another, cleaner way could be to configure a DHCP relay on my router (which
is the gateway for each VLAN) that forwards DHCP requests to the Proxmox
hosts (define all Proxmox hosts in the DHCP relay).
(In this case, all Proxmox hosts need to be able to reply to all DHCP
requests of all VMs.)

But the problem is that this works only with one cluster.



- Original message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com, datanom.net m...@datanom.net
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, 13 February 2015 06:23:09
Subject: Re: [pve-devel] GUI for DHCP

 On February 12, 2015 at 7:53 PM Alexandre DERUMIER aderum...@odiso.com
 wrote:
 
 
 One question I also have is:
 
 do we install a dnsmasq on each Proxmox host of a cluster?

Yes.

 I'm using a lot of VLANs with a /24 each, in a 16-node Proxmox cluster,
 
 which means that I need 16 IPs per VLAN to manage DHCP.
 
 
 I don't know if it's technically possible to use the same IP on each host
 per VLAN (for the DHCP responses to that host's VMs),
 so there would be no need to care (I think?) about ARP on the network.
 
 Or maybe filter ARP for this IP so it does not leak outside.
 
 Any idea about this?

Sorry, no idea.



Re: [pve-devel] GUI for DHCP

2015-02-13 Thread Alexandre DERUMIER
 Any idea about this?

Sorry, no idea.

Another, cleaner way could be to configure a DHCP relay on my router (which
is the gateway for each VLAN) that forwards DHCP requests to the Proxmox
hosts (define all Proxmox hosts in the DHCP relay).
(In this case, all Proxmox hosts need to be able to reply to all DHCP
requests of all VMs.)

But the problem is that this works only with one cluster.
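
For illustration, with ISC dhcrelay this would look roughly like the
following (interface and host names are placeholders, not a tested setup):

    # on the router, relay client requests from each VLAN gateway interface
    # to every Proxmox node that runs a DHCP server
    dhcrelay -i vlan100 -i vlan200 pve1.example.com pve2.example.com pve3.example.com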



- Original message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com, datanom.net m...@datanom.net
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, 13 February 2015 06:23:09
Subject: Re: [pve-devel] GUI for DHCP

 On February 12, 2015 at 7:53 PM Alexandre DERUMIER aderum...@odiso.com
 wrote:
 
 
 One question I also have is:
 
 do we install a dnsmasq on each Proxmox host of a cluster?

Yes.

 I'm using a lot of VLANs with a /24 each, in a 16-node Proxmox cluster,
 
 which means that I need 16 IPs per VLAN to manage DHCP.
 
 
 I don't know if it's technically possible to use the same IP on each host
 per VLAN (for the DHCP responses to that host's VMs),
 so there would be no need to care (I think?) about ARP on the network.
 
 Or maybe filter ARP for this IP so it does not leak outside.
 
 Any idea about this?

Sorry, no idea.


Re: [pve-devel] GUI for DHCP

2015-02-13 Thread Alexandre DERUMIER
AFAIK that is currently not possible.

Yes, the current qemu agent only allows reading and writing files. It could
work, but it's difficult to get something working with any guest.
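
Just to sketch what I mean (untested; VM id and paths are placeholders): the
stock agent's guest-file-* commands can be driven over the qga socket, e.g.

    # open a file inside the guest for writing, via the guest agent socket
    echo '{"execute":"guest-file-open","arguments":{"path":"/etc/network/interfaces","mode":"w"}}' \
        | socat - UNIX-CONNECT:/var/run/qemu-server/100.qga
    # then guest-file-write (base64 payload) and guest-file-close with the
    # returned handle

But the guest still has to notice and apply the new file itself, which is why
this is hard to make work for arbitrary guests.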

I have also found a qemu-agent re-implementation in Python,
https://github.com/xolox/python-negotiator,
which allows custom script and command execution.

(I think it would also be possible to hack the official qemu guest agent,
but that would mean users need a special Proxmox qemu guest agent.)


I also wonder if we can use zeroconf somehow? (IPv4 link-local addresses:
169.254.0.0/16)

I don't think that can work; it's really only for that specific range,
169.254.0.0/16.



- Original message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: datanom.net m...@datanom.net, pve-devel pve-devel@pve.proxmox.com
Sent: Friday, 13 February 2015 11:16:07
Subject: Re: [pve-devel] GUI for DHCP

 It also seems that IPv6 SLAAC solves the problem. With RDNSS you can get the
 IPv6 address, gateway, and DNS server. And this does not have any problems
 with VLANs, because it is handled by the router.
 
 Is that correct?
 
 Yes, with IPv6 it should work fine without a DHCP server.
 
 (But I need it for IPv4 ;)
 
 Another idea: I don't know if it's possible to configure the network through
 the qemu agent?

AFAIK that is currently not possible.

I also wonder if we can use zeroconf somehow? (IPv4 link-local addresses:
169.254.0.0/16)


Re: [pve-devel] GUI for DHCP

2015-02-13 Thread Dietmar Maurer
 (I think it would also be possible to hack the official qemu guest agent,
 but that would mean users need a special Proxmox qemu guest agent.)

I guess this would only make sense if the qemu people accept such patches
upstream. Maybe it is worth asking on the qemu list for opinions.



Re: [pve-devel] [PATCH] ZFSPoolPlugin: Added the ability to use nested ZVOLs

2015-02-13 Thread Pablo Ruiz
Hi Wolfgang,

I think you are confusing (probably due to my really short description of
the issue in the original email [my fault]) using an iSCSI storage entry in
storage.cfg with using iSCSI to mount the block device where a local ZFS
pool is stored. In our case, we have a couple of those, and we mount them
only on one node of the cluster at a time.

Yes, there may be other ways to accomplish the same thing (i.e. avoiding an
accidental import of a pool by Proxmox), but most of them rely on
scripting and event handling, which seems more risky (think of script
failures, race conditions, etc.) than having a clear override option in
storage.cfg.

Just my two cents.
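
To make it concrete, what I have in mind is nothing more than an extra line
in storage.cfg; the option name below is only an illustration, not an
agreed-upon interface:

    zfspool: backup-pool
            pool tank
            content images,rootdir
            autoimport no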



On Fri, Feb 13, 2015 at 7:10 AM, Wolfgang wolfg...@linksystems.org wrote:

 There is no need for a flag to disable auto-import, because first, you can
 disable the storage in storage.cfg, and then it will not be activated.
 Second, a pool with an iSCSI export should not be in storage.cfg.
 On 12.02.2015 21:25, Pablo Ruiz pablo.r...@gmail.com wrote:

 Hi,

 IMHO, I see no reason not to default to the most common case (i.e.
 auto-importing) if there's a way to override it, and such a way is
 somewhat documented. ;)

 On Thu, Feb 12, 2015 at 8:35 PM, Adrian Costin adr...@goat.fish wrote:


 AFAIK having a setting to control whether to auto-import the pool
 would be a plus, since in some situations the import/export of the
 pool is controlled by other means, and an accidental import of the pool
 may be a destructive action (i.e. when the pool comes from a shared medium
 like iSCSI, and thus should not be mounted by two nodes at the same time).


 I agree. Should I add another parameter for this? If yes, should it
 default to auto-import, or not?


 Best regards,
 Adrian Costin







Re: [pve-devel] ZFS tool for asynchronous replica

2015-02-13 Thread Lindsay Mathieson
On Fri, 13 Feb 2015 01:44:20 PM Pablo Ruiz wrote:
 I am using this same one in production on a few machines without an issue.
 Also, searching around Google you will find a port to bash instead of ksh
 (which in fact requires changing no more than 10 lines).


Is it safe to use with KVM images? Is the replica crash-safe?


[pve-devel] [PATCH] unplug scsi controller if no more disk exist

2015-02-13 Thread Alexandre Derumier
We need to remove the SCSI controller, because otherwise live migration will
crash: on the migration target node, we start the VM without the controller
if no disk exists.

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |   31 +--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 032cfb0..cd9b09f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3337,15 +3337,17 @@ sub vm_deviceunplug {
 qemu_devicedelverify($vmid, $deviceid);
 qemu_drivedel($vmid, $deviceid);

-} elsif ($deviceid =~ m/^(lsi)(\d+)$/) {
+} elsif ($deviceid =~ m/^(scsihw)(\d+)$/) {
 
qemu_devicedel($vmid, $deviceid);
+   qemu_devicedelverify($vmid, $deviceid);
 
 } elsif ($deviceid =~ m/^(scsi)(\d+)$/) {
 
 qemu_devicedel($vmid, $deviceid);
 qemu_drivedel($vmid, $deviceid);
-
+   qemu_deletescsihw($conf, $vmid, $deviceid);
+
 } elsif ($deviceid =~ m/^(net)(\d+)$/) {
 
 qemu_devicedel($vmid, $deviceid);
@@ -3459,6 +3461,31 @@ sub qemu_findorcreatescsihw {
 return 1;
 }
 
+sub qemu_deletescsihw {
+my ($conf, $vmid, $opt) = @_;
+
+my $device = parse_drive($opt, $conf->{$opt});
+
+my $maxdev = ($conf->{scsihw} && ($conf->{scsihw} !~ m/^lsi/)) ? 256 : 7;
+my $controller = int($device->{index} / $maxdev);
+
+my $devices_list = vm_devices_list($vmid);
+foreach my $opt (keys %{$devices_list}) {
+   if (PVE::QemuServer::valid_drivename($opt)) {
+   my $drive = PVE::QemuServer::parse_drive($opt, $conf->{$opt});
+   if($drive->{interface} eq 'scsi' && $drive->{index} < (($maxdev-1)*($controller+1))) {
+       return 1;
+   }
+   }
+}
+
+my $scsihwid="scsihw$controller";
+
+vm_deviceunplug($vmid, $conf, $scsihwid);
+
+return 1;
+}
+
 sub qemu_add_pci_bridge {
 my ($storecfg, $conf, $vmid, $device) = @_;
 
-- 
1.7.10.4



[pve-devel] qemu-server : unplug scsi controller if no more disk exist

2015-02-13 Thread Alexandre Derumier
We need to remove the SCSI controller if we have removed all the disks
attached to it.

Otherwise, live migration will break.



[pve-devel] qemu-server : move global iothread option as drive option

2015-02-13 Thread Alexandre Derumier
We can create one iothread per drive, which gives us better performance.
This way it is also possible to enable or disable the iothread per drive.
Hotplug of iothreads is also working fine.
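
For example, with the per-drive flag from the follow-up patches a drive would
be configured roughly like this (VM id and volume names are placeholders):

    qm set 100 -virtio0 local:vm-100-disk-1,iothread=yes
    qm set 100 -virtio1 local:vm-100-disk-2,iothread=no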





[pve-devel] [PATCH 1/2] add virtio iothread option

2015-02-13 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |   23 ++-
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index cd9b09f..a811064 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -152,12 +152,6 @@ mkdir $lock_dir;
 my $pcisysfs = /sys/bus/pci;
 
 my $confdesc = {
-iothread => {
-   optional => 1,
-   type => 'boolean',
-   description => "Enable iothread dataplane.",
-   default => 0,
-},
 onboot => {
    optional => 1,
    type => 'boolean',
@@ -571,7 +565,7 @@ PVE::JSONSchema::register_standard_option("pve-qm-sata", $satadesc);
 my $virtiodesc = {
 optional => 1,
 type => 'string', format => 'pve-qm-drive',
-typetext => '[volume=]volume,] [,media=cdrom|disk] [,cyls=c,heads=h,secs=s[,trans=t]] [,snapshot=on|off] [,cache=none|writethrough|writeback|unsafe|directsync] [,format=f] [,backup=yes|no] [,rerror=ignore|report|stop] [,werror=enospc|ignore|report|stop] [,aio=native|threads]  [,discard=ignore|on]',
+typetext => '[volume=]volume,] [,media=cdrom|disk] [,cyls=c,heads=h,secs=s[,trans=t]] [,snapshot=on|off] [,cache=none|writethrough|writeback|unsafe|directsync] [,format=f] [,backup=yes|no] [,rerror=ignore|report|stop] [,werror=enospc|ignore|report|stop] [,aio=native|threads]  [,discard=ignore|on] [,iothread=no|yes]',
 description => "Use volume as VIRTIO hard disk (n is 0 to " . ($MAX_VIRTIO_DISKS - 1) . ").",
 };
 PVE::JSONSchema::register_standard_option("pve-qm-virtio", $virtiodesc);
@@ -940,7 +934,7 @@ my $format_size = sub {
 # ideX = [volume=]volume-id[,media=d][,cyls=c,heads=h,secs=s[,trans=t]]
 #[,snapshot=on|off][,cache=on|off][,format=f][,backup=yes|no]
 #[,rerror=ignore|report|stop][,werror=enospc|ignore|report|stop]
-#[,aio=native|threads][,discard=ignore|on]
+#[,aio=native|threads][,discard=ignore|on][,iothread=no|yes]
 
 sub parse_drive {
 my ($key, $data) = @_;
@@ -961,7 +955,7 @@ sub parse_drive {
 foreach my $p (split (/,/, $data)) {
next if $p =~ m/^\s*$/;
 
-   if ($p =~ m/^(file|volume|cyls|heads|secs|trans|media|snapshot|cache|format|rerror|werror|backup|aio|bps|mbps|mbps_max|bps_rd|mbps_rd|mbps_rd_max|bps_wr|mbps_wr|mbps_wr_max|iops|iops_max|iops_rd|iops_rd_max|iops_wr|iops_wr_max|size|discard)=(.+)$/) {
+   if ($p =~ m/^(file|volume|cyls|heads|secs|trans|media|snapshot|cache|format|rerror|werror|backup|aio|bps|mbps|mbps_max|bps_rd|mbps_rd|mbps_rd_max|bps_wr|mbps_wr|mbps_wr_max|iops|iops_max|iops_rd|iops_rd_max|iops_wr|iops_wr_max|size|discard|iothread)=(.+)$/) {
my ($k, $v) = ($1, $2);
 
$k = 'file' if $k eq 'volume';
@@ -1003,6 +997,7 @@ sub parse_drive {
 return undef if $res->{backup} && $res->{backup} !~ m/^(yes|no)$/;
 return undef if $res->{aio} && $res->{aio} !~ m/^(native|threads)$/;
 return undef if $res->{discard} && $res->{discard} !~ m/^(ignore|on)$/;
+return undef if $res->{iothread} && $res->{iothread} !~ m/^(no|yes)$/;
 
 return undef if $res->{mbps_rd} && $res->{mbps};
 return undef if $res->{mbps_wr} && $res->{mbps};
@@ -1050,7 +1045,7 @@ sub print_drive {
 my ($vmid, $drive) = @_;
 
 my $opts = '';
-foreach my $o (@qemu_drive_options, 'mbps', 'mbps_rd', 'mbps_wr', 'mbps_max', 'mbps_rd_max', 'mbps_wr_max', 'backup') {
+foreach my $o (@qemu_drive_options, 'mbps', 'mbps_rd', 'mbps_wr', 'mbps_max', 'mbps_rd_max', 'mbps_wr_max', 'backup', 'iothread') {
	$opts .= ",$o=$drive->{$o}" if $drive->{$o};
 }
 
@@ -1148,7 +1143,7 @@ sub print_drivedevice_full {
 if ($drive->{interface} eq 'virtio') {
	my $pciaddr = print_pci_addr("$drive->{interface}$drive->{index}", $bridges);
	$device = "virtio-blk-pci,drive=drive-$drive->{interface}$drive->{index},id=$drive->{interface}$drive->{index}$pciaddr";
-	$device .= ",iothread=iothread0" if $conf->{iothread};
+	$device .= ",iothread=iothread-$drive->{interface}$drive->{index}" if $drive->{iothread} && $drive->{iothread} eq 'yes';
 } elsif ($drive->{interface} eq 'scsi') {
	$maxdev = ($conf->{scsihw} && ($conf->{scsihw} !~ m/^lsi/)) ? 256 : 7;
	my $controller = int($drive->{index} / $maxdev);
@@ -2672,8 +2667,6 @@ sub config_to_command {
	push @$cmd, '-smbios', "type=1,$conf->{smbios1}";
 }
 
-push @$cmd, '-object', "iothread,id=iothread0" if $conf->{iothread};
-
 if ($q35) {
# the q35 chipset support native usb2, so we enable usb controller
# by default for this machine type
@@ -3100,6 +3093,10 @@ sub config_to_command {
}
}
 
+   if($drive->{interface} eq 'virtio'){
+       push @$cmd, '-object', "iothread,id=iothread-$ds" if $drive->{iothread} && $drive->{iothread} eq 'yes';
+   }
+
 if ($drive->{interface} eq 'scsi') {
 
my $maxdev = ($scsihw !~ m/^lsi/) ? 256 : 7;
-- 
1.7.10.4


Re: [pve-devel] ZFS tool for asynchronous replica

2015-02-13 Thread Pablo Ruiz
I am using this same one in production on a few machines without an issue.
Also, searching around Google you will find a port to bash instead of ksh
(which in fact requires changing no more than 10 lines).

Sometimes when software has no recent releases, it does not mean it is
unmaintained, but rather that it requires no further changes to do its duty
as expected. ;)
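
If I recall the usage correctly, the basic workflow is roughly the following
(dataset and host names are placeholders):

    zrep init tank/vm-100-disk-1 backup-host tank/vm-100-disk-1
    zrep sync tank/vm-100-disk-1    # or: zrep sync all, e.g. from cron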

On Fri, Feb 13, 2015 at 10:12 AM, Wolfgang Link w.l...@proxmox.com wrote:

 Does anybody know an open source tool for asynchronous replication with ZFS?
 I found this one, but it seems to be no longer active:
 http://www.bolthole.com/solaris/zrep/





[pve-devel] [PATCH 2/2] implement virtio iothread hotplug

2015-02-13 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |   38 +-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a811064..ae08f25 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3254,6 +3254,8 @@ sub vm_deviceplug {
 
 } elsif ($deviceid =~ m/^(virtio)(\d+)$/) {
 
+   qemu_iothread_add($vmid, $deviceid, $device);
+
 qemu_driveadd($storecfg, $vmid, $device);
 my $devicefull = print_drivedevice_full($storecfg, $conf, $vmid, 
$device);
 
@@ -,7 +3335,8 @@ sub vm_deviceunplug {
 qemu_devicedel($vmid, $deviceid);
 qemu_devicedelverify($vmid, $deviceid);
 qemu_drivedel($vmid, $deviceid);
-   
+   qemu_iothread_del($conf, $vmid, $deviceid);
+
 } elsif ($deviceid =~ m/^(scsihw)(\d+)$/) {
 
qemu_devicedel($vmid, $deviceid);
@@ -3373,6 +3376,25 @@ sub qemu_devicedel {
 my $ret = vm_mon_cmd($vmid, "device_del", id => $deviceid);
 }
 
+sub qemu_iothread_add {
+my($vmid, $deviceid, $device) = @_;
+
+if($device->{iothread} && $device->{iothread} eq 'yes') {
+   my $iothreads = vm_iothreads_list($vmid);
+   qemu_objectadd($vmid, "iothread-$deviceid", "iothread") if !$iothreads->{"iothread-$deviceid"};
+}
+}
+
+sub qemu_iothread_del {
+my($conf, $vmid, $deviceid) = @_;
+
+my $device = parse_drive($deviceid, $conf->{$deviceid});
+if($device->{iothread} && $device->{iothread} eq 'yes') {
+   my $iothreads = vm_iothreads_list($vmid);
+   qemu_objectdel($vmid, "iothread-$deviceid") if $iothreads->{"iothread-$deviceid"};
+}
+}
+
 sub qemu_objectadd {
 my($vmid, $objectid, $qomtype) = @_;
 
@@ -4060,6 +4082,7 @@ sub vmconfig_update_disk {
 
# skip non hotpluggable value
 if (&$safe_num_ne($drive->{discard}, $old_drive->{discard}) ||
+    &$safe_string_ne($drive->{iothread}, $old_drive->{iothread}) ||
     &$safe_string_ne($drive->{cache}, $old_drive->{cache})) {
     die "skip\n";
}
@@ -6147,4 +6170,17 @@ sub lspci {
 return $devices;
 }
 
+sub vm_iothreads_list {
+my ($vmid) = @_;
+
+my $res = vm_mon_cmd($vmid, 'query-iothreads');
+
+my $iothreads = {};
+foreach my $iothread (@$res) {
+   $iothreads->{ $iothread->{id} } = $iothread->{"thread-id"};
+}
+
+return $iothreads;
+}
+
 1;
-- 
1.7.10.4
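
As a quick sanity check after hotplug, the iothread objects can also be
listed from the monitor (VM id is a placeholder; the exact output format may
vary between qemu versions):

    qm monitor 100
    # then, at the monitor prompt:
    info iothreads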



Re: [pve-devel] ZFS tool for asynchronous replica

2015-02-13 Thread Lindsay Mathieson
On Sat, 14 Feb 2015 10:38:30 AM you wrote:
 Is it safe to use with KVM images? Is the replica crash-safe?

Hmm, I didn't express that very well. I meant: are the replica images
crash-consistent?