Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Andreas Steinel
On Fri, Oct 14, 2016 at 9:15 PM, Michael Rasmussen  wrote:

> On Fri, 14 Oct 2016 19:56:04 +0200 Andreas Steinel 
> wrote:
> > Isn't there a chicken and egg problem now? Where is the name defined if I
> > install via PXE in an automatic fashion?
>
> For Debian based distributions preseed exists. For Redhat based
> distributions kickstart exists. Both are capable of configuring the
> hostname.
>

Of course, but then the DHCP entry with the MAC has to exist first so that
the preseed/kickstart file is loaded with the correct name. If you already
have DHCP, then you also have DNS, and we are back at the beginning of the
discussion.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Michael Rasmussen
On Fri, 14 Oct 2016 19:56:04 +0200
Andreas Steinel  wrote:

> 
> Isn't there a chicken and egg problem now? Where is the name defined if I
> install via PXE in an automatic fashion?
> 
For Debian-based distributions preseed exists; for Red Hat-based
distributions kickstart exists. Both are capable of configuring the
hostname.
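For illustration, hostname configuration via preseed might look like the following sketch (hostname and domain are placeholders, not values from this thread; the kickstart equivalent is shown as a comment):

```conf
# Debian preseed: set hostname/domain during automated installation
d-i netcfg/get_hostname string vm-web01
d-i netcfg/get_domain string example.com
# Kickstart equivalent (Red Hat):
#   network --hostname=vm-web01.example.com
```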

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
PENGUINICITY!!




Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Andreas Steinel
Yes, that's one way to go. We used ddns almost 10 years ago and went for a
"full static" setup.

Isn't there a chicken and egg problem now? Where is the name defined if I
install via PXE in an automatic fashion?

Until now I set up the VM automatically (CLI), give it a name, retrieve
the MAC, insert it into our LDAP (including PXE options) and boot up the
VM. A few minutes later, the VM is installed and configured with a static
IP handed out by DHCP (we retrieve it via DHCP but set the retrieved IP
statically if it is not a dynamic pool address). This setup works fine for
creating a VM, but we still need to identify it afterwards when we see it
in the GUI. Therefore I add the IP/hostname to the comment so that I can
find it easily.

On Fri, Oct 14, 2016 at 7:01 PM, Michael Rasmussen  wrote:

> On Fri, 14 Oct 2016 16:59:52 +0200
> Andreas Steinel  wrote:
>
> > On Fri, Oct 14, 2016 at 4:45 PM, Michael Rasmussen 
> wrote:
> >
> > > On Fri, 14 Oct 2016 16:09:48 +0200
> > > Andreas Steinel  wrote:
> > >
> > > >
> > > > How do you guys solve this problem in big environments? Is there a
> > > > simpler way I don't see right now?
> > > >
> > > You could use DHCP assigned IP and a DNS server which automatically
> > > adds or removes IP from domain. If some VM's needs static IP this can
> > > be handled by DHCP as well.
> > >
> >
> > You mean a "self registering dns name"?
> >
> See here for an example:
> http://askubuntu.com/questions/162265/how-to-setup-dhcp-server-and-dynamic-dns-with-bind
> The magic words in dhcpd.conf are 'ddns-updates on' and
> 'ddns-update-style interim'.
>
> > I'm currently using a solution based on LDAP-backed DNS and DHCP
> > (including PXE options). The problem I have is how to "join" the
> > information based on IDs in Proxmox VE with the hostname. I can join
> > based on MAC if all machines are registered, but not otherwise.
> >
> > There was a discussion about getting IP addresses via qemu-agent and
> > display it in the Proxmox VE GUI, but that only helps for VMs running
> > with the agent. It would be simpler to add - SQL-technically speaking -
> > a hostname column to the KVM VMs. I need something to join and the VM
> > name can be ambiguous.
> >
> > In the end I want to see on the overview page of a VM what the hostname
> > and/or IP is for logging in. Currently I did it in the comment.
> In the Proxmox GUI you can assign a MAC yourself, which will be the VM's
> layer 2 address. The DHCP server can be configured to hand out a specific
> IP to a specific MAC; a bunch of other options are available, like
> automatic PXE setup
> (https://debian-administration.org/article/478/Setting_up_a_server_for_PXE_network_booting)
>
> --
> Hilsen/Regards
> Michael Rasmussen
>
> Get my public GnuPG keys:
> michael  rasmussen  cc
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
> mir  datanom  net
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
> mir  miras  org
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
> --
> /usr/games/fortune -es says:
> Logic is a systematic method of coming to the wrong conclusion with
> confidence.
>


Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Michael Rasmussen
On Fri, 14 Oct 2016 16:59:52 +0200
Andreas Steinel  wrote:

> On Fri, Oct 14, 2016 at 4:45 PM, Michael Rasmussen  wrote:
> 
> > On Fri, 14 Oct 2016 16:09:48 +0200
> > Andreas Steinel  wrote:
> >  
> > >
> > > How do you guys solve this problem in big environments? Is there a
> > > simpler way I don't see right now?
> > >  
> > You could use DHCP assigned IP and a DNS server which automatically
> > adds or removes IP from domain. If some VM's needs static IP this can
> > be handled by DHCP as well.
> >  
> 
> You mean a "self registering dns name"?
> 
See here for an example:
http://askubuntu.com/questions/162265/how-to-setup-dhcp-server-and-dynamic-dns-with-bind
The magic words in dhcpd.conf are 'ddns-updates on' and
'ddns-update-style interim'.
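A minimal dhcpd.conf sketch for this (zone, server address, and key name are placeholders; the matching TSIG key declaration and the BIND side are omitted):

```conf
# dhcpd.conf: let the DHCP server push dynamic DNS updates to BIND
ddns-update-style interim;
ddns-updates on;
ddns-domainname "example.com.";
ddns-rev-domainname "in-addr.arpa.";

zone example.com. {
  # BIND server that accepts the updates
  primary 192.0.2.53;
  # TSIG key shared with BIND (declaration not shown)
  key dhcp-update-key;
}
```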

> I'm currently using a solution based on LDAP-backed DNS and DHCP (including
> PXE options). The problem I have is how to "join" the information based on
> IDs in Proxmox VE with the hostname. I can join based on MAC if all
> machines are registered, but not otherwise.
> 
> There was a discussion about getting IP addresses via qemu-agent and
> display it in the Proxmox VE GUI, but that only helps for VMs running with
> the agent. It would be simpler to add - SQL-technically speaking - a
> hostname column to the KVM VMs. I need something to join and the VM name
> can be ambiguous.
> 
> In the end I want to see on the overview page of a VM what the hostname
> and/or IP is for logging in. Currently I did it in the comment.
In the Proxmox GUI you can assign a MAC yourself, which will be the VM's
layer 2 address. The DHCP server can be configured to hand out a specific
IP to a specific MAC; a bunch of other options are available, like
automatic PXE setup
(https://debian-administration.org/article/478/Setting_up_a_server_for_PXE_network_booting)
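A hedged sketch of such a dhcpd.conf host entry (MAC, addresses, and file names are made-up examples, not values from this thread):

```conf
# dhcpd.conf: pin an IP (and PXE boot file) to the MAC set in the Proxmox GUI
host vm101 {
  # MAC assigned on the VM's virtual NIC
  hardware ethernet 02:00:00:aa:bb:01;
  fixed-address 192.0.2.101;
  option host-name "vm101";
  # PXE bootloader and TFTP server, if netbooting
  filename "pxelinux.0";
  next-server 192.0.2.10;
}
```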

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Logic is a systematic method of coming to the wrong conclusion with
confidence.




Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Andreas Steinel
On Fri, Oct 14, 2016 at 4:45 PM, Michael Rasmussen  wrote:

> On Fri, 14 Oct 2016 16:09:48 +0200
> Andreas Steinel  wrote:
>
> >
> > How do you guys solve this problem in big environments? Is there a
> > simpler way I don't see right now?
> >
> You could use DHCP assigned IP and a DNS server which automatically
> adds or removes IP from domain. If some VM's needs static IP this can
> be handled by DHCP as well.
>

You mean a "self registering dns name"?

I'm currently using a solution based on LDAP-backed DNS and DHCP (including
PXE options). The problem I have is how to "join" the information based on
IDs in Proxmox VE with the hostname. I can join based on MAC if all
machines are registered, but not otherwise.

There was a discussion about getting IP addresses via qemu-agent and
displaying them in the Proxmox VE GUI, but that only helps for VMs running
the agent. It would be simpler to add - SQL-technically speaking - a
hostname column to the KVM VMs. I need something to join on, and the VM
name can be ambiguous.

In the end I want to see on the overview page of a VM what the hostname
and/or IP is, for logging in. Currently I do this in the comment.


Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Michael Rasmussen
On Fri, 14 Oct 2016 16:09:48 +0200
Andreas Steinel  wrote:

> 
> How do you guys solve this problem in big environments? Is there a simpler
> way I don't see right now?
> 
You could use DHCP-assigned IPs and a DNS server which automatically
adds or removes them from the domain. If some VMs need a static IP, this
can be handled by DHCP as well.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Kitchen activity is highlighted.  Butter up a friend.




[pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Andreas Steinel
Hi,

I'd like to discuss a feature request about having a "real" hostname on KVM
machines or some other mechanism to solve my problem.

I have a rather big environment with over a hundred KVM VMs and also
different networks, including different DNS settings. Currently I "encode"
further VM information in the first two lines of the comment of each VM and
then "decode" it in automated tasks on the command line or via the API.
With the advent of LXC, there are options for the hostname and DNS
settings. Could it be possible and feasible to introduce such things into
the KVM world as well? Using the "real" hostname as the name of the KVM
guest is just too long. I could use pools for this, but I already use
pools to differentiate the machines by purpose.
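As an illustration of the encode/decode approach described above: the actual comment format is site-specific and not shown in the thread, so the two key=value lines assumed below are hypothetical. A decoder could look roughly like this:

```python
# Hypothetical sketch: assumes the first two comment lines carry
# "hostname=<fqdn>" and "ip=<address>" key=value pairs. The real format
# used in this setup is not shown in the thread.

def decode_comment(comment: str) -> dict:
    """Extract key=value pairs from the first two lines of a VM comment."""
    info = {}
    for line in comment.splitlines()[:2]:
        # partition() keeps lines without '=' out of the result
        key, sep, value = line.partition("=")
        if sep and key.strip():
            info[key.strip()] = value.strip()
    return info


sample = "hostname=web01.example.com\nip=192.0.2.10\nfree-form notes"
print(decode_comment(sample))
# → {'hostname': 'web01.example.com', 'ip': '192.0.2.10'}
```

A script like this could be fed from `qm config <vmid>` or the API's config endpoint to build the ID-to-hostname mapping.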

How do you guys solve this problem in big environments? Is there a simpler
way I don't see right now?

Best,
Andreas
(LnxBil)


[pve-devel] [RFC zfsonlinux 0/3] Update ZFS to 0.6.5.8

2016-10-14 Thread Fabian Grünbichler
This patch series moves from the ZoL package base to a Debian Jessie
package base. Because of the different package names, this requires adding
some transitional packages.
transitional packages.

the new packaging base is much closer to upstream and has some other nice 
features:
- cleaner packaging scripts
- included scrub cron job for healthy pools
- zfs-zed as own package

as well as the bug fixes included in 0.6.5.8

tested using PVE 4.3, with ZFS as root and non-root pools.
tested upgrade path from PVE 3.4, with ZFS as root and non-root pools.

The alternative to patch #2 is to increase our packaging delta by patching
all the diverging package names, but I prefer to switch to the ones used by
Debian (especially now that ZFS 0.6.5.8 is also in jessie-backports).

Fabian Grünbichler (3):
  switch pkg source to Debian Jessie
  add transitional packages and relations for upgrades
  bump version to 0.6.5.8-pve11/-pve7

 Makefile|  43 ++--
 pkg-spl.tar.gz  | Bin 14477761 -> 4036845 bytes
 pkg-zfs.tar.gz  | Bin 38521842 -> 11020426 
bytes
 spl-changelog.Debian|   8 +
 spl-patches/fix-control.patch   | 200 +-
 zfs-changelog.Debian|  10 +
 zfs-patches/add-zfsutils-preinst-postinst.patch |  63 ++
 zfs-patches/fix-control.patch   | 259 
 zfs-patches/fix-dependencies-for-upgrades.patch | 137 +
 zfs-patches/fix-dh-installinit.patch|  96 -
 zfs-patches/series  |   4 +-
 zfs-patches/skip-unneeded-pull-requests.patch   |  21 --
 12 files changed, 522 insertions(+), 319 deletions(-)
 create mode 100644 zfs-patches/add-zfsutils-preinst-postinst.patch
 create mode 100644 zfs-patches/fix-dependencies-for-upgrades.patch
 delete mode 100644 zfs-patches/fix-dh-installinit.patch
 delete mode 100644 zfs-patches/skip-unneeded-pull-requests.patch

-- 
2.1.4




[pve-devel] [RFC zfsonlinux 2/3] add transitional packages and relations for upgrades

2016-10-14 Thread Fabian Grünbichler
---
 Makefile|  11 +-
 zfs-patches/fix-dependencies-for-upgrades.patch | 137 
 zfs-patches/series  |   1 +
 3 files changed, 147 insertions(+), 2 deletions(-)
 create mode 100644 zfs-patches/fix-dependencies-for-upgrades.patch

diff --git a/Makefile b/Makefile
index 942617b..7cbabaf 100644
--- a/Makefile
+++ b/Makefile
@@ -27,7 +27,14 @@ zfs-zed_${ZFSPKGVER}_amd64.deb   \
 zfs-initramfs_${ZFSPKGVER}_all.deb \
 zfsutils-linux_${ZFSPKGVER}_amd64.deb
 
-DEBS=${SPL_DEBS} ${ZFS_DEBS} 
+ZFS_TRANS_DEBS=\
+libnvpair1_${ZFSPKGVER}_all.deb\
+libuutil1_${ZFSPKGVER}_all.deb \
+libzfs2_${ZFSPKGVER}_all.deb   \
+libzpool2_${ZFSPKGVER}_all.deb \
+zfsutils_${ZFSPKGVER}_all.deb
+
+DEBS=${SPL_DEBS} ${ZFS_DEBS} ${ZFS_TRANS_DEBS}
 
 all: ${DEBS}
 
@@ -47,7 +54,7 @@ spl ${SPL_DEBS}: ${SPLSRC}
cd ${SPLDIR}; dpkg-buildpackage -b -uc -us 
 
 .PHONY: zfs
-zfs ${ZFS_DEBS}: ${ZFSSRC}
+zfs ${ZFS_DEBS} ${ZFS_TRANS_DEBS}: ${ZFSSRC}
rm -rf ${ZFSDIR}
tar xf ${ZFSSRC}
mv ${ZFSDIR}/debian/changelog ${ZFSDIR}/debian/changelog.org
diff --git a/zfs-patches/fix-dependencies-for-upgrades.patch 
b/zfs-patches/fix-dependencies-for-upgrades.patch
new file mode 100644
index 000..92c4fe9
--- /dev/null
+++ b/zfs-patches/fix-dependencies-for-upgrades.patch
@@ -0,0 +1,137 @@
+From 4d9b9f40e1b92fba360aed1158578a75f32cbdce Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Fabian=20Gr=C3=BCnbichler?= 
+Date: Wed, 12 Oct 2016 13:16:03 +0200
+Subject: [PATCH] ensure upgrade path from existing PVE ZFS packages
+
+---
+ debian/control.in | 59 ++-
+ 1 file changed, 58 insertions(+), 1 deletion(-)
+
+diff --git a/debian/control.in b/debian/control.in
+index 22dd958..2bee2bf 100644
+--- a/debian/control.in
 b/debian/control.in
+@@ -28,6 +28,8 @@ Package: libnvpair1linux
+ Section: contrib/libs
+ Architecture: linux-any
+ Depends: ${misc:Depends}, ${shlibs:Depends}
++Replaces: libnvpair1 (<< 0.6.5.8-pve11~bpo80)
++Breaks: libnvpair1 (<< 0.6.5.8-pve11~bpo80)
+ Description: Solaris name-value library for Linux
+  This library provides routines for packing and unpacking nv pairs for
+  transporting data across process boundaries, transporting between
+@@ -37,6 +39,8 @@ Package: libuutil1linux
+ Section: contrib/libs
+ Architecture: linux-any
+ Depends: ${misc:Depends}, ${shlibs:Depends}
++Replaces: libuutil1 (<< 0.6.5.8-pve11~bpo80)
++Breaks: libuutil1 (<< 0.6.5.8-pve11~bpo80)
+ Description: Solaris userland utility library for Linux
+  This library provides a variety of glue functions for ZFS on Linux:
+   * libspl: The Solaris Porting Layer userland library, which provides APIs
+@@ -54,6 +58,8 @@ Architecture: linux-any
+ Depends: libzfs2linux (= ${binary:Version}), libzpool2linux (= 
${binary:Version}),
+  libnvpair1linux (= ${binary:Version}), libuutil1linux (= ${binary:Version}),
+  ${misc:Depends}
++Replaces: libzfs-dev (<< 0.6.5.8-pve11~bpo80)
++Breaks: libzfs-dev (<< 0.6.5.8-pve11~bpo80)
+ Provides: libnvpair-dev, libuutil-dev
+ Description: OpenZFS filesystem development files for Linux
+  Header files and static libraries for compiling software against
+@@ -66,6 +72,8 @@ Package: libzfs2linux
+ Section: contrib/libs
+ Architecture: linux-any
+ Depends: ${misc:Depends}, ${shlibs:Depends}
++Replaces: libzfs2 (<< 0.6.5.8-pve11~bpo80)
++Breaks: libzfs2 (<< 0.6.5.8-pve11~bpo80)
+ Description: OpenZFS filesystem library for Linux
+  The Z file system is a pooled filesystem designed for maximum data
+  integrity, supporting data snapshots, multiple copies, and data
+@@ -77,6 +85,8 @@ Package: libzpool2linux
+ Section: contrib/libs
+ Architecture: linux-any
+ Depends: ${misc:Depends}, ${shlibs:Depends}
++Replaces: libzpool2 (<< 0.6.5.8-pve11~bpo80)
++Breaks: libzpool2 (<< 0.6.5.8-pve11~bpo80)
+ Description: OpenZFS pool library for Linux
+  The Z file system is a pooled filesystem designed for maximum data
+  integrity, supporting data snapshots, multiple copies, and data
+@@ -88,8 +98,10 @@ Package: zfs-initramfs
+ Architecture: all
+ Depends: initramfs-tools,
+  busybox-initramfs | busybox-static | busybox,
+- zfsutils-linux,
++ zfsutils-linux (>= 0.6.5.8-pve11~bpo80),
+  ${misc:Depends}
++Breaks: zfs-initramfs (<< 0.6.5.8-pve11~bpo80)
++Replaces: zfs-initramfs (<< 0.6.5.8-pve11~bpo80)
+ Description: OpenZFS root filesystem capabilities for Linux - initramfs
+  The Z file system is a pooled filesystem designed for maximum data
+  integrity, supporting data snapshots, multiple copies, and data
+@@ -104,7 +116,9 @@ Architecture: linux-any
+ Depends: ${misc:Depends}, ${shlibs:Depends}, ${python:Depends}
+ Recommends: lsb-base, zfs-zed
+ Suggests: nfs-kernel-server, samba-common-bin (>= 3.0.23), zfs-initramfs
++Replaces: 

[pve-devel] [RFC zfsonlinux 1/3] switch pkg source to Debian Jessie

2016-10-14 Thread Fabian Grünbichler
update to 0.6.5.8
drop unneeded patches
refresh no-DKMS and no-dracut patches
---
Note: this patch does not apply because of the binary diff lines,
remove those before applying.

 Makefile|  26 +--
 pkg-spl.tar.gz  | Bin 14477761 -> 4036845 bytes
 pkg-zfs.tar.gz  | Bin 38521842 -> 11020426 
bytes
 spl-patches/fix-control.patch   | 200 +-
 zfs-patches/add-zfsutils-preinst-postinst.patch |  63 ++
 zfs-patches/fix-control.patch   | 259 
 zfs-patches/fix-dh-installinit.patch|  96 -
 zfs-patches/series  |   3 +-
 zfs-patches/skip-unneeded-pull-requests.patch   |  21 --
 9 files changed, 354 insertions(+), 314 deletions(-)
 create mode 100644 zfs-patches/add-zfsutils-preinst-postinst.patch
 delete mode 100644 zfs-patches/fix-dh-installinit.patch
 delete mode 100644 zfs-patches/skip-unneeded-pull-requests.patch

diff --git a/Makefile b/Makefile
index 15bbbcf..942617b 100644
--- a/Makefile
+++ b/Makefile
@@ -17,14 +17,15 @@ SPL_DEBS=   \
 spl_${SPLPKGVER}_amd64.deb
 
 ZFS_DEBS=  \
-libnvpair1_${ZFSPKGVER}_amd64.deb  \
-libuutil1_${ZFSPKGVER}_amd64.deb   \
-libzfs2_${ZFSPKGVER}_amd64.deb \
-libzfs-dev_${ZFSPKGVER}_amd64.deb  \
-libzpool2_${ZFSPKGVER}_amd64.deb   \
+libnvpair1linux_${ZFSPKGVER}_amd64.deb \
+libuutil1linux_${ZFSPKGVER}_amd64.deb  \
+libzfs2linux_${ZFSPKGVER}_amd64.deb\
+libzfslinux-dev_${ZFSPKGVER}_amd64.deb \
+libzpool2linux_${ZFSPKGVER}_amd64.deb  \
 zfs-dbg_${ZFSPKGVER}_amd64.deb \
-zfs-initramfs_${ZFSPKGVER}_amd64.deb   \
-zfsutils_${ZFSPKGVER}_amd64.deb
+zfs-zed_${ZFSPKGVER}_amd64.deb \
+zfs-initramfs_${ZFSPKGVER}_all.deb \
+zfsutils-linux_${ZFSPKGVER}_amd64.deb
 
 DEBS=${SPL_DEBS} ${ZFS_DEBS} 
 
@@ -43,7 +44,6 @@ spl ${SPL_DEBS}: ${SPLSRC}
cd ${SPLDIR}; ln -s ../spl-patches patches
cd ${SPLDIR}; quilt push -a
cd ${SPLDIR}; rm -rf .pc ./patches
-   cd ${SPLDIR}; ./debian/rules override_dh_prep-base-deb-files
cd ${SPLDIR}; dpkg-buildpackage -b -uc -us 
 
 .PHONY: zfs
@@ -55,16 +55,16 @@ zfs ${ZFS_DEBS}: ${ZFSSRC}
cd ${ZFSDIR}; ln -s ../zfs-patches patches
cd ${ZFSDIR}; quilt push -a
cd ${ZFSDIR}; rm -rf .pc ./patches
-   cd ${ZFSDIR}; ./debian/rules override_dh_prep-base-deb-files
cd ${ZFSDIR}; dpkg-buildpackage -b -uc -us 
 
 .PHONY: download
 download:
rm -rf pkg-spl pkg-zfs ${SPLSRC} ${ZFSSRC}
-   # clone pkg-spl and checkout 0.6.5.7-5
-   git clone -b master/debian/jessie/0.6.5.7-5-jessie 
https://github.com/zfsonlinux/pkg-spl.git
-   # clone pkg-zfs and checkout 0.6.5.7-8
-   git clone -b master/debian/jessie/0.6.5.7-8-jessie 
https://github.com/zfsonlinux/pkg-zfs.git
+   # clone pkg-zfsonlinux/spl and checkout 0.6.5.8-2
+   git clone -b debian/0.6.5.8-2 
git://anonscm.debian.org/pkg-zfsonlinux/spl.git pkg-spl
+   # clone pkg-zfsonlinux/zfs and checkout 0.6.5.8-1
+   git clone git://anonscm.debian.org/pkg-zfsonlinux/zfs.git pkg-zfs
+   cd pkg-zfs; git checkout dcd337ff3c03ed63a2435960a6290b9ee847e5f2
tar czf ${SPLSRC} pkg-spl
tar czf ${ZFSSRC} pkg-zfs
 
diff --git a/pkg-spl.tar.gz b/pkg-spl.tar.gz
index c4ac665..96c79f7 100644
Binary files a/pkg-spl.tar.gz and b/pkg-spl.tar.gz differ
diff --git a/pkg-zfs.tar.gz b/pkg-zfs.tar.gz
index 967c1c4..fa0afb9 100644
Binary files a/pkg-zfs.tar.gz and b/pkg-zfs.tar.gz differ
diff --git a/spl-patches/fix-control.patch b/spl-patches/fix-control.patch
index 5714f00..1c3a25a 100644
--- a/spl-patches/fix-control.patch
+++ b/spl-patches/fix-control.patch
@@ -1,18 +1,29 @@
-Index: new/debian/control.in
-===
 new.orig/debian/control.in
-+++ new/debian/control.in
-@@ -35,34 +35,9 @@ Description: Native ZFS filesystem kerne
-  This package provides the source to the SPL kernel module in a form
-  suitable for use by module-assistant or kernel-package.
+From 764866342a7b4090c0530953db2d9dc5ed6a3b0e Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Fabian=20Gr=C3=BCnbichler?= 
+Date: Wed, 12 Oct 2016 10:57:39 +0200
+Subject: [PATCH] remove DKMS and module build
+
+---
+diff --git a/debian/control.in b/debian/control.in
+index 6392ddc..dc6ee96 100644
+--- a/debian/control.in
 b/debian/control.in
+@@ -8,40 +8,16 @@ Build-Depends: autogen,
+autotools-dev,
+debhelper (>= 9),
+dh-autoreconf,
+-   dkms (>> 2.2.0.2-1~),
+libtool
+ Standards-Version: 3.9.8
+ Homepage: http://www.zfsonlinux.org/
+ Vcs-Git: 

[pve-devel] [RFC zfsonlinux 3/3] bump version to 0.6.5.8-pve11/-pve7

2016-10-14 Thread Fabian Grünbichler
---
Note: included to make the package names easily distinguishable when test-building

 Makefile |  6 +++---
 spl-changelog.Debian |  8 
 zfs-changelog.Debian | 10 ++
 3 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/Makefile b/Makefile
index 7cbabaf..ca27979 100644
--- a/Makefile
+++ b/Makefile
@@ -2,9 +2,9 @@ RELEASE=4.1
 
 # source form https://github.com/zfsonlinux/
 
-ZFSVER=0.6.5.7
-ZFSPKGREL=pve10~bpo80
-SPLPKGREL=pve6~bpo80
+ZFSVER=0.6.5.8
+ZFSPKGREL=pve11~bpo80
+SPLPKGREL=pve7~bpo80
 ZFSPKGVER=${ZFSVER}-${ZFSPKGREL}
 SPLPKGVER=${ZFSVER}-${SPLPKGREL}
 
diff --git a/spl-changelog.Debian b/spl-changelog.Debian
index 8cb336b..4bb85ea 100644
--- a/spl-changelog.Debian
+++ b/spl-changelog.Debian
@@ -1,3 +1,11 @@
+spl-linux (0.6.5.8-pve7~bpo80) unstable; urgency=medium
+
+  * update spl to debian/0.6.5.8-2
+
+  * switch package upstream sources to Debian (Jessie)
+
+ -- Proxmox Support Team   Wed, 12 Oct 2016 11:16:02 +0200
+
 spl-linux (0.6.5.7-pve6~bpo80) unstable; urgency=medium
 
   * update pkg-spl to jessie/0.6.5.7-5
diff --git a/zfs-changelog.Debian b/zfs-changelog.Debian
index 7732392..683165f 100644
--- a/zfs-changelog.Debian
+++ b/zfs-changelog.Debian
@@ -1,3 +1,13 @@
+zfs-linux (0.6.5.8-pve11~bpo80) unstable; urgency=medium
+
+  * update zfs to debian/0.6.5.8-1
+
+  * switch package upstream sources to Debian (Jessie)
+
+  * add transitional packages for upgrades
+
+ -- Proxmox Support Team   Wed, 12 Oct 2016 11:16:02 +0200
+
 zfs-linux (0.6.5.7-pve10~bpo80) unstable; urgency=medium
 
   * update to pkg-zfs jessie/0.6.5.7-8
-- 
2.1.4




Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Michael Rasmussen
That might explain the difference.

On October 14, 2016 12:15:42 PM GMT+02:00, Andreas Steinel 
 wrote:
>On Fri, Oct 14, 2016 at 12:08 PM, datanom.net  wrote:
>
>> On 2016-10-14 11:13, Andreas Steinel wrote:
>>>
>>> So, what was your test environment? How big was the difference?
>>>
>>> Are you running your ZFS pool on the proxmox node?
>
>
>Yes, everything local on the node itself.

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


[pve-devel] [PATCH v2 container 1/2] fix #1147: allow marking non-volume mps as shared

2016-10-14 Thread Fabian Grünbichler
this introduces a new option for non-volume mount points,
modeled after the way we define 'shared' storages: the
boolean flag 'shared' marks a mount point as available on
other nodes (default: false)

when migrating containers with non-volume mount points,
this new property is checked, and a migration is only
allowed if all such mount points are 'shared'.

setting this flag allows containers with non-volume mount
points to be migrated by the ha-manager as well, which was
previously not possible.

for backwards compatibility, the old "workaround" option
'-force' for 'pct migrate' still works, but displays a
warning pointing to the new options.
---
changes to v1: remove "nodes" property
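For illustration, a container config line using the new flag might look like the following sketch (the storage path and mount target are made-up examples; the syntax follows the existing mount point format plus the new 'shared' flag):

```conf
# /etc/pve/lxc/<vmid>.conf: bind mount marked as available on all nodes
mp0: /mnt/nfs/data,mp=/data,shared=1
```

With 'shared=1' set on every non-volume mount point, migration no longer needs the deprecated '-force' workaround.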

 src/PVE/API2/LXC.pm|  3 +--
 src/PVE/LXC/Config.pm  |  7 +++
 src/PVE/LXC/Migrate.pm | 19 ++-
 3 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 15ebb87..d0e558a 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -848,8 +848,7 @@ __PACKAGE__->register_method({
force => {
type => 'boolean',
description => "Force migration despite local bind / device" .
-   " mounts. WARNING: identical bind / device mounts need to ".
-   " be available on the target node.",
+   " mounts. NOTE: deprecated, use 'shared' property of mount 
point instead.",
optional => 1,
},
},
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 2ec643e..e1b159c 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -245,6 +245,13 @@ my $rootfs_desc = {
description => 'Enable user quotas inside the container (not supported 
with zfs subvolumes)',
optional => 1,
 },
+shared => {
+   type => 'boolean',
+   description => 'Mark this non-volume mount point as available on 
multiple nodes (see \'nodes\')',
+   verbose_description => "Mark this non-volume mount point as available 
on all nodes.\n\nWARNING: This option does not share the mount point 
automatically, it assumes it is shared already!",
+   optional => 1,
+   default => 0,
+},
 };
 
 PVE::JSONSchema::register_standard_option('pve-ct-rootfs', {
diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index 1c168bb..10b2b69 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -46,11 +46,21 @@ sub prepare {
my ($ms, $mountpoint) = @_;
 
my $volid = $mountpoint->{volume};
+   my $type = $mountpoint->{type};
 
-   # skip dev/bind mps when forced
-   if ($mountpoint->{type} ne 'volume' && $force) {
-   return;
+   # skip dev/bind mps when forced / shared
+   if ($type ne 'volume') {
+   if ($force) {
+   warn "-force is deprecated, please use the 'shared' property on 
individual non-volume mount points instead!\n";
+   return;
+   }
+   if ($mountpoint->{shared}) {
+   return;
+   } else {
+   die "cannot migrate local $type mount point '$ms'\n";
+   }
}
+
my ($storage, $volname) = PVE::Storage::parse_volume_id($volid, 1) if 
$volid;
die "can't determine assigned storage for mountpoint '$ms'\n" if 
!$storage;
 
@@ -151,8 +161,7 @@ sub phase1 {
my $volid = $mountpoint->{volume};
# already checked in prepare
if ($mountpoint->{type} ne 'volume') {
-   $self->log('info', "ignoring mountpoint '$ms' ('$volid') of type " .
-   "'$mountpoint->{type}', migration is forced.")
+   $self->log('info', "ignoring shared '$mountpoint->{type}' mount 
point '$ms' ('$volid')")
if !$snapname;
return;
}
-- 
2.1.4




[pve-devel] [PATCH v2 container 2/2] fix spelling: 'mountpoint' 'mount point'

2016-10-14 Thread Fabian Grünbichler
---
just rebased

 src/PVE/API2/LXC.pm| 14 +++---
 src/PVE/CLI/pct.pm |  2 +-
 src/PVE/LXC.pm |  2 +-
 src/PVE/LXC/Config.pm  | 14 +++---
 src/PVE/LXC/Migrate.pm |  4 ++--
 src/PVE/VZDump/LXC.pm  |  6 +++---
 6 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index d0e558a..38b1feb 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -276,7 +276,7 @@ __PACKAGE__->register_method({
}
}
 
-   die "mountpoints configured, but 'rootfs' not set - aborting\n"
+   die "mount points configured, but 'rootfs' not set - aborting\n"
if !$storage_only_mode && !defined($mp_param->{rootfs});
 
# check storage access, activate storage
@@ -358,8 +358,8 @@ __PACKAGE__->register_method({
 
if ($mountpoint->{backup}) {
warn "WARNING - unsupported 
configuration!\n";
-   warn "backup was enabled for $type 
mountpoint $ms ('$mountpoint->{mp}')\n";
-   warn "mountpoint configuration will be 
restored after archive extraction!\n";
+   warn "backup was enabled for $type mount 
point $ms ('$mountpoint->{mp}')\n";
+   warn "mount point configuration will be 
restored after archive extraction!\n";
warn "contained files will be restored to 
wrong directory!\n";
}
delete $mp_param->{$ms}; # actually delay 
bind/dev mps
@@ -1254,7 +1254,7 @@ __PACKAGE__->register_method({
if ($fullclone->{$opt}) {
die "fixme: full clone not implemented\n";
} else {
-   print "create linked clone of mountpoint $opt 
($volid)\n";
+   print "create linked clone of mount point $opt 
($volid)\n";
my $newvolid = PVE::Storage::vdisk_clone($storecfg, 
$volid, $newid, $snapname);
push @$newvollist, $newvolid;
$mp->{volume} = $newvolid;
@@ -1300,7 +1300,7 @@ __PACKAGE__->register_method({
 method => 'PUT',
 protected => 1,
 proxyto => 'node',
-description => "Resize a container mountpoint.",
+description => "Resize a container mount point.",
 permissions => {
check => ['perm', '/vms/{vmid}', ['VM.Config.Disk'], any => 1],
 },
@@ -1373,7 +1373,7 @@ __PACKAGE__->register_method({
my (undef, undef, $owner, undef, undef, undef, $format) =
PVE::Storage::parse_volname($storage_cfg, $volid);
 
-   die "can't resize mountpoint owned by another container ($owner)"
+   die "can't resize mount point owned by another container ($owner)"
if $vmid != $owner;
 
die "can't resize volume: $disk if snapshot exists\n"
@@ -1411,7 +1411,7 @@ __PACKAGE__->register_method({
$mp->{mp} = '/';
my $use_loopdev = (PVE::LXC::mountpoint_mount_path($mp, 
$storage_cfg))[1];
$path = PVE::LXC::query_loopdev($path) if $use_loopdev;
-   die "internal error: CT running but mountpoint not 
attached to a loop device"
+   die "internal error: CT running but mount point not 
attached to a loop device"
if !$path;
PVE::Tools::run_command(['losetup', '--set-capacity', 
$path]) if $use_loopdev;
 
diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index cf4a014..53a4ec2 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -227,7 +227,7 @@ __PACKAGE__->register_method ({
my $conf = PVE::LXC::Config->load_config($vmid);
my $storage_cfg = PVE::Storage::config();
 
-   defined($conf->{$device}) || die "cannot run command on unexisting 
mountpoint $device\n";
+   defined($conf->{$device}) || die "cannot run command on 
non-existing mount point $device\n";
 
my $mount_point = $device eq 'rootfs' ? 
PVE::LXC::Config->parse_ct_rootfs($conf->{$device}) :
PVE::LXC::Config->parse_ct_mountpoint($conf->{$device});
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 35ce796..bf16f8a 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -869,7 +869,7 @@ sub check_ct_modify_config_perm {
return if $delete;
	    my $data = $opt eq 'rootfs' ? PVE::LXC::Config->parse_ct_rootfs($newconf->{$opt})
		: PVE::LXC::Config->parse_ct_mountpoint($newconf->{$opt});
-	    raise_perm_exc("mountpoint type $data->{type}") if $data->{type} ne 'volume';
+	    raise_perm_exc("mount point type $data->{type}") if $data->{type} ne 'volume';
} elsif ($opt eq 'memory' || $opt eq 'swap') {
 

Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Andreas Steinel
On Fri, Oct 14, 2016 at 12:08 PM, datanom.net  wrote:

> On 2016-10-14 11:13, Andreas Steinel wrote:
>>
>> So, what was your test environment? How big was the difference?
>>
>> Are you running your ZFS pool on the proxmox node?


Yes, everything local on the node itself.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread datanom.net

On 2016-10-14 11:13, Andreas Steinel wrote:

Hi Mir,

On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen  wrote:

I use virtio-scsi-single exclusively because of the huge performance
gain in comparison to virtio-scsi, so I can concur with that.


I just benchmarked it on a full-SSD ZFS system of mine and got reverse
results.
I used 4 cores, 512 MB RAM (fio 2.1.11, qd32, direct, libaio) and got these
results:

Test                  | sequential 8K | randread 4K | randrw 4K 50/50
----------------------+---------------+-------------+----------------
virtio-scsi           |   53k         | 57k         | 11k
virtio-scsi-single    |   35k         | 41k         | 11k
virtio-scsi IO/Thread |   29k         | 43k         | 11k
virtio-scsi-single IO |   29k         | 44k         | 11k


So, what was your test environment? How big was the difference?


Are you running your ZFS pool on the proxmox node?
My benchmarks were made using ZFS over iSCSI.

--
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--






Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Alexandre DERUMIER
>>So, what was your test environment? How big was the difference?

That's strange; there are technical differences between virtio-scsi and
virtio-scsi-single.

With virtio-scsi-single you have one virtio-scsi controller per disk.


For iothread, you should see a difference with multiple disks in one VM.
This needs virtio-scsi-single, because the iothread is mapped to the
controller, not to the disk.
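
As a sketch of what that looks like in a VM config (the VMID, storage name,
and disk sizes here are hypothetical; option syntax as in qemu-server), one
controller plus one iothread per disk:

```
# /etc/pve/qemu-server/101.conf (hypothetical example)
scsihw: virtio-scsi-single
scsi0: local-zfs:vm-101-disk-1,size=32G,iothread=1
scsi1: local-zfs:vm-101-disk-2,size=32G,iothread=1
```

With plain virtio-scsi, both disks would instead hang off one shared
controller, so a single iothread cannot be dedicated per disk.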



- Original mail -
From: "Andreas Steinel" 
To: "pve-devel" 
Sent: Friday, October 14, 2016 11:13:34
Subject: Re: [pve-devel] pve-manager and disk IO monitoring

Hi Mir, 

On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen  wrote: 
> I use virtio-scsi-single exclusively because of the huge performance 
> gain in comparison to virtio-scsi, so I can concur with that. 

I just benchmarked it on a full-SSD ZFS system of mine and got reverse 
results. 
I used 4 cores, 512 MB RAM (fio 2.1.11, qd32, direct, libaio) and got these 
results: 

Test                  | sequential 8K | randread 4K | randrw 4K 50/50 
----------------------+---------------+-------------+---------------- 
virtio-scsi           |   53k         | 57k         | 11k 
virtio-scsi-single    |   35k         | 41k         | 11k 
virtio-scsi IO/Thread |   29k         | 43k         | 11k 
virtio-scsi-single IO |   29k         | 44k         | 11k 


So, what was your test environment? How big was the difference? 

Best, 
LnxBil 



Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Andreas Steinel
Hi Mir,

On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen  wrote:
> I use virtio-scsi-single exclusively because of the huge performance
> gain in comparison to virtio-scsi, so I can concur with that.

I just benchmarked it on a full-SSD ZFS system of mine and got reverse
results.
I used 4 cores, 512 MB RAM (fio 2.1.11, qd32, direct, libaio) and got these
results:

Test                  | sequential 8K | randread 4K | randrw 4K 50/50
----------------------+---------------+-------------+----------------
virtio-scsi           |   53k         | 57k         | 11k
virtio-scsi-single    |   35k         | 41k         | 11k
virtio-scsi IO/Thread |   29k         | 43k         | 11k
virtio-scsi-single IO |   29k         | 44k         | 11k
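
The parameters above map onto a fio job file roughly like the sketch below
(fio 2.1.11 syntax; the target path and size are assumptions for
illustration, not the original test setup):

```
; 4K random read, queue depth 32, direct I/O via libaio
[global]
ioengine=libaio
direct=1
iodepth=32
runtime=60
time_based

[randread-4k]
rw=randread
bs=4k
filename=/dev/zvol/rpool/fio-test   ; hypothetical ZFS zvol
size=4G
```

The randrw 50/50 case would use rw=randrw with rwmixread=50, and the
sequential 8K case rw=read with bs=8k.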


So, what was your test environment? How big was the difference?

Best,
LnxBil


[pve-devel] applied: [PATCH kernel] Fix #927: add IPoIB performance regression fix

2016-10-14 Thread Fabian Grünbichler
applied



[pve-devel] Applied [PATCH kernel] update to Ubuntu 4.4.0-43.63, bump version to 4.4.21-69

2016-10-14 Thread Fabian Grünbichler
---
Note: already applied

 Makefile  |   4 ++--
 changelog.Debian  |   8 
 ubuntu-xenial.tgz | Bin 145659146 -> 145650761 bytes
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/Makefile b/Makefile
index 6a608c5..0f41f5a 100644
--- a/Makefile
+++ b/Makefile
@@ -2,7 +2,7 @@ RELEASE=4.3
 
 # also update proxmox-ve/changelog if you change KERNEL_VER or KREL
 KERNEL_VER=4.4.21
-PKGREL=68
+PKGREL=69
 # also include firmware of previous version into
 # the fw package:  fwlist-2.6.32-PREV-pve
 KREL=1
@@ -127,7 +127,7 @@ ${VIRTUAL_HDR_DEB} pve-headers: proxmox-ve/pve-headers.control
 download:
rm -rf ${KERNEL_SRC} ${KERNELSRCTAR}
#git clone git://kernel.ubuntu.com/ubuntu/ubuntu-vivid.git
-	git clone --single-branch -b Ubuntu-4.4.0-42.62 git://kernel.ubuntu.com/ubuntu/ubuntu-xenial.git ${KERNEL_SRC}
+	git clone --single-branch -b Ubuntu-4.4.0-43.63 git://kernel.ubuntu.com/ubuntu/ubuntu-xenial.git ${KERNEL_SRC}
tar czf ${KERNELSRCTAR} --exclude .git ${KERNEL_SRC} 
 
 check_gcc: 
diff --git a/changelog.Debian b/changelog.Debian
index d0a5340..eba21d4 100644
--- a/changelog.Debian
+++ b/changelog.Debian
@@ -1,3 +1,11 @@
+pve-kernel (4.4.21-69) unstable; urgency=medium
+
+  * update to Ubuntu-4.4.0-43.63
+
+  * fix #927: IPoIB performance regression
+
+ -- Proxmox Support Team   Fri, 14 Oct 2016 08:59:12 +0200
+
 pve-kernel (4.4.21-68) unstable; urgency=medium
 
   * update to Ubuntu-4.4.0-42.62
diff --git a/ubuntu-xenial.tgz b/ubuntu-xenial.tgz
index 2db7bd1..c10402b 100644
Binary files a/ubuntu-xenial.tgz and b/ubuntu-xenial.tgz differ
-- 
2.1.4




Re: [pve-devel] [PATCH kernel] Fix #927: add IPoIB performance regression fix

2016-10-14 Thread Michael Rasmussen
On Thu, 13 Oct 2016 16:49:30 +0200
Wolfgang Bumiller  wrote:

> Fixes kernel bug #111921
> ---
>  ...ck-the-IB-LL-address-into-the-hard-header.patch | 365 +
>  Makefile                                           |   2 +
>  2 files changed, 367 insertions(+)
>  create mode 100644 IB-ipoib-move-back-the-IB-LL-address-into-the-hard-header.patch
> 
I wonder if this is the problem I have been facing with NFS over IB
since upgrading to 4.x. Stability and performance problems so bad that
I have temporarily disabled this storage and configured NFS over the
Ethernet NICs instead.

-- 
Hilsen/Regards
Michael Rasmussen

--
/usr/games/fortune -es says:
It's gonna be alright,
It's almost midnight,
And I've got two more bottles of wine.




Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Michael Rasmussen
On Fri, 14 Oct 2016 07:42:38 +0200 (CEST)
Alexandre DERUMIER  wrote:

> 
> Also, currently, we have virtio-scsi-single. I don't know if a lot of users 
> already use it,
> but maybe it could be better to use it as an option   
> scsihw:virtio-scsi,type=generic|block,x=single
> 
> ?
I use virtio-scsi-single exclusively because of the huge performance
gain in comparison to virtio-scsi, so I can concur with that.

-- 
Hilsen/Regards
Michael Rasmussen


