Re: [OmniOS-discuss] OmniOS / Nappit slow iscsi / ZFS performance with Proxmox

2015-09-03 Thread Steffen Wagner

Hi Volker,

thanks for this information! I am now running the storage connection 
over a single NIC (10 GbE).


Regards,
Steffen

--
Steffen Wagner
August-Bebel-Straße 61
D-68199 Mannheim

M +49 (0) 1523 3544688
E m...@steffenwagner.com
I http://wagnst.de

Get my public GnuPG key:
mail  steffenwagner  com
http://http-keys.gnupg.net/pks/lookup?op=get&search=0x8A3406FB4688EE99

On 2015-08-30 17:22, v...@bb-c.de wrote:

The systems are currently connected through a 1 GBit link for
general WAN and LAN communication and a 20 GBit link (two 10 GBit
links aggregated) for the iSCSI communication.


This may or may not make a difference, but if you do link aggregation
and then use the link from only one client IP, the LACP hashing will
put that single flow onto just one of the two aggregated links.  In
the case of iSCSI I think it is better to configure the two links
separately and then use multipathing.
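
For illustration, a minimal sketch of what that could look like on the
Debian/Proxmox initiator side, using open-iscsi and multipath-tools
(the portal addresses 192.168.10.1 / 192.168.11.1 and the IQN are
placeholders; substitute your own):

  # Discover and log in to the target over each 10 GbE link separately:
  iscsiadm -m discovery -t sendtargets -p 192.168.10.1
  iscsiadm -m discovery -t sendtargets -p 192.168.11.1
  iscsiadm -m node -T iqn.2010-09.org.napp-it:vm-storage -p 192.168.10.1 --login
  iscsiadm -m node -T iqn.2010-09.org.napp-it:vm-storage -p 192.168.11.1 --login

  # Let multipathd aggregate the two sessions into one multipath device:
  apt-get install multipath-tools
  multipath -ll    # verify that both paths show up for the LUN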


Regards -- Volker



Re: [OmniOS-discuss] OmniOS / Nappit slow iscsi / ZFS performance with Proxmox

2015-09-03 Thread Steffen Wagner

Hi Michael,

- I am running several VLANs to split the traffic: one VLAN for 
cluster communication (all nodes are connected through an LACP channel) 
and one VLAN (also an LACP channel) for VM traffic.
The storage and app servers are directly attached with 10 GbE, so all 
networks are definitely separated.


- I have set the MTU to 9000 and enabled jumbo frames on my HP 
ProCurve switch.
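
For reference, on the OmniOS side an MTU of 9000 is typically set per
link with dladm, and the end-to-end path can be checked with a
non-fragmenting ping from the Proxmox host (the link name ixgbe0 and
the portal address are placeholders):

  # On OmniOS (the link may need to be unplumbed before the MTU changes):
  dladm set-linkprop -p mtu=9000 ixgbe0
  dladm show-linkprop -p mtu ixgbe0

  # From the Linux/Proxmox side, verify jumbo frames end to end:
  # 8972 = 9000 bytes MTU - 20 (IP header) - 8 (ICMP header)
  ping -M do -s 8972 192.168.10.1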


Meanwhile I got some performance improvements by setting the pool's 
recordsize to 64k and enabling the writeback cache for all LUs.
I now get about 250 MB/s in random tests (the load on tank is then 
around 40-50%), which is quite good and okay for me.
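
For anyone following along, those two changes roughly correspond to
these commands (the LU GUID is a placeholder; note that recordsize only
affects newly written file data, and zvol-backed LUs use volblocksize,
which is fixed at creation time):

  zfs set recordsize=64k tank

  # COMSTAR: wcd = "write cache disabled", so false enables writeback cache
  stmfadm modify-lu -p wcd=false <LU-GUID>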


If someone has more helpful advice on tuning COMSTAR / ZFS / ... 
parameters, I would highly appreciate it!


Thank you very much,
Steffen

--
Steffen Wagner
August-Bebel-Straße 61
D-68199 Mannheim

M +49 (0) 1523 3544688
E m...@steffenwagner.com
I http://wagnst.de

Get my public GnuPG key:
mail  steffenwagner  com
http://http-keys.gnupg.net/pks/lookup?op=get&search=0x8A3406FB4688EE99

On 2015-08-31 00:45, Michael Talbott wrote:

This may be a given, but since you didn't mention it in your network
topology: make sure the 1 G LAN link is on a different subnet than the
20 G iSCSI link. Otherwise iSCSI traffic might be flowing through the
1 G link. Also, jumbo frames can help with iSCSI.
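
A quick way to check which link carries the iSCSI traffic from the
Proxmox host (the address is a placeholder for the storage portal):

  # Should report the 10 GbE interface, not the 1 GbE LAN interface:
  ip route get 192.168.10.1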

Additionally, dd speed tests from /dev/zero to a ZFS dataset are highly
misleading if you have any compression enabled on it (since only
512 bytes of disk are actually written for nearly any amount of
consecutive zeros).
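
To rule this out, one can check whether compression is active and
repeat the test with incompressible data, e.g. (paths are placeholders;
/dev/urandom itself can be CPU-bound, hence the intermediate file):

  zfs get compression,compressratio tank

  # Pre-generate incompressible data, then time the actual write:
  dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=1024
  dd if=/tmp/rand.bin of=/tank/test bs=1M count=1024 conv=fdatasync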

Michael
Sent from my iPhone

On Aug 30, 2015, at 7:17 AM, Steffen Wagner wrote:


Hi everyone!

I just set up a small network with 2 nodes:

* 1 proxmox host on Debian Wheezy hosting KVM VMs

* 1 napp-it host on OmniOS stable

The systems are currently connected through a 1 GBit link for
general WAN and LAN communication and a 20 GBit link (two 10 GBit
links aggregated) for the iSCSI communication.

Both connections' bandwidth was confirmed using iperf.

The napp-it system currently has one pool (tank) consisting of 2
mirror vdevs. The 4 disks are SAS3 disks connected to a SAS2
backplane and directly attached (no expander) to the LSI SAS3008
(9300-8i) HBA.

COMSTAR is running on that machine with one target (vm-storage) in one
target group (vm-storage-group).
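
For context, a setup like this is typically created along these lines
(a sketch only; the zvol path is a placeholder, and the target IQN is
assigned by itadm):

  itadm create-target                        # create the iSCSI target
  stmfadm create-tg vm-storage-group         # create the target group
  stmfadm offline-target <target-iqn>        # target must be offline to join
  stmfadm add-tg-member -g vm-storage-group <target-iqn>
  stmfadm online-target <target-iqn>
  stmfadm create-lu /dev/zvol/rdsk/tank/vm-100-disk-1   # back an LU with a zvol
  stmfadm add-view -t vm-storage-group <lu-guid>        # export the LU to the group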

Proxmox has this iSCSI target configured as a "ZFS over iSCSI"
storage using a block size of 8k and the "Write cache" option
enabled.
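
Such a storage entry usually looks roughly like this in
/etc/pve/storage.cfg (a sketch; the portal address and IQN are
placeholders, and the write cache option corresponds to the absence of
a nowritecache line):

  zfs: vm-storage
          blocksize 8k
          iscsiprovider comstar
          pool tank
          portal 192.168.10.1
          target iqn.2010-09.org.napp-it:vm-storage
          content images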

This is where the problem starts:

dd if=/dev/zero of=/tank/test bs=1G count=20 conv=fdatasync

This dd test yields around 300 MB/s directly on the napp-it system.

dd if=/dev/zero of=/home/test bs=1G count=20 conv=fdatasync

This dd test yields around 100 MB/s on a VM with its disk on the
napp-it system, connected via iSCSI.

The problem here is not the absolute numbers, as these tests are not
precise benchmarks; the problem is the difference between the two
values. I expected at least something around 80% of the local
bandwidth, but it is usually around 30% or less.

What I noticed during the tests: when running the test locally on
the napp-it system, all disks are fully utilized (observed using
iostat -x 1). When running the test inside a VM, disk utilization
barely reaches 30% (which seems to match the bandwidth reported
by dd).

These 30% are only reached if the logical unit of the VM disk has
the writeback cache enabled; disabling it results in 20-30 MB/s with
the dd test mentioned above. Enabling it also increases the disk
utilization.
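
The current cache setting of each LU can be checked on the napp-it
system like this (writeback cache shows as enabled when wcd=false):

  # Prints a "Writeback Cache" line per logical unit:
  stmfadm list-lu -v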

The same values show up during disk migration. Migrating one disk
results in low speed and low disk utilization; migrating several
disks in parallel will eventually drive the disks to 100%
utilization.

I also tested an NFS share as VM storage in Proxmox. Running the same
test inside a VM on the NFS share yields results around 200-220
MB/s. This is better (and shows that the traffic is going over the
fast link between the servers), but still not satisfying, as I still
lose about a third of the local bandwidth.

I am fairly new to the Solaris and ZFS world, so any help is greatly
appreciated.

Thanks in advance!

Steffen






Re: [OmniOS-discuss] openssh on omnios

2015-09-03 Thread Paul B. Henson
> From: Dan McDonald
> Sent: Tuesday, August 11, 2015 7:16 AM
> 
> I think the packaging update may be a bit more complicated than just
> pushing out openssh, but I don't think it's untenable.

Just wondering if there was any further news on this. Joyent just pushed out
a change to their illumos branch that removes SunSSH completely and replaces
it with OpenSSH, which made me think about it :). They actually tweaked
upstream OpenSSH to accept some of the SunSSH-specific options and to be
more compatible as a drop-in replacement in a couple of other minor ways.
Personally I don't care about that; vanilla OpenSSH would work for me :).
But their changes might be of interest to other OmniOS users, or matter if
OpenSSH becomes the default OmniOS ssh implementation rather than an
optional post-install replacement.

Thanks.



Re: [OmniOS-discuss] openssh on omnios

2015-09-03 Thread Dan McDonald
I knew Joyent was working on this.  I hope they upstream it soon.  I have 7.1p1 
in the upcoming bloody, with only the light patching already in omnios-build, 
plus the recent Lauri T changes.

Dan

Sent from my iPhone (typos, autocorrect, and all)

On Sep 3, 2015, at 5:37 PM, Paul B. Henson  wrote:

>> From: Dan McDonald
>> Sent: Tuesday, August 11, 2015 7:16 AM
>> 
>> I think the packaging update may be a bit more complicated than just
>> pushing out openssh, but I don't think it's untenable.
> 
> Just wondering if there was any further news on this. Joyent just pushed out
> a change to their illumos branch that removes SunSSH completely and replaces
> it with OpenSSH, which made me think about it :). They actually tweaked
> upstream OpenSSH to accept some of the SunSSH-specific options and to be
> more compatible as a drop-in replacement in a couple of other minor ways.
> Personally I don't care about that; vanilla OpenSSH would work for me :).
> But their changes might be of interest to other OmniOS users, or matter if
> OpenSSH becomes the default OmniOS ssh implementation rather than an
> optional post-install replacement.
> 
> Thanks.
> 


Re: [OmniOS-discuss] openssh on omnios

2015-09-03 Thread Paul B. Henson
> From: Dan McDonald
> Sent: Thursday, September 03, 2015 2:56 PM
> 
> I knew Joyent was working on this.  I hope they upstream it soon.  I have
> 7.1p1 in the upcoming bloody, with only the light patching already in
> omnios-build, plus the recent Lauri T changes.

Is upstream going to be amenable to ditching SunSSH? As I recall from the
last time the topic was broached, there were a fair number of people who did
not want to lose the SunSSH-specific changes (RBAC, and a couple of other
things I don't recall offhand). Perhaps as SunSSH gets more and more
obsolete, with only an occasional interoperability band-aid backported,
there will be less resistance.


