Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-02-01 Thread Mini Trader
Perhaps it was too good to be true.

It seems that one of the parameters is being disregarded for CIFS shares.

1. When the system first starts and I download from my CIFS share, the
transfer rates are good, around 95 MB/s.
2. If I restart CIFS, the rates are good.
3. If I wait some time after the restart, or the system has been connected
for a few hours, and I attempt to download, the rates are bad, around
20 MB/s!
4. If I run iperf when the CIFS rates are bad, my speed is good, over
900 Mbit/s.

What is going on with CIFS that is causing the client to have slow download
speeds?  It's as if CIFS is forgetting the buffer values, because whenever I
restart it I get good downloads on my client again for a brief period.

I appreciate any suggestions.
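
A minimal sketch of how this could be narrowed down, using largely the same
commands that appear elsewhere in this thread (a diagnostic suggestion only,
not something reported by the original poster): check whether the tuned TCP
buffer properties are still in effect when the slowdown appears, then bounce
the SMB service so new connections inherit the current values.

ipadm show-prop -p send_buf,recv_buf,max_buf tcp   # confirm the tuned values are still set
svcadm restart smb/server                          # existing SMB connections keep their old buffers
svcs smb/server                                    # verify the service is back online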

On Fri, Jan 29, 2016 at 3:09 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Fri, 29 Jan 2016, Guenther Alka wrote:
>
>> With the default MTU of 1500 you can max out 1G networks, but on 10G you are
>> limited to about 300-400 MB/s.
>> With MTU 9000, which is supported on all of my switches and computers, the
>> SMB2 limit is near the limit of 10G.
>>
>
> Since this topic started about MoCA 2.0, it's worth mentioning that this
> consumer-grade networking technology might not adequately support large
> MTUs.  A particular Moca 2.0 device might support large MTUs, but this is
> likely atypical.
>
> Hardware that I am working with does support a somewhat elevated MTU (e.g.
> 2k) with Moca 2.0 but that is because we wrote code to support it and
> tested it between two units of our hardware.  With limited interoperability
> testing, we have not encountered other Moca 2.0 hardware which supports MTU
> over 1500 bytes.
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-29 Thread Guenther Alka
With the default MTU of 1500 you can max out 1G networks, but on 10G you are
limited to about 300-400 MB/s.
With MTU 9000, which is supported on all of my switches and computers, the
SMB2 limit is near the limit of 10G.


Higher MTU values may become interesting when 40G+ is more widely available;
that would be a problem for my switches.

More important is the question of whether OmniOS could be optimized by default
to be better prepared for 10G, e.g. the IP buffers or some NFS settings, besides
hotplug support for AHCI or the disk timeouts. This would mean minimally more
RAM for the OS and minimally better 1G performance, but it opens up the
potential of 10G.


Currently, if you want to use OmniOS, after setup from ISO/USB you must
- set up networking manually at the CLI (annoying; this could be handled in the installer)
- check the NIC driver config to see whether MTU 9000 is allowed there
- enable hotplug behaviour for AHCI
- reduce the disk timeouts, e.g. to the 7 s of TLER (far too high by default)
http://everycity.co.uk/alasdair/2011/05/adjusting-drive-timeouts-with-mdb-on-solaris-or-openindiana/
- modify the IP buffers and NFS settings for proper NFS/SMB performance
(a rough sketch of some of these steps follows below)
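
As a rough sketch only (the values and the link name ixgbe0 are placeholders,
not recommendations), the MTU, disk-timeout, and IP-buffer items above
correspond roughly to commands like:

dladm set-linkprop -p mtu=9000 ixgbe0       # jumbo frames; switch and driver must allow it
echo "set sd:sd_io_time=7" >> /etc/system   # shorter disk command timeout; takes effect after a reboot
ipadm set-prop -p max_buf=4194304 tcp
ipadm set-prop -p send_buf=1048576 tcp
ipadm set-prop -p recv_buf=1048576 tcp      # buffer values as used elsewhere in this thread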

While there is no "global best setting", the current OmniOS defaults are
worse than suboptimal.
If someone compares a default OmniOS against a BSD or Linux system, the
OmniOS results are far below its potential.


Even this MoCA problem would not have come up with larger IP buffers
by default.



Gea

On 28.01.2016 at 22:40, Dale Ghent wrote:

For what it's worth, the max MTU for X540 (and X520, and X550) is 15.5k. You 
can nearly double the frame size that you used in your tests, switch and the 
MacOS ixgbe driver allowing, of course.



On Jan 28, 2016, at 4:20 PM, Günther Alka  wrote:

I have done some tests on different tuning options (network, disk, service,
and client related),
mainly with 10G Ethernet in mind, but this may give some ideas about the options (on
the new 151017 bloody).

http://napp-it.org/doc/downloads/performance_smb2.pdf


On 28.01.2016 at 21:15, Mini Trader wrote:

I most definitely will.  Any other tunables worth looking at or can most of 
these issues be fixed by send/receive buffer size?

This was a nice crash course on how TCP Window sizes can affect your data 
throughput!

On Thu, Jan 28, 2016 at 2:49 PM, Dan McDonald  wrote:


On Jan 28, 2016, at 2:44 PM, Mini Trader  wrote:

Problem has been resolved :)


Makes sense.  Those settings are only inherited by new TCP connections.  Sorry 
I missed a good chunk of this thread, but you pretty much figured it all out.

And you should check out this bloody cycle... SMB2 is on it, and it may help 
you further.  Or you can wait until r151018, but early testing is why we have 
bloody.  :)

Dan






--
H  f   G
Hochschule für Gestaltung
university of design

Schwäbisch Gmünd
Rektor-Klaus Str. 100
73525 Schwäbisch Gmünd

Guenther Alka, Dipl.-Ing. (FH)
Leiter des Rechenzentrums
head of computer center

Tel 07171 602 627
Fax 07171 69259
guenther.a...@hfg-gmuend.de
http://rz.hfg-gmuend.de

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-29 Thread Bob Friesenhahn

On Fri, 29 Jan 2016, Guenther Alka wrote:

With the default MTU of 1500 you can max out 1G networks, but on 10G you are
limited to about 300-400 MB/s.
With MTU 9000, which is supported on all of my switches and computers, the
SMB2 limit is near the limit of 10G.


Since this topic started about MoCA 2.0, it's worth mentioning that 
this consumer-grade networking technology might not adequately 
support large MTUs.  A particular MoCA 2.0 device might support large 
MTUs, but this is likely atypical.


Hardware that I am working with does support a somewhat elevated MTU 
(e.g. 2k) with MoCA 2.0, but that is because we wrote code to support 
it and tested it between two units of our hardware.  In our limited 
interoperability testing, we have not encountered other MoCA 2.0 
hardware which supports an MTU over 1500 bytes.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-28 Thread Mini Trader
Turns out that running svcadm restart smb/server after tuning the send and
receive buffers has fixed the problem.  I can now transfer at nearly 1 GbE speed
both up and down!

Problem has been resolved :)
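
For anyone skimming the archive, the working sequence described here boils
down to the buffer properties shown further down in this thread plus the
service restart, roughly:

ipadm set-prop -p max_buf=4194304 tcp
ipadm set-prop -p send_buf=1048576 tcp
ipadm set-prop -p recv_buf=1048576 tcp
svcadm restart smb/server    # new SMB/TCP connections inherit the larger buffers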

On Thu, Jan 28, 2016 at 2:30 PM, Mini Trader 
wrote:

> Is there a way to adjust the default Window Size for CIFS or NFS?
>
> On Thu, Jan 28, 2016 at 1:39 PM, Mini Trader 
> wrote:
>
>> I also tried the following, which seems to have improved iperf speeds,
>> but I am still getting the same CIFS speeds.
>>
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p
>> recv_buf=1048576 tcp
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p
>> send_buf=1048576 tcp
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p
>> max_buf=4194304 tcp
>>
>>
>> 
>> Server listening on TCP port 5001
>> TCP window size:  977 KByte
>> 
>> 
>> Client connecting to storage1.midway, TCP port 5001
>> TCP window size:  977 KByte
>> 
>> [  4] local 10.255.0.141 port 33452 connected with 10.255.0.15 port 5001
>> [ ID] Interval   Transfer Bandwidth
>> [  4]  0.0- 1.0 sec   106 MBytes   892 Mbits/sec
>> [  4]  1.0- 2.0 sec   111 MBytes   928 Mbits/sec
>> [  4]  2.0- 3.0 sec   108 MBytes   904 Mbits/sec
>> [  4]  3.0- 4.0 sec   109 MBytes   916 Mbits/sec
>> [  4]  4.0- 5.0 sec   110 MBytes   923 Mbits/sec
>> [  4]  5.0- 6.0 sec   110 MBytes   919 Mbits/sec
>> [  4]  6.0- 7.0 sec   110 MBytes   919 Mbits/sec
>> [  4]  7.0- 8.0 sec   105 MBytes   884 Mbits/sec
>> [  4]  8.0- 9.0 sec   109 MBytes   915 Mbits/sec
>> [  4]  9.0-10.0 sec   111 MBytes   928 Mbits/sec
>> [  4]  0.0-10.0 sec  1.06 GBytes   912 Mbits/sec
>> [  4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 50899
>> [  4]  0.0- 1.0 sec  97.5 MBytes   818 Mbits/sec
>> [  4]  1.0- 2.0 sec   110 MBytes   923 Mbits/sec
>> [  4]  2.0- 3.0 sec  49.3 MBytes   414 Mbits/sec
>> [  4]  3.0- 4.0 sec  98.0 MBytes   822 Mbits/sec
>> [  4]  4.0- 5.0 sec  96.7 MBytes   811 Mbits/sec
>> [  4]  5.0- 6.0 sec  99.7 MBytes   836 Mbits/sec
>> [  4]  6.0- 7.0 sec   103 MBytes   861 Mbits/sec
>> [  4]  7.0- 8.0 sec   101 MBytes   851 Mbits/sec
>> [  4]  8.0- 9.0 sec   104 MBytes   876 Mbits/sec
>> [  4]  9.0-10.0 sec   104 MBytes   876 Mbits/sec
>> [  4]  0.0-10.0 sec   966 MBytes   808 Mbits/sec
>>
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p
>> recv_buf tcp
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p
>> send_buf tcp
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p max_buf
>> tcp
>>
>> 
>> Server listening on TCP port 5001
>> TCP window size:  977 KByte
>> 
>> 
>> Client connecting to storage1.midway, TCP port 5001
>> TCP window size:  977 KByte
>> 
>> [  4] local 10.255.0.141 port 33512 connected with 10.255.0.15 port 5001
>> [ ID] Interval   Transfer Bandwidth
>> [  4]  0.0- 1.0 sec  35.2 MBytes   296 Mbits/sec
>> [  4]  1.0- 2.0 sec  35.0 MBytes   294 Mbits/sec
>> [  4]  2.0- 3.0 sec  34.2 MBytes   287 Mbits/sec
>> [  4]  3.0- 4.0 sec  33.4 MBytes   280 Mbits/sec
>> [  4]  4.0- 5.0 sec  34.1 MBytes   286 Mbits/sec
>> [  4]  5.0- 6.0 sec  35.2 MBytes   296 Mbits/sec
>> [  4]  6.0- 7.0 sec  35.4 MBytes   297 Mbits/sec
>> [  4]  7.0- 8.0 sec  34.4 MBytes   288 Mbits/sec
>> [  4]  8.0- 9.0 sec  35.0 MBytes   294 Mbits/sec
>> [  4]  9.0-10.0 sec  33.4 MBytes   280 Mbits/sec
>> [  4]  0.0-10.0 sec   346 MBytes   289 Mbits/sec
>> [  4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 41435
>> [  4]  0.0- 1.0 sec  57.6 MBytes   483 Mbits/sec
>> [  4]  1.0- 2.0 sec  87.2 MBytes   732 Mbits/sec
>> [  4]  2.0- 3.0 sec  99.3 MBytes   833 Mbits/sec
>> [  4]  3.0- 4.0 sec  99.5 MBytes   835 Mbits/sec
>> [  4]  4.0- 5.0 sec   100 MBytes   842 Mbits/sec
>> [  4]  5.0- 6.0 sec   103 MBytes   866 Mbits/sec
>> [  4]  6.0- 7.0 sec   100 MBytes   840 Mbits/sec
>> [  4]  7.0- 8.0 sec  98.7 MBytes   828 Mbits/sec
>> [  4]  8.0- 9.0 sec   101 MBytes   847 Mbits/sec
>> [  4]  9.0-10.0 sec   105 MBytes   882 Mbits/sec
>> [  4]  0.0-10.0 sec   954 MBytes   799 Mbits/sec
>>
>>
>> On Thu, Jan 28, 2016 at 11:34 AM, Mini Trader 
>> wrote:
>>
>>> Thank you for all the responses! I've run some more detailed tests using
>>> iperf 2.  The results that I see are in line with the transfer rates, so they
>>> describe the behavior that I am seeing.
>>>
>>> Note I used a laptop on the same connection as the desktop, so that there would
>>> be a basis to compare 

Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-28 Thread Bob Friesenhahn

On Thu, 28 Jan 2016, Mini Trader wrote:


Turns out that running svcadm restart smb/server after tuning the send and 
receive buffers has fixed the problem.  I can now
transfer at nearly 1 GbE speed both up and down!
Problem has been resolved :)


The next problem you may encounter is that MoCA is basically 
half-duplex, so performance will suffer with two-way traffic.  MoCA is 
not at all like Ethernet, although it passes Ethernet frames.  It 
"bundles" multiple frames which happen to be going to the same place, 
because it seems to be slow to turn the pipe around.
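
One quick way to see this effect (a sketch, not from the original message) is
to run iperf 2 in dual-test mode so both directions are measured at once:

iperf -s                                  # on the OmniOS box
iperf -c storage1.midway -d -i 1 -t 10    # on the client; -d runs upstream and downstream simultaneously

The follow-up below appears to show exactly this pattern, with both directions
dropping well below their one-way rates.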


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-28 Thread Mini Trader
You are right about this.

Client connecting to storage1.midway, TCP port 5001
TCP window size:  977 KByte

[  4] local 10.255.0.141 port 14766 connected with 10.255.0.15 port 5001
[  5] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 55052
[ ID] Interval   Transfer Bandwidth
[  4]  0.0- 1.0 sec  87.2 MBytes   732 Mbits/sec
[  5]  0.0- 1.0 sec  17.6 MBytes   147 Mbits/sec
[  4]  1.0- 2.0 sec  78.4 MBytes   657 Mbits/sec
[  5]  1.0- 2.0 sec  33.4 MBytes   280 Mbits/sec
[  4]  2.0- 3.0 sec  69.5 MBytes   583 Mbits/sec
[  5]  2.0- 3.0 sec  34.7 MBytes   291 Mbits/sec
[  5]  3.0- 4.0 sec  31.8 MBytes   267 Mbits/sec
[  4]  3.0- 4.0 sec  68.1 MBytes   571 Mbits/sec
[  4]  4.0- 5.0 sec  71.9 MBytes   603 Mbits/sec
[  5]  4.0- 5.0 sec  31.9 MBytes   267 Mbits/sec
[  4]  5.0- 6.0 sec  72.1 MBytes   605 Mbits/sec
[  5]  5.0- 6.0 sec  30.5 MBytes   256 Mbits/sec
[  4]  6.0- 7.0 sec  74.0 MBytes   621 Mbits/sec
[  5]  6.0- 7.0 sec  30.3 MBytes   254 Mbits/sec
[  5]  7.0- 8.0 sec  31.0 MBytes   260 Mbits/sec
[  4]  7.0- 8.0 sec  77.8 MBytes   652 Mbits/sec
[  4]  8.0- 9.0 sec  74.9 MBytes   628 Mbits/sec
[  5]  8.0- 9.0 sec  33.5 MBytes   281 Mbits/sec
[  4]  9.0-10.0 sec  57.1 MBytes   479 Mbits/sec
[  4]  0.0-10.0 sec   731 MBytes   613 Mbits/sec
[  5]  9.0-10.0 sec  41.5 MBytes   348 Mbits/sec
[  5]  0.0-10.0 sec   318 MBytes   266 Mbits/sec

On Thu, Jan 28, 2016 at 4:58 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Thu, 28 Jan 2016, Mini Trader wrote:
>
> Turns out that running svcadm restart smb/server after tuning the send and
>> receive buffers has fixed the problem.  I can now
>> transfer at nearly 1 GbE speed both up and down!
>> Problem has been resolved :)
>>
>
> The next problem you may encounter is that MoCA is basically half-duplex
> so performance will suffer with two-way traffic.  MoCA is not at all like
> Ethernet although it passes Ethernet frames.  It "bundles" multiple frames
> which happen to be going to the same place, because it seems to be slow
> to turn the pipe around.
>
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-28 Thread Mini Trader
Is there a way to adjust the default Window Size for CIFS or NFS?

On Thu, Jan 28, 2016 at 1:39 PM, Mini Trader 
wrote:

> I also tried the following, which seems to have improved iperf speeds,
> but I am still getting the same CIFS speeds.
>
> root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p
> recv_buf=1048576 tcp
> root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p
> send_buf=1048576 tcp
> root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p
> max_buf=4194304 tcp
>
>
> 
> Server listening on TCP port 5001
> TCP window size:  977 KByte
> 
> 
> Client connecting to storage1.midway, TCP port 5001
> TCP window size:  977 KByte
> 
> [  4] local 10.255.0.141 port 33452 connected with 10.255.0.15 port 5001
> [ ID] Interval   Transfer Bandwidth
> [  4]  0.0- 1.0 sec   106 MBytes   892 Mbits/sec
> [  4]  1.0- 2.0 sec   111 MBytes   928 Mbits/sec
> [  4]  2.0- 3.0 sec   108 MBytes   904 Mbits/sec
> [  4]  3.0- 4.0 sec   109 MBytes   916 Mbits/sec
> [  4]  4.0- 5.0 sec   110 MBytes   923 Mbits/sec
> [  4]  5.0- 6.0 sec   110 MBytes   919 Mbits/sec
> [  4]  6.0- 7.0 sec   110 MBytes   919 Mbits/sec
> [  4]  7.0- 8.0 sec   105 MBytes   884 Mbits/sec
> [  4]  8.0- 9.0 sec   109 MBytes   915 Mbits/sec
> [  4]  9.0-10.0 sec   111 MBytes   928 Mbits/sec
> [  4]  0.0-10.0 sec  1.06 GBytes   912 Mbits/sec
> [  4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 50899
> [  4]  0.0- 1.0 sec  97.5 MBytes   818 Mbits/sec
> [  4]  1.0- 2.0 sec   110 MBytes   923 Mbits/sec
> [  4]  2.0- 3.0 sec  49.3 MBytes   414 Mbits/sec
> [  4]  3.0- 4.0 sec  98.0 MBytes   822 Mbits/sec
> [  4]  4.0- 5.0 sec  96.7 MBytes   811 Mbits/sec
> [  4]  5.0- 6.0 sec  99.7 MBytes   836 Mbits/sec
> [  4]  6.0- 7.0 sec   103 MBytes   861 Mbits/sec
> [  4]  7.0- 8.0 sec   101 MBytes   851 Mbits/sec
> [  4]  8.0- 9.0 sec   104 MBytes   876 Mbits/sec
> [  4]  9.0-10.0 sec   104 MBytes   876 Mbits/sec
> [  4]  0.0-10.0 sec   966 MBytes   808 Mbits/sec
>
> root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p recv_buf
> tcp
> root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p send_buf
> tcp
> root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p max_buf
> tcp
>
> 
> Server listening on TCP port 5001
> TCP window size:  977 KByte
> 
> 
> Client connecting to storage1.midway, TCP port 5001
> TCP window size:  977 KByte
> 
> [  4] local 10.255.0.141 port 33512 connected with 10.255.0.15 port 5001
> [ ID] Interval   Transfer Bandwidth
> [  4]  0.0- 1.0 sec  35.2 MBytes   296 Mbits/sec
> [  4]  1.0- 2.0 sec  35.0 MBytes   294 Mbits/sec
> [  4]  2.0- 3.0 sec  34.2 MBytes   287 Mbits/sec
> [  4]  3.0- 4.0 sec  33.4 MBytes   280 Mbits/sec
> [  4]  4.0- 5.0 sec  34.1 MBytes   286 Mbits/sec
> [  4]  5.0- 6.0 sec  35.2 MBytes   296 Mbits/sec
> [  4]  6.0- 7.0 sec  35.4 MBytes   297 Mbits/sec
> [  4]  7.0- 8.0 sec  34.4 MBytes   288 Mbits/sec
> [  4]  8.0- 9.0 sec  35.0 MBytes   294 Mbits/sec
> [  4]  9.0-10.0 sec  33.4 MBytes   280 Mbits/sec
> [  4]  0.0-10.0 sec   346 MBytes   289 Mbits/sec
> [  4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 41435
> [  4]  0.0- 1.0 sec  57.6 MBytes   483 Mbits/sec
> [  4]  1.0- 2.0 sec  87.2 MBytes   732 Mbits/sec
> [  4]  2.0- 3.0 sec  99.3 MBytes   833 Mbits/sec
> [  4]  3.0- 4.0 sec  99.5 MBytes   835 Mbits/sec
> [  4]  4.0- 5.0 sec   100 MBytes   842 Mbits/sec
> [  4]  5.0- 6.0 sec   103 MBytes   866 Mbits/sec
> [  4]  6.0- 7.0 sec   100 MBytes   840 Mbits/sec
> [  4]  7.0- 8.0 sec  98.7 MBytes   828 Mbits/sec
> [  4]  8.0- 9.0 sec   101 MBytes   847 Mbits/sec
> [  4]  9.0-10.0 sec   105 MBytes   882 Mbits/sec
> [  4]  0.0-10.0 sec   954 MBytes   799 Mbits/sec
>
>
> On Thu, Jan 28, 2016 at 11:34 AM, Mini Trader 
> wrote:
>
>> Thank you for all the responses! I've run some more detailed tests using
>> iperf 2.  The results that I see are in line with the transfer rates, so they
>> describe the behavior that I am seeing.
>>
>> Note I used a laptop on the same connection as the desktop, so that there would
>> be a basis to compare it to the desktop.
>>
>> For some reason the laptop has a limit of around 500-600 Mbit/s for its
>> downloads; regardless, the tests still seem to show the behavior
>> that I am seeing.  Note that Linux does not seem to have the same issues
>> that OmniOS does.  Additionally, OmniOS does not have the issue
>> when using a direct Ethernet connection.  One thing I can say about 

Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-28 Thread Dan McDonald

> On Jan 28, 2016, at 2:44 PM, Mini Trader  wrote:
> 
> Problem has been resolved :)
> 

Makes sense.  Those settings are only inherited by new TCP connections.  Sorry 
I missed a good chunk of this thread, but you pretty much figured it all out.

And you should check out this bloody cycle... SMB2 is on it, and it may help 
you further.  Or you can wait until r151018, but early testing is why we have 
bloody.  :)

Dan

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-28 Thread Mini Trader
I most definitely will.  Any other tunables worth looking at or can most of
these issues be fixed by send/receive buffer size?

This was a nice crash course on how TCP Window sizes can affect your data
throughput!

On Thu, Jan 28, 2016 at 2:49 PM, Dan McDonald  wrote:

>
> > On Jan 28, 2016, at 2:44 PM, Mini Trader 
> wrote:
> >
> > Problem has been resolved :)
> >
>
> Makes sense.  Those settings are only inherited by new TCP connections.
> Sorry I missed a good chunk of this thread, but you pretty much figured it
> all out.
>
> And you should check out this bloody cycle... SMB2 is on it, and it may
> help you further.  Or you can wait until r151018, but early testing is why
> we have bloody.  :)
>
> Dan
>
>
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-28 Thread Dale Ghent
For what it's worth, the max MTU for X540 (and X520, and X550) is 15.5k. You 
can nearly double the frame size that you used in your tests, switch and the 
MacOS ixgbe driver allowing, of course.
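
A hedged sketch of how that would look on OmniOS, not part of the original
message (the link name ixgbe0 is a placeholder, and the exact ceiling depends
on the driver build and on the switch):

dladm show-linkprop -p mtu ixgbe0        # show the current value and the allowed range
dladm set-linkprop -p mtu=15500 ixgbe0   # the IP interface may need to be unplumbed before this is accepted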


> On Jan 28, 2016, at 4:20 PM, Günther Alka  wrote:
> 
> I have done some tests on different tuning options (network, disk,
> service, and client related),
> mainly with 10G Ethernet in mind, but this may give some ideas about the options
> (on the new 151017 bloody).
> 
> http://napp-it.org/doc/downloads/performance_smb2.pdf
> 
> 
> On 28.01.2016 at 21:15, Mini Trader wrote:
>> I most definitely will.  Any other tunables worth looking at or can most of 
>> these issues be fixed by send/receive buffer size?
>> 
>> This was a nice crash course on how TCP Window sizes can affect your data 
>> throughput!
>> 
>> On Thu, Jan 28, 2016 at 2:49 PM, Dan McDonald  wrote:
>> 
>> > On Jan 28, 2016, at 2:44 PM, Mini Trader  wrote:
>> >
>> > Problem has been resolved :)
>> >
>> 
>> Makes sense.  Those settings are only inherited by new TCP connections.  
>> Sorry I missed a good chunk of this thread, but you pretty much figured it 
>> all out.
>> 
>> And you should check out this bloody cycle... SMB2 is on it, and it may help 
>> you further.  Or you can wait until r151018, but early testing is why we 
>> have bloody.  :)
>> 
>> Dan
>> 
>> 
>> 
>> 



___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-28 Thread Mini Trader
I also tried the following, which seems to have improved iperf speeds,
but I am still getting the same CIFS speeds.

root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p
recv_buf=1048576 tcp
root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p
send_buf=1048576 tcp
root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p
max_buf=4194304 tcp



Server listening on TCP port 5001
TCP window size:  977 KByte


Client connecting to storage1.midway, TCP port 5001
TCP window size:  977 KByte

[  4] local 10.255.0.141 port 33452 connected with 10.255.0.15 port 5001
[ ID] Interval   Transfer Bandwidth
[  4]  0.0- 1.0 sec   106 MBytes   892 Mbits/sec
[  4]  1.0- 2.0 sec   111 MBytes   928 Mbits/sec
[  4]  2.0- 3.0 sec   108 MBytes   904 Mbits/sec
[  4]  3.0- 4.0 sec   109 MBytes   916 Mbits/sec
[  4]  4.0- 5.0 sec   110 MBytes   923 Mbits/sec
[  4]  5.0- 6.0 sec   110 MBytes   919 Mbits/sec
[  4]  6.0- 7.0 sec   110 MBytes   919 Mbits/sec
[  4]  7.0- 8.0 sec   105 MBytes   884 Mbits/sec
[  4]  8.0- 9.0 sec   109 MBytes   915 Mbits/sec
[  4]  9.0-10.0 sec   111 MBytes   928 Mbits/sec
[  4]  0.0-10.0 sec  1.06 GBytes   912 Mbits/sec
[  4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 50899
[  4]  0.0- 1.0 sec  97.5 MBytes   818 Mbits/sec
[  4]  1.0- 2.0 sec   110 MBytes   923 Mbits/sec
[  4]  2.0- 3.0 sec  49.3 MBytes   414 Mbits/sec
[  4]  3.0- 4.0 sec  98.0 MBytes   822 Mbits/sec
[  4]  4.0- 5.0 sec  96.7 MBytes   811 Mbits/sec
[  4]  5.0- 6.0 sec  99.7 MBytes   836 Mbits/sec
[  4]  6.0- 7.0 sec   103 MBytes   861 Mbits/sec
[  4]  7.0- 8.0 sec   101 MBytes   851 Mbits/sec
[  4]  8.0- 9.0 sec   104 MBytes   876 Mbits/sec
[  4]  9.0-10.0 sec   104 MBytes   876 Mbits/sec
[  4]  0.0-10.0 sec   966 MBytes   808 Mbits/sec

root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p recv_buf
tcp
root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p send_buf
tcp
root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p max_buf tcp


Server listening on TCP port 5001
TCP window size:  977 KByte


Client connecting to storage1.midway, TCP port 5001
TCP window size:  977 KByte

[  4] local 10.255.0.141 port 33512 connected with 10.255.0.15 port 5001
[ ID] Interval   Transfer Bandwidth
[  4]  0.0- 1.0 sec  35.2 MBytes   296 Mbits/sec
[  4]  1.0- 2.0 sec  35.0 MBytes   294 Mbits/sec
[  4]  2.0- 3.0 sec  34.2 MBytes   287 Mbits/sec
[  4]  3.0- 4.0 sec  33.4 MBytes   280 Mbits/sec
[  4]  4.0- 5.0 sec  34.1 MBytes   286 Mbits/sec
[  4]  5.0- 6.0 sec  35.2 MBytes   296 Mbits/sec
[  4]  6.0- 7.0 sec  35.4 MBytes   297 Mbits/sec
[  4]  7.0- 8.0 sec  34.4 MBytes   288 Mbits/sec
[  4]  8.0- 9.0 sec  35.0 MBytes   294 Mbits/sec
[  4]  9.0-10.0 sec  33.4 MBytes   280 Mbits/sec
[  4]  0.0-10.0 sec   346 MBytes   289 Mbits/sec
[  4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 41435
[  4]  0.0- 1.0 sec  57.6 MBytes   483 Mbits/sec
[  4]  1.0- 2.0 sec  87.2 MBytes   732 Mbits/sec
[  4]  2.0- 3.0 sec  99.3 MBytes   833 Mbits/sec
[  4]  3.0- 4.0 sec  99.5 MBytes   835 Mbits/sec
[  4]  4.0- 5.0 sec   100 MBytes   842 Mbits/sec
[  4]  5.0- 6.0 sec   103 MBytes   866 Mbits/sec
[  4]  6.0- 7.0 sec   100 MBytes   840 Mbits/sec
[  4]  7.0- 8.0 sec  98.7 MBytes   828 Mbits/sec
[  4]  8.0- 9.0 sec   101 MBytes   847 Mbits/sec
[  4]  9.0-10.0 sec   105 MBytes   882 Mbits/sec
[  4]  0.0-10.0 sec   954 MBytes   799 Mbits/sec


On Thu, Jan 28, 2016 at 11:34 AM, Mini Trader 
wrote:

> Thank you for all the responses! I've run some more detailed tests using
> iperf 2.  The results that I see are in line with the transfer rates, so they
> describe the behavior that I am seeing.
>
> Note I used a laptop on the same connection as the desktop, so that there would
> be a basis to compare it to the desktop.
>
> For some reason the laptop has a limit of around 500-600 Mbit/s for its
> downloads; regardless, the tests still seem to show the behavior
> that I am seeing.  Note that Linux does not seem to have the same issues
> that OmniOS does.  Additionally, OmniOS does not have the issue
> when using a direct Ethernet connection.  One thing I can say about Linux
> is that its downloads on the adapters are less than its uploads, which
> is the complete opposite of OmniOS.  This Linux behavior is not seen when
> using Ethernet.
>
> Both Linux and OmniOS are running on ESXi 6U1.  OmniOS is using the vmxnet
> driver.
>
> The adapters being used are Adaptec ECB6200.  These are bonded Moca 

Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-28 Thread Mini Trader
Thank you for all the responses! I've run some more detailed tests using
iperf 2.  The results that I see are in line with the transfer rates, so they
describe the behavior that I am seeing.

Note I used a laptop on the same connection as the desktop, so that there would be
a basis to compare it to the desktop.

For some reason the laptop has a limit of around 500-600 Mbit/s for its
downloads; regardless, the tests still seem to show the behavior
that I am seeing.  Note that Linux does not seem to have the same issues
that OmniOS does.  Additionally, OmniOS does not have the issue
when using a direct Ethernet connection.  One thing I can say about Linux
is that its downloads on the adapters are less than its uploads, which
is the complete opposite of OmniOS.  This Linux behavior is not seen when
using Ethernet.

Both Linux and OmniOS are running on ESXi 6U1.  OmniOS is using the vmxnet
driver.

The adapters being used are Adaptec ECB6200.  These are bonded Moca 2.0
adapters and are running the latest firmware.

Source Machine: Desktop
Connection: Adapter
Windows <-> OmniOS

Server listening on TCP port 5001
TCP window size:  977 KByte


Client connecting to storage1, TCP port 5001
TCP window size:  977 KByte

[  4] local 10.255.0.141 port 31595 connected with 10.255.0.15 port 5001
[ ID] Interval   Transfer Bandwidth
[  4]  0.0- 1.0 sec  34.9 MBytes   293 Mbits/sec
[  4]  1.0- 2.0 sec  35.0 MBytes   294 Mbits/sec
[  4]  2.0- 3.0 sec  35.2 MBytes   296 Mbits/sec
[  4]  3.0- 4.0 sec  34.4 MBytes   288 Mbits/sec
[  4]  4.0- 5.0 sec  34.5 MBytes   289 Mbits/sec
[  4]  0.0- 5.0 sec   174 MBytes   292 Mbits/sec
[  4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 33341
[  4]  0.0- 1.0 sec  46.2 MBytes   388 Mbits/sec
[  4]  1.0- 2.0 sec   101 MBytes   849 Mbits/sec
[  4]  2.0- 3.0 sec   104 MBytes   872 Mbits/sec
[  4]  3.0- 4.0 sec   101 MBytes   851 Mbits/sec
[  4]  4.0- 5.0 sec   102 MBytes   855 Mbits/sec
[  4]  0.0- 5.0 sec   457 MBytes   763 Mbits/sec

Source Machine: Desktop
Connection: Adapter
Windows <-> Linux

Server listening on TCP port 5001
TCP window size:  977 KByte


Client connecting to media.midway, TCP port 5001
TCP window size:  977 KByte

[  4] local 10.255.0.141 port 31602 connected with 10.255.0.73 port 5001
[ ID] Interval   Transfer Bandwidth
[  4]  0.0- 1.0 sec   108 MBytes   902 Mbits/sec
[  4]  1.0- 2.0 sec   111 MBytes   929 Mbits/sec
[  4]  2.0- 3.0 sec   111 MBytes   928 Mbits/sec
[  4]  3.0- 4.0 sec   106 MBytes   892 Mbits/sec
[  4]  4.0- 5.0 sec   109 MBytes   918 Mbits/sec
[  4]  0.0- 5.0 sec   545 MBytes   914 Mbits/sec
[  4] local 10.255.0.141 port 5001 connected with 10.255.0.73 port 55045
[  4]  0.0- 1.0 sec  67.0 MBytes   562 Mbits/sec
[  4]  1.0- 2.0 sec  75.6 MBytes   634 Mbits/sec
[  4]  2.0- 3.0 sec  75.1 MBytes   630 Mbits/sec
[  4]  3.0- 4.0 sec  74.5 MBytes   625 Mbits/sec
[  4]  4.0- 5.0 sec  75.7 MBytes   635 Mbits/sec
[  4]  0.0- 5.0 sec   368 MBytes   616 Mbits/sec


Machine: Laptop
Connection: Adapter
Windows <-> OmniOS (notice the same ~35 MB/s cap)


Server listening on TCP port 5001
TCP window size:  977 KByte


Client connecting to storage1.midway, TCP port 5001
TCP window size:  977 KByte

[  4] local 10.255.0.54 port 57487 connected with 10.255.0.15 port 5001
[ ID] Interval   Transfer Bandwidth
[  4]  0.0- 1.0 sec  35.5 MBytes   298 Mbits/sec
[  4]  1.0- 2.0 sec  35.0 MBytes   294 Mbits/sec
[  4]  2.0- 3.0 sec  35.0 MBytes   294 Mbits/sec
[  4]  3.0- 4.0 sec  34.2 MBytes   287 Mbits/sec
[  4]  4.0- 5.0 sec  33.9 MBytes   284 Mbits/sec
[  4]  0.0- 5.0 sec   174 MBytes   291 Mbits/sec
[  4] local 10.255.0.54 port 5001 connected with 10.255.0.15 port 40779
[  4]  0.0- 1.0 sec  28.8 MBytes   242 Mbits/sec
[  4]  1.0- 2.0 sec  55.8 MBytes   468 Mbits/sec
[  4]  2.0- 3.0 sec  43.7 MBytes   366 Mbits/sec
[  4]  3.0- 4.0 sec  50.7 MBytes   425 Mbits/sec
[  4]  4.0- 5.0 sec  52.7 MBytes   442 Mbits/sec
[  4]  0.0- 5.0 sec   233 MBytes   389 Mbits/sec

Machine: Laptop
Connection: Adapter
Windows <-> Linux (no issue on upload, same as desktop)

Server listening on TCP port 5001
TCP window size:  977 KByte


Client connecting to media.midway, TCP port 5001
TCP window size:  977 KByte

Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-27 Thread Bob Friesenhahn

On Wed, 27 Jan 2016, Mini Trader wrote:


Slow CIFS Writes when using Moca 2.0 Adapter.

I am experiencing this only under OmniOS.  I do not see this in Windows or 
Linux.

I have a ZFS CIFS share setup which can easily do writes that would saturate a 
1GBe connection.

My problem appears to be related somehow to the interaction between OmniOS and 
ECB6200 Moca 2.0 adapters.

1. If I write to my OmniOS CIFS share using Ethernet, my speeds up/down are 
around 110 MB/s - good

2. If I write to my share using the same source but over the adapter, my speeds 
are around 35 MB/s - problem


MoCA has a 3.0+ millisecond latency (I typically see 3.5ms when using 
ping).  This latency is fairly large compared with typical hard drive 
latencies and vastly higher than Ethernet.  There is nothing which can 
be done about this latency.


Unbonded MoCA 2.0 throughput for streaming data is typically 
500Mbit/second, and bonded (two channels) MoCA 2.0 doubles that (the 
claimed specs are of course higher than this and higher speeds can be 
measured under ideal conditions).  This means that typical MoCA 2.0 
(not bonded) achieves a bit less than half of what gigabit Ethernet 
achieves when streaming data over TCP.
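
As a back-of-the-envelope check (not in the original message), the latency
above also sets the TCP window needed to keep such a link full:

  bandwidth-delay product = 1 Gbit/s x 3.5 ms
                          = 125,000,000 bytes/s x 0.0035 s
                          = roughly 437 KB of data in flight

so send/receive buffers well below that will cap throughput long before the
MoCA link itself does, which matches the buffer tuning that resolved the issue
elsewhere in this thread.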



3. If I read from the share using the same device over the adapter, my speeds 
are around 110 MB/s - good


Reading is normally more of a streaming operation, so TCP will 
stream rather well.



4. If I set up a share on a Windows machine and write to it from the same source 
using the adapter, the speeds are
around 110 MB/s.  The Windows machine is actually a VM whose disks are backed 
by a ZFS NFS share on the same
machine.


This seems rather good. Quite a lot depends on what the server side 
does.  If it commits each write to disk before accepting more, then 
the write speed would suffer.



So basically the issue only takes place when writing to the OmniOS CIFS share 
using the adapter; if the adapter is
not used, then the write speed is perfect.


If the MoCA adaptor supports bonded mode, then it is useful to know 
that usually bonded mode needs to be enabled.  Is it possible that the 
Windows driver is enabling bonded mode but the OmniOS driver does not?


Try running a TCP streaming benchmark (program to program) to see what 
the peak network throughput is in each case.
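
A minimal sketch with iperf 2 (which is what ends up being used elsewhere in
this thread); the host name and intervals are just examples:

iperf -s                         # on the OmniOS server
iperf -c storage1 -i 1 -t 10     # on the client; per-second throughput toward the server
iperf -c storage1 -i 1 -t 10 -r  # -r repeats the test in the reverse direction afterwards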



Any ideas why/how a MoCA 2.0 adapter, which is just designed to convert an 
Ethernet signal to coax and back to
Ethernet, would cause issues with writes on OmniOS when the exact same share 
has no issues when using an actual
Ethernet connection?  More importantly, why is this happening with OmniOS CIFS 
and not anything else?


Latency, synchronous writes, and possibly bonding not enabled.  Also, 
OmniOS r151016 or later is needed to get the latest CIFS implementation 
(based on Nexenta changes), which has been reported on this list to be 
quite a lot faster than the older one.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

2016-01-27 Thread Dan McDonald
And even more will be on the way with the current bloody cycle and r151018
(i.e., SMB2).

Dan

Sent from my iPhone (typos, autocorrect, and all)

> On Jan 27, 2016, at 10:35 PM, Bob Friesenhahn  
> wrote:
> 
> enabled.  Also, OmniOS r151016 or later is needed to get the latest CIFS 
> implementation (based on Nexenta changes), which has been reported on this 
> list to be quite a lot faster than the older one.
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss