Turns out that running svcadm restart smb/server after tuning the TCP send
and receive buffers has fixed the problem.  I can now transfer at nearly
gigabit line rate both up and down!
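
For anyone who finds this later, the full sequence was the buffer tuning from
my earlier mail below, followed by the SMB restart (these exact values are
just what worked for me, not necessarily optimal):

root@storage1:~# ipadm set-prop -p recv_buf=1048576 tcp
root@storage1:~# ipadm set-prop -p send_buf=1048576 tcp
root@storage1:~# ipadm set-prop -p max_buf=4194304 tcp
root@storage1:~# svcadm restart smb/server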

Problem has been resolved :)
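
As a back-of-the-envelope check on why the buffers mattered: Bob measured
roughly 3.5 ms of round-trip latency on MoCA, and a single TCP stream can
keep at most (window / RTT) in flight.  Filling a gigabit link at that
latency needs a window of about

  (1,000,000,000 bits/sec / 8) x 0.0035 sec ~= 437 KB

which is far more than the default illumos send/receive buffers, so the
transfer was window-limited over the adapter but not over low-latency
Ethernet.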

On Thu, Jan 28, 2016 at 2:30 PM, Mini Trader <miniflowtra...@gmail.com>
wrote:

> Is there a way to adjust the default Window Size for CIFS or NFS?
>
> On Thu, Jan 28, 2016 at 1:39 PM, Mini Trader <miniflowtra...@gmail.com>
> wrote:
>
>> I also tried the following, which seems to have improved iperf speeds,
>> but I am still getting the same CIFS speeds.
>>
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p recv_buf=1048576 tcp
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p send_buf=1048576 tcp
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p max_buf=4194304 tcp
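>>
>> (To confirm the settings took, something like the following should print
>> the current values; I believe ipadm show-prop accepts a comma-separated
>> property list:)
>>
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm show-prop -p recv_buf,send_buf,max_buf tcp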
>>
>>
>> ------------------------------------------------------------
>> Server listening on TCP port 5001
>> TCP window size:  977 KByte
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> Client connecting to storage1.midway, TCP port 5001
>> TCP window size:  977 KByte
>> ------------------------------------------------------------
>> [  4] local 10.255.0.141 port 33452 connected with 10.255.0.15 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  4]  0.0- 1.0 sec   106 MBytes   892 Mbits/sec
>> [  4]  1.0- 2.0 sec   111 MBytes   928 Mbits/sec
>> [  4]  2.0- 3.0 sec   108 MBytes   904 Mbits/sec
>> [  4]  3.0- 4.0 sec   109 MBytes   916 Mbits/sec
>> [  4]  4.0- 5.0 sec   110 MBytes   923 Mbits/sec
>> [  4]  5.0- 6.0 sec   110 MBytes   919 Mbits/sec
>> [  4]  6.0- 7.0 sec   110 MBytes   919 Mbits/sec
>> [  4]  7.0- 8.0 sec   105 MBytes   884 Mbits/sec
>> [  4]  8.0- 9.0 sec   109 MBytes   915 Mbits/sec
>> [  4]  9.0-10.0 sec   111 MBytes   928 Mbits/sec
>> [  4]  0.0-10.0 sec  1.06 GBytes   912 Mbits/sec
>> [  4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 50899
>> [  4]  0.0- 1.0 sec  97.5 MBytes   818 Mbits/sec
>> [  4]  1.0- 2.0 sec   110 MBytes   923 Mbits/sec
>> [  4]  2.0- 3.0 sec  49.3 MBytes   414 Mbits/sec
>> [  4]  3.0- 4.0 sec  98.0 MBytes   822 Mbits/sec
>> [  4]  4.0- 5.0 sec  96.7 MBytes   811 Mbits/sec
>> [  4]  5.0- 6.0 sec  99.7 MBytes   836 Mbits/sec
>> [  4]  6.0- 7.0 sec   103 MBytes   861 Mbits/sec
>> [  4]  7.0- 8.0 sec   101 MBytes   851 Mbits/sec
>> [  4]  8.0- 9.0 sec   104 MBytes   876 Mbits/sec
>> [  4]  9.0-10.0 sec   104 MBytes   876 Mbits/sec
>> [  4]  0.0-10.0 sec   966 MBytes   808 Mbits/sec
>>
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p recv_buf tcp
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p send_buf tcp
>> root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p max_buf tcp
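>>
>> (With the properties back at their defaults, the client-to-server
>> direction falls back to the ~290 Mbit/sec cap:)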
>>
>> ------------------------------------------------------------
>> Server listening on TCP port 5001
>> TCP window size:  977 KByte
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> Client connecting to storage1.midway, TCP port 5001
>> TCP window size:  977 KByte
>> ------------------------------------------------------------
>> [  4] local 10.255.0.141 port 33512 connected with 10.255.0.15 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  4]  0.0- 1.0 sec  35.2 MBytes   296 Mbits/sec
>> [  4]  1.0- 2.0 sec  35.0 MBytes   294 Mbits/sec
>> [  4]  2.0- 3.0 sec  34.2 MBytes   287 Mbits/sec
>> [  4]  3.0- 4.0 sec  33.4 MBytes   280 Mbits/sec
>> [  4]  4.0- 5.0 sec  34.1 MBytes   286 Mbits/sec
>> [  4]  5.0- 6.0 sec  35.2 MBytes   296 Mbits/sec
>> [  4]  6.0- 7.0 sec  35.4 MBytes   297 Mbits/sec
>> [  4]  7.0- 8.0 sec  34.4 MBytes   288 Mbits/sec
>> [  4]  8.0- 9.0 sec  35.0 MBytes   294 Mbits/sec
>> [  4]  9.0-10.0 sec  33.4 MBytes   280 Mbits/sec
>> [  4]  0.0-10.0 sec   346 MBytes   289 Mbits/sec
>> [  4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 41435
>> [  4]  0.0- 1.0 sec  57.6 MBytes   483 Mbits/sec
>> [  4]  1.0- 2.0 sec  87.2 MBytes   732 Mbits/sec
>> [  4]  2.0- 3.0 sec  99.3 MBytes   833 Mbits/sec
>> [  4]  3.0- 4.0 sec  99.5 MBytes   835 Mbits/sec
>> [  4]  4.0- 5.0 sec   100 MBytes   842 Mbits/sec
>> [  4]  5.0- 6.0 sec   103 MBytes   866 Mbits/sec
>> [  4]  6.0- 7.0 sec   100 MBytes   840 Mbits/sec
>> [  4]  7.0- 8.0 sec  98.7 MBytes   828 Mbits/sec
>> [  4]  8.0- 9.0 sec   101 MBytes   847 Mbits/sec
>> [  4]  9.0-10.0 sec   105 MBytes   882 Mbits/sec
>> [  4]  0.0-10.0 sec   954 MBytes   799 Mbits/sec
>>
>>
>> On Thu, Jan 28, 2016 at 11:34 AM, Mini Trader <miniflowtra...@gmail.com>
>> wrote:
>>
>>> Thank you for all the responses! I've run some more detailed tests using
>>> iperf 2.  The results are in line with the transfer rates I observe, so
>>> they reflect the behavior I have been describing.
>>>
>>> Note that I used a laptop on the same connection as the desktop, so that
>>> there would be a basis for comparison.
>>>
>>> For some reason the laptop tops out at around 500-600 Mbit/sec on
>>> downloads; regardless, the tests still show the behavior I am seeing.
>>> Note that Linux does not seem to have the issue that OmniOS does, and
>>> OmniOS does not have the issue over a direct Ethernet connection.  One
>>> thing I can say about Linux is that its downloads over the adapters are
>>> slower than its uploads, the complete opposite of OmniOS.  This Linux
>>> behavior is not seen over Ethernet.
>>>
>>> Both Linux and OmniOS are running on ESXi 6U1.  OmniOS is using the
>>> vmxnet driver.
>>>
>>> The adapters being used are Adaptec ECB6200.  These are bonded MoCA 2.0
>>> adapters running the latest firmware.
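>>>
>>> (Each listing below is one iperf 2 "tradeoff" run, which tests the upload
>>> direction first and then the download direction, so both appear in a
>>> single listing.  The invocation was along these lines; the exact flags
>>> are from memory:)
>>>
>>> iperf -c <server> -i 1 -t 5 -r -w 1000000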
>>>
>>> Source Machine: Desktop
>>> Connection: Adapter
>>> Windows <-> OmniOS
>>>
>>> Server listening on TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> ------------------------------------------------------------
>>> Client connecting to storage1, TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> [  4] local 10.255.0.141 port 31595 connected with 10.255.0.15 port 5001
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  4]  0.0- 1.0 sec  34.9 MBytes   293 Mbits/sec
>>> [  4]  1.0- 2.0 sec  35.0 MBytes   294 Mbits/sec
>>> [  4]  2.0- 3.0 sec  35.2 MBytes   296 Mbits/sec
>>> [  4]  3.0- 4.0 sec  34.4 MBytes   288 Mbits/sec
>>> [  4]  4.0- 5.0 sec  34.5 MBytes   289 Mbits/sec
>>> [  4]  0.0- 5.0 sec   174 MBytes   292 Mbits/sec
>>> [  4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 33341
>>> [  4]  0.0- 1.0 sec  46.2 MBytes   388 Mbits/sec
>>> [  4]  1.0- 2.0 sec   101 MBytes   849 Mbits/sec
>>> [  4]  2.0- 3.0 sec   104 MBytes   872 Mbits/sec
>>> [  4]  3.0- 4.0 sec   101 MBytes   851 Mbits/sec
>>> [  4]  4.0- 5.0 sec   102 MBytes   855 Mbits/sec
>>> [  4]  0.0- 5.0 sec   457 MBytes   763 Mbits/sec
>>>
>>> Source Machine: Desktop
>>> Connection: Adapter
>>> Windows <-> Linux
>>>
>>> Server listening on TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> ------------------------------------------------------------
>>> Client connecting to media.midway, TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> [  4] local 10.255.0.141 port 31602 connected with 10.255.0.73 port 5001
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  4]  0.0- 1.0 sec   108 MBytes   902 Mbits/sec
>>> [  4]  1.0- 2.0 sec   111 MBytes   929 Mbits/sec
>>> [  4]  2.0- 3.0 sec   111 MBytes   928 Mbits/sec
>>> [  4]  3.0- 4.0 sec   106 MBytes   892 Mbits/sec
>>> [  4]  4.0- 5.0 sec   109 MBytes   918 Mbits/sec
>>> [  4]  0.0- 5.0 sec   545 MBytes   914 Mbits/sec
>>> [  4] local 10.255.0.141 port 5001 connected with 10.255.0.73 port 55045
>>> [  4]  0.0- 1.0 sec  67.0 MBytes   562 Mbits/sec
>>> [  4]  1.0- 2.0 sec  75.6 MBytes   634 Mbits/sec
>>> [  4]  2.0- 3.0 sec  75.1 MBytes   630 Mbits/sec
>>> [  4]  3.0- 4.0 sec  74.5 MBytes   625 Mbits/sec
>>> [  4]  4.0- 5.0 sec  75.7 MBytes   635 Mbits/sec
>>> [  4]  0.0- 5.0 sec   368 MBytes   616 Mbits/sec
>>>
>>>
>>> Machine: Laptop
>>> Connection: Adapter
>>> Windows <-> OmniOS (notice the same ~35 MByte/sec cap)
>>>
>>> ------------------------------------------------------------
>>> Server listening on TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> ------------------------------------------------------------
>>> Client connecting to storage1.midway, TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> [  4] local 10.255.0.54 port 57487 connected with 10.255.0.15 port 5001
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  4]  0.0- 1.0 sec  35.5 MBytes   298 Mbits/sec
>>> [  4]  1.0- 2.0 sec  35.0 MBytes   294 Mbits/sec
>>> [  4]  2.0- 3.0 sec  35.0 MBytes   294 Mbits/sec
>>> [  4]  3.0- 4.0 sec  34.2 MBytes   287 Mbits/sec
>>> [  4]  4.0- 5.0 sec  33.9 MBytes   284 Mbits/sec
>>> [  4]  0.0- 5.0 sec   174 MBytes   291 Mbits/sec
>>> [  4] local 10.255.0.54 port 5001 connected with 10.255.0.15 port 40779
>>> [  4]  0.0- 1.0 sec  28.8 MBytes   242 Mbits/sec
>>> [  4]  1.0- 2.0 sec  55.8 MBytes   468 Mbits/sec
>>> [  4]  2.0- 3.0 sec  43.7 MBytes   366 Mbits/sec
>>> [  4]  3.0- 4.0 sec  50.7 MBytes   425 Mbits/sec
>>> [  4]  4.0- 5.0 sec  52.7 MBytes   442 Mbits/sec
>>> [  4]  0.0- 5.0 sec   233 MBytes   389 Mbits/sec
>>>
>>> Machine: Laptop
>>> Connection: Adapter
>>> Windows <-> Linux (no issue on upload, same as desktop)
>>>
>>> Server listening on TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> ------------------------------------------------------------
>>> Client connecting to media.midway, TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> [  4] local 10.255.0.54 port 57387 connected with 10.255.0.73 port 5001
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  4]  0.0- 1.0 sec   110 MBytes   919 Mbits/sec
>>> [  4]  1.0- 2.0 sec   110 MBytes   920 Mbits/sec
>>> [  4]  2.0- 3.0 sec   110 MBytes   921 Mbits/sec
>>> [  4]  3.0- 4.0 sec   110 MBytes   923 Mbits/sec
>>> [  4]  4.0- 5.0 sec   110 MBytes   919 Mbits/sec
>>> [  4]  0.0- 5.0 sec   548 MBytes   919 Mbits/sec
>>> [  4] local 10.255.0.54 port 5001 connected with 10.255.0.73 port 52723
>>> [  4]  0.0- 1.0 sec  49.8 MBytes   418 Mbits/sec
>>> [  4]  1.0- 2.0 sec  55.1 MBytes   462 Mbits/sec
>>> [  4]  2.0- 3.0 sec  55.1 MBytes   462 Mbits/sec
>>> [  4]  3.0- 4.0 sec  53.6 MBytes   449 Mbits/sec
>>> [  4]  4.0- 5.0 sec  56.9 MBytes   477 Mbits/sec
>>> [  4]  0.0- 5.0 sec   271 MBytes   454 Mbits/sec
>>>
>>> Machine: Laptop
>>> Connection: Ethernet
>>> Windows <-> OmniOS (No issues on upload)
>>> ------------------------------------------------------------
>>> Server listening on TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> ------------------------------------------------------------
>>> Client connecting to storage1.midway, TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> [  4] local 10.255.0.54 port 57858 connected with 10.255.0.15 port 5001
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  4]  0.0- 1.0 sec   113 MBytes   950 Mbits/sec
>>> [  4]  1.0- 2.0 sec   111 MBytes   928 Mbits/sec
>>> [  4]  2.0- 3.0 sec   109 MBytes   912 Mbits/sec
>>> [  4]  3.0- 4.0 sec   111 MBytes   931 Mbits/sec
>>> [  4]  4.0- 5.0 sec   106 MBytes   889 Mbits/sec
>>> [  4]  0.0- 5.0 sec   550 MBytes   921 Mbits/sec
>>> [  4] local 10.255.0.54 port 5001 connected with 10.255.0.15 port 42565
>>> [  4]  0.0- 1.0 sec  38.4 MBytes   322 Mbits/sec
>>> [  4]  1.0- 2.0 sec  68.9 MBytes   578 Mbits/sec
>>> [  4]  2.0- 3.0 sec  67.7 MBytes   568 Mbits/sec
>>> [  4]  3.0- 4.0 sec  66.7 MBytes   559 Mbits/sec
>>> [  4]  4.0- 5.0 sec  63.2 MBytes   530 Mbits/sec
>>> [  4]  0.0- 5.0 sec   306 MBytes   513 Mbits/sec
>>>
>>> Machine: Laptop
>>> Connection: Ethernet
>>> Windows <-> Linux (exact same speeds this time as OmniOS)
>>> ------------------------------------------------------------
>>> Server listening on TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> ------------------------------------------------------------
>>> Client connecting to media.midway, TCP port 5001
>>> TCP window size:  977 KByte
>>> ------------------------------------------------------------
>>> [  4] local 10.255.0.54 port 57966 connected with 10.255.0.73 port 5001
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  4]  0.0- 1.0 sec   110 MBytes   920 Mbits/sec
>>> [  4]  1.0- 2.0 sec   111 MBytes   932 Mbits/sec
>>> [  4]  2.0- 3.0 sec   111 MBytes   931 Mbits/sec
>>> [  4]  3.0- 4.0 sec   108 MBytes   902 Mbits/sec
>>> [  4]  4.0- 5.0 sec   106 MBytes   887 Mbits/sec
>>> [  4]  0.0- 5.0 sec   545 MBytes   913 Mbits/sec
>>> [  4] local 10.255.0.54 port 5001 connected with 10.255.0.73 port 52726
>>> [  4]  0.0- 1.0 sec  63.4 MBytes   532 Mbits/sec
>>> [  4]  1.0- 2.0 sec  62.9 MBytes   528 Mbits/sec
>>> [  4]  2.0- 3.0 sec  66.7 MBytes   560 Mbits/sec
>>> [  4]  3.0- 4.0 sec  65.3 MBytes   548 Mbits/sec
>>> [  4]  4.0- 5.0 sec  66.8 MBytes   560 Mbits/sec
>>> [  4]  0.0- 5.0 sec   326 MBytes   545 Mbits/sec
>>>
>>>
>>> On Wed, Jan 27, 2016 at 10:35 PM, Bob Friesenhahn <
>>> bfrie...@simple.dallas.tx.us> wrote:
>>>
>>>> On Wed, 27 Jan 2016, Mini Trader wrote:
>>>>
>>>>> Slow CIFS Writes when using MoCA 2.0 Adapter.
>>>>>
>>>>> I am experiencing this only under OmniOS.  I do not see this in
>>>>> Windows or Linux.
>>>>>
>>>>> I have a ZFS CIFS share setup which can easily do writes that would
>>>>> saturate a 1GBe connection.
>>>>>
>>>>> My problem appears to be related somehow to the interaction between
>>>>> OmniOS and ECB6200 Moca 2.0 adapters.
>>>>>
>>>>> 1. If I write to my OmniOS CIFS share over Ethernet, my speeds up/down
>>>>> are around 110 MB/sec - good
>>>>>
>>>>> 2. If I write to my share from the same source but over the adapter,
>>>>> my speeds are around 35 MB/sec - problem
>>>>>
>>>>
>>>> MoCA has 3.0+ milliseconds of latency (I typically see 3.5 ms when
>>>> using ping).  This latency is fairly large compared with typical hard
>>>> drive latencies and vastly higher than Ethernet's.  There is nothing
>>>> that can be done about this latency.
>>>>
>>>> Unbonded MoCA 2.0 throughput for streaming data is typically
>>>> 500 Mbit/second, and bonded (two channels) MoCA 2.0 doubles that (the
>>>> claimed specs are of course higher than this and higher speeds can be
>>>> measured under ideal conditions).  This means that typical MoCA 2.0 (not
>>>> bonded) achieves roughly half of what gigabit Ethernet achieves
>>>> when streaming data over TCP.
>>>>
>>>>> 3. If I read from the share using the same device over the adapter, my
>>>>> speeds are around 110 MB/sec - good
>>>>>
>>>>
>>>> Reading is normally more of a streaming operation, so TCP will stream
>>>> rather well.
>>>>
>>>>> 4. If I set up a share on a Windows machine and write to it from the
>>>>> same source using the adapter, the speeds are around 110 MB/sec.  The
>>>>> Windows machine is actually a VM whose disks are backed by a ZFS NFS
>>>>> share on the same machine.
>>>>>
>>>>
>>>> This seems rather good. Quite a lot depends on what the server side
>>>> does.  If it commits each write to disk before accepting more, then the
>>>> write speed would suffer.
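>>>>
>>>> (A quick way to check that on the OmniOS side is the dataset's sync
>>>> property, e.g. "zfs get sync poolname/share"; sync=always forces every
>>>> write to stable storage before it is acknowledged.)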
>>>>
>>>>> So basically the issue only takes place when writing to the OmniOS CIFS
>>>>> share using the adapter; if the adapter is not used, the write speed is
>>>>> perfect.
>>>>>
>>>>
>>>> If the MoCA adapter supports bonded mode, then it is useful to know
>>>> that usually bonded mode needs to be enabled.  Is it possible that the
>>>> Windows driver is enabling bonded mode but the OmniOS driver does not?
>>>>
>>>> Try running a TCP streaming benchmark (program to program) to see what
>>>> the peak network throughput is in each case.
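>>>>
>>>> (With iperf 2 that is roughly "iperf -s" on the server and
>>>> "iperf -c <server> -r" on the client, which exercises both directions.)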
>>>>
>>>>> Any ideas why/how a MoCA 2.0 adapter, which is just designed to convert
>>>>> an Ethernet signal to coax and back to Ethernet, would cause issues with
>>>>> writes on OmniOS when the exact same share has no issues over an actual
>>>>> Ethernet connection?  More importantly, why is this happening with
>>>>> OmniOS CIFS and not anything else?
>>>>>
>>>>
>>>> Latency, synchronous writes, and possibly bonding not being enabled.
>>>> Also, OmniOS r151016 or later is needed to get the latest CIFS
>>>> implementation (based on Nexenta changes), which has been reported on
>>>> this list to be quite a lot faster than the older one.
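>>>>
>>>> (Something like "cat /etc/release" on the OmniOS box will show which
>>>> release you are running.)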
>>>>
>>>> Bob
>>>> --
>>>> Bob Friesenhahn
>>>> bfrie...@simple.dallas.tx.us,
>>>> http://www.simplesystems.org/users/bfriesen/
>>>> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>>>
>>>
>>>
>>
>
_______________________________________________
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss
