On 28 January 2015 at 15:03, Donald J. <dona...@4email.net> wrote:

> Not sure about massive, but fairly large, 16.5M rows.
> The initial build and index setup took 8.1G diskspace on Linux.
> I will get a packet trace.
>

That sounds like enough data to at least be worth worrying about these things.

The benefit of a large MSS is fewer packets traveling through the stack,
and so less CPU overhead. But the difference is not so large that it would
break the project. I think modern distributions set the QDIO buffers to the
maximum already; otherwise you would need that too. Make sure the TCP window
size matches the effective packet size. With a low-latency connection like
HiperSockets it is tempting to ignore that, but "pretty close" is not good
enough to keep Nagle's algorithm from kicking in, unless the window is very
large. If the application takes the system default, you can set it with
tcp_wmem, but many applications have their own configuration options that
override the defaults even when you don't want that.
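To make the "application overrides the defaults" point concrete, here is a
minimal Python sketch of what such an application typically does: setting
SO_SNDBUF explicitly disables the kernel's tcp_wmem autotuning for that
socket, and TCP_NODELAY switches off Nagle's algorithm. The buffer size
below is an arbitrary illustration, not a tuned value for HiperSockets.

```python
import socket

def tuned_socket(sndbuf=64 * 1024):
    """Create a TCP socket with an explicit send buffer and Nagle disabled.

    Sketch only: the socket options are standard Linux/POSIX, but the
    64 KB buffer size is just an example value.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # An explicit SO_SNDBUF turns off the kernel's tcp_wmem autotuning
    # for this socket (Linux internally doubles the requested value).
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, sndbuf)
    # TCP_NODELAY disables Nagle's algorithm, so small writes are not
    # held back waiting for an ACK of previously sent data.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s

sock = tuned_socket()
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
sock.close()
```

This is also why tuning tcp_wmem system-wide can appear to have no effect:
any application doing the above has already opted out of the defaults.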

Now if you're sending a lot of data, it must come from somewhere, so you
really need to look at all resources. It is quite possible the network is
not the bottleneck; I have frequently investigated assumed network issues
that turned out to be disk I/O. We'll be happy to give a hand.

Rob
http://www.velocitysoftware.com/

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
For more information on Linux on System z, visit
http://wiki.linuxvm.org/
