On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
The patch includes a lot of drc2.patch and drc3.patch, so don't try
to apply it to a patched kernel. Hopefully it will apply cleanly to
vanilla sources.
The patch has been minimally tested.
Well, it's taken
On 19.03.2013 05:29, Garrett Wollman wrote:
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
I've attached a patch that has assorted changes.
So I've done some preliminary testing on a slightly modified form of
this patch, and it appears to have no major
Garrett Wollman wrote:
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
I've attached a patch that has assorted changes.
So I've done some preliminary testing on a slightly modified form of
this patch, and it appears to have no major issues. However, I'm
I wrote:
Garrett Wollman wrote:
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
I've attached a patch that has assorted changes.
So I've done some preliminary testing on a slightly modified form of
this patch, and it appears to have no major
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
I've attached a patch that has assorted changes.
So I've done some preliminary testing on a slightly modified form of
this patch, and it appears to have no major issues. However, I'm
still waiting for my user
Garrett Wollman wrote:
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
Basically, this patch:
- allows setting of the tcp timeout via vfs.nfsd.tcpcachetimeo
(I'd suggest you go down to a few minutes instead of 12hrs)
- allows TCP caching to be
Garrett Wollman wrote:
On Mon, 11 Mar 2013 21:25:45 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
To be honest, I'd consider seeing a lot of non-empty receive queues
for TCP connections to the NFS server to be an indication that it is
near/at its load limit. (Sure, if you do
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
Basically, this patch:
- allows setting of the tcp timeout via vfs.nfsd.tcpcachetimeo
(I'd suggest you go down to a few minutes instead of 12hrs)
- allows TCP caching to be disabled by setting
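Whatever the disable knob is called (the preview cuts off here), the two tunables named in the thread would be set roughly like this, as an illustration only (the values are invented, and the units of tcpcachetimeo are assumed to be seconds):

	# /etc/sysctl.conf -- illustrative values, not from the thread
	vfs.nfsd.tcpcachetimeo=300   # cache TCP replies a few minutes instead of 12hrs
	vfs.nfsd.tcphighwater=5000   # hypothetical cap; shrink it to trim the DRC sooner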
On 11.03.2013 00:46, Rick Macklem wrote:
Andre Oppermann wrote:
On 10.03.2013 03:22, Rick Macklem wrote:
Garrett Wollman wrote:
Also, it occurs to me that this strategy is subject to livelock. To
put backpressure on the clients, it is far better to get them to
stop
sending (by advertising a
In article 513db550.5010...@freebsd.org, an...@freebsd.org writes:
Garrett's problem is receive side specific and NFS can't do much about it.
Unless, of course, NFS is holding on to received mbufs for a longer time.
Well, I have two problems: one is running out of mbufs (caused, we
think, by
How large are you configuring your rings Garrett? Maybe if you tried
reducing them?
Jack
On Mon, Mar 11, 2013 at 9:05 AM, Garrett Wollman
woll...@hergotha.csail.mit.edu wrote:
In article 513db550.5010...@freebsd.org, an...@freebsd.org writes:
Garrett's problem is receive side specific and
In article
cafoybck-m+71ma7w2ixqnrffn55pe6sotgknzm1atahqe5s...@mail.gmail.com,
jfvo...@gmail.com writes:
How large are you configuring your rings Garrett? Maybe if you tried
reducing them?
I'm not configuring them at all. (Well, hmmm, I did limit the number
of queues to 6 (per interface, it
Then you are using the default ring size, which is 2K descriptors; you
might try reducing to 1K
and see how that works.
Jack
On Mon, Mar 11, 2013 at 10:09 AM, Garrett Wollman
woll...@hergotha.csail.mit.edu wrote:
In article
cafoybck-m+71ma7w2ixqnrffn55pe6sotgknzm1atahqe5s...@mail.gmail.com,
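A sketch of what trying Jack's suggestion might look like (the tunable name hw.ixgbe.rxd is an assumption based on the 9.x-era driver sources; verify it against your driver version):

	# /boot/loader.conf
	hw.ixgbe.rxd=1024    # half the 2K-descriptor default Jack mentions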
On 11.03.2013 17:05, Garrett Wollman wrote:
In article 513db550.5010...@freebsd.org, an...@freebsd.org writes:
Garrett's problem is receive side specific and NFS can't do much about it.
Unless, of course, NFS is holding on to received mbufs for a longer time.
Well, I have two problems: one
In article 513e3d75.7010...@freebsd.org, an...@freebsd.org writes:
On 11.03.2013 17:05, Garrett Wollman wrote:
Well, I have two problems: one is running out of mbufs (caused, we
think, by ixgbe requiring 9k clusters when it doesn't actually need
them), and one is livelock. Allowing potentially
Andre Oppermann wrote:
On 11.03.2013 17:05, Garrett Wollman wrote:
In article 513db550.5010...@freebsd.org, an...@freebsd.org writes:
Garrett's problem is receive side specific and NFS can't do much
about it.
Unless, of course, NFS is holding on to received mbufs for a longer
time.
Garrett Wollman wrote:
In article 513db550.5010...@freebsd.org, an...@freebsd.org writes:
Garrett's problem is receive side specific and NFS can't do much
about it.
Unless, of course, NFS is holding on to received mbufs for a longer
time.
The NFS server only holds onto receive mbufs until
On Mon, 11 Mar 2013 21:25:45 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
To be honest, I'd consider seeing a lot of non-empty receive queues
for TCP connections to the NFS server to be an indication that it is
near/at its load limit. (Sure, if you do netstat a lot, you will
On 09.03.2013 01:47, Rick Macklem wrote:
Garrett Wollman wrote:
On Fri, 08 Mar 2013 08:54:14 +0100, Andre Oppermann
an...@freebsd.org said:
[stuff I wrote deleted]
You have an amd64 kernel running HEAD or 9.x?
Yes, these are 9.1 with some patches to reduce mutex contention on the
NFS
On 10.03.2013 07:04, Garrett Wollman wrote:
On Fri, 8 Mar 2013 12:13:28 -0800, Jack Vogel jfvo...@gmail.com said:
Yes, in the past the code was in this form, it should work fine, Garrett,
just make sure
the 4K pool is large enough.
[Andre Oppermann's patch:]
if (adapter->max_frame_size <=
On 10.03.2013 03:22, Rick Macklem wrote:
Garrett Wollman wrote:
Also, it occurs to me that this strategy is subject to livelock. To
put backpressure on the clients, it is far better to get them to stop
sending (by advertising a small receive window) than to accept their
traffic but queue it for
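A minimal sketch of the mechanism being advocated here, assuming the standard sockbuf API (illustrative only, not code from any patch in this thread): shrinking the reservation on the server socket's receive buffer makes TCP advertise a smaller window, so clients stall in their own send path instead of the server accepting and queueing everything.

	/* so is the NFS server's socket; small_hiwat is a hypothetical
	 * value on the order of a few requests' worth of data */
	SOCKBUF_LOCK(&so->so_rcv);
	(void)sbreserve_locked(&so->so_rcv, small_hiwat, so, curthread);
	SOCKBUF_UNLOCK(&so->so_rcv);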
Andre Oppermann wrote:
On 10.03.2013 03:22, Rick Macklem wrote:
Garrett Wollman wrote:
Also, it occurs to me that this strategy is subject to livelock. To
put backpressure on the clients, it is far better to get them to
stop
sending (by advertising a small receive window) than to accept
Andre Oppermann wrote:
On 10.03.2013 07:04, Garrett Wollman wrote:
On Fri, 8 Mar 2013 12:13:28 -0800, Jack Vogel jfvo...@gmail.com
said:
Yes, in the past the code was in this form, it should work fine,
Garrett,
just make sure
the 4K pool is large enough.
[Andre Oppermann's
Andre Oppermann wrote:
On 09.03.2013 01:47, Rick Macklem wrote:
Garrett Wollman wrote:
On Fri, 08 Mar 2013 08:54:14 +0100, Andre Oppermann
an...@freebsd.org said:
[stuff I wrote deleted]
You have an amd64 kernel running HEAD or 9.x?
Yes, these are 9.1 with some patches to reduce
Garrett Wollman wrote:
On Fri, 8 Mar 2013 19:47:13 -0500 (EST), Rick Macklem
rmack...@uoguelph.ca said:
If reducing the size to 4K doesn't fix the problem, you might want
to
consider shrinking the tunable vfs.nfsd.tcphighwater and suffering
the increased CPU overhead (and some
On Sat, 9 Mar 2013 11:50:30 -0500 (EST), Rick Macklem rmack...@uoguelph.ca
said:
I suspect this indicates that it isn't mutex contention, since the
threads would block waiting for the mutex for that case, I think?
No, because our mutexes are adaptive, so each thread spins for a while
before
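Roughly, the adaptive behavior Garrett describes looks like this (pseudocode, not the real mtx(9) internals; lock_owner() and block_on_turnstile() are invented names):

	while (!mtx_trylock(m)) {
		if (TD_IS_RUNNING(lock_owner(m)))
			cpu_spinwait();         /* owner on CPU: spin, burning CPU */
		else
			block_on_turnstile(m);  /* owner off CPU: sleep */
	}

so heavy contention shows up as busy CPU time rather than as threads blocked in the scheduler.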
In article 20795.29370.194678.963...@hergotha.csail.mit.edu, I wrote:
On Sat, 9 Mar 2013 11:50:30 -0500 (EST), Rick Macklem
rmack...@uoguelph.ca said:
I've thought about this. My concern is that the separate thread might
not keep up with the trimming demand. If that occurred, the cache would
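The idea being debated, as a hypothetical sketch (all names invented for illustration; this is not the actual patch): a kthread wakes periodically and trims the DRC back toward its high-water mark, and the worry quoted above is that clients can add entries faster than one thread trims them between wakeups.

	static void
	drc_trim_thread(void *arg)
	{
		for (;;) {
			if (drc_cache_size > drc_highwater)
				drc_trim_to_highwater();
			pause("drctrim", hz);   /* wake about once a second */
		}
	}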
Garrett Wollman wrote:
In article 20795.29370.194678.963...@hergotha.csail.mit.edu, I
wrote:
On Sat, 9 Mar 2013 11:50:30 -0500 (EST), Rick Macklem
rmack...@uoguelph.ca said:
I've thought about this. My concern is that the separate thread
might
not keep up with the trimming demand. If that
Garrett Wollman wrote:
On Sat, 9 Mar 2013 11:50:30 -0500 (EST), Rick Macklem
rmack...@uoguelph.ca said:
I suspect this indicates that it isn't mutex contention, since the
threads would block waiting for the mutex for that case, I think?
No, because our mutexes are adaptive, so each
On Fri, 8 Mar 2013 12:13:28 -0800, Jack Vogel jfvo...@gmail.com said:
Yes, in the past the code was in this form, it should work fine, Garrett,
just make sure
the 4K pool is large enough.
[Andre Oppermann's patch:]
if (adapter->max_frame_size <= 2048)
	adapter->rx_mbuf_sz = MCLBYTES;
-
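Reconstructed for context (a sketch based on the ixgbe driver of that era, not the verbatim patch): the stock driver picks the smallest cluster zone that fits one whole frame, so a 9000-byte MTU lands in the 9k zone, and the change under discussion caps the choice at the page-sized pool instead:

	/* stock ladder: smallest zone that fits max_frame_size */
	if (adapter->max_frame_size <= 2048)
		adapter->rx_mbuf_sz = MCLBYTES;        /* 2k */
	else if (adapter->max_frame_size <= 4096)
		adapter->rx_mbuf_sz = MJUMPAGESIZE;    /* 4k page */
	else if (adapter->max_frame_size <= 9216)
		adapter->rx_mbuf_sz = MJUM9BYTES;      /* 9k */
	else
		adapter->rx_mbuf_sz = MJUM16BYTES;     /* 16k */

	/* proposed: never go above the 4k page pool; larger frames
	 * then span several descriptors and are chained via EOP */
	if (adapter->max_frame_size <= 2048)
		adapter->rx_mbuf_sz = MCLBYTES;
	else
		adapter->rx_mbuf_sz = MJUMPAGESIZE;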
On Thu, Mar 7, 2013 at 11:54 PM, YongHyeon PYUN pyu...@gmail.com wrote:
On Fri, Mar 08, 2013 at 02:10:41AM -0500, Garrett Wollman wrote:
I have a machine (actually six of them) with an Intel dual-10G NIC on
the motherboard. Two of them (so far) are connected to a network
using jumbo
On Thu, Mar 7, 2013 at 11:54 PM, Andre Oppermann an...@freebsd.org wrote:
On 08.03.2013 08:10, Garrett Wollman wrote:
I have a machine (actually six of them) with an Intel dual-10G NIC on
the motherboard. Two of them (so far) are connected to a network
using jumbo frames, with an MTU a
On Fri, Mar 08, 2013 at 12:27:37AM -0800, Jack Vogel wrote:
On Thu, Mar 7, 2013 at 11:54 PM, YongHyeon PYUN pyu...@gmail.com wrote:
On Fri, Mar 08, 2013 at 02:10:41AM -0500, Garrett Wollman wrote:
I have a machine (actually six of them) with an Intel dual-10G NIC on
the motherboard.
On Fri, 8 Mar 2013 00:31:18 -0800, Jack Vogel jfvo...@gmail.com said:
I am not strongly opposed to trying the 4k mbuf pool for all larger sizes,
Garrett, maybe if you would try that on your system and see if that helps
you, I could envision making this a tunable at some point perhaps?
If you
On Fri, 08 Mar 2013 08:54:14 +0100, Andre Oppermann an...@freebsd.org said:
[stuff I wrote deleted]
You have an amd64 kernel running HEAD or 9.x?
Yes, these are 9.1 with some patches to reduce mutex contention on the
NFS server's replay cache.
Jumbo pages come directly from the kernel_map
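The distinction Andre is drawing, illustrated with the standard mbuf(9) allocator (not code from the thread): page-sized jumbo clusters are a single page each, while 9k clusters need physically contiguous multi-page chunks backed by the kernel_map, which is what makes them expensive and fragmentation-prone under load.

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/mbuf.h>

	struct mbuf *m4k, *m9k;

	m4k = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, MJUMPAGESIZE); /* one 4k page */
	m9k = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, MJUM9BYTES);   /* contiguous pages */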
On 08.03.2013 18:04, Garrett Wollman wrote:
On Fri, 8 Mar 2013 00:31:18 -0800, Jack Vogel jfvo...@gmail.com said:
I am not strongly opposed to trying the 4k mbuf pool for all larger sizes,
Garrett, maybe if you would try that on your system and see if that helps
you, I could envision making
Yes, in the past the code was in this form, it should work fine, Garrett,
just make sure
the 4K pool is large enough.
I've actually been thinking about making the ring mbuf allocation sparse,
and what type
of strategy could be used. Right now I'm thinking of implementing a tunable
threshold,
and
On Fri, 8 Mar 2013 12:13:28 -0800, Jack Vogel jfvo...@gmail.com said:
Yes, in the past the code was in this form, it should work fine, Garrett,
just make sure
the 4K pool is large enough.
I take it then that the hardware works in the traditional way, and
just keeps on using buffers until the
Yes, the write-back descriptor has a bit in the status field that says
whether it's EOP (end of packet) or not.
Jack
Jack
On Fri, Mar 8, 2013 at 12:28 PM, Garrett Wollman woll...@freebsd.org wrote:
On Fri, 8 Mar 2013 12:13:28 -0800, Jack Vogel jfvo...@gmail.com said:
Yes, in the past the code was in this
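A simplified sketch of the receive path Jack is describing (modeled on the ixgbe rx loop of that era; setup and error handling omitted): with 4k buffers, a 9k frame spans three descriptors, and only the last has the EOP status bit set.

	staterr = le32toh(cur->wb.upper.status_error);
	mp->m_len = le16toh(cur->wb.upper.length);

	if (head == NULL)
		head = tail = mp;              /* first buffer of the frame */
	else {
		tail->m_next = mp;             /* chain a continuation buffer */
		tail = mp;
	}
	if (staterr & IXGBE_RXD_STAT_EOP) {    /* frame complete */
		head->m_pkthdr.len = framelen;
		(*ifp->if_input)(ifp, head);   /* pass it up the stack */
		head = tail = NULL;
	}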
Garrett Wollman wrote:
On Fri, 08 Mar 2013 08:54:14 +0100, Andre Oppermann
an...@freebsd.org said:
[stuff I wrote deleted]
You have an amd64 kernel running HEAD or 9.x?
Yes, these are 9.1 with some patches to reduce mutex contention on the
NFS server's replay cache.
The cached
On Fri, 8 Mar 2013 19:47:13 -0500 (EST), Rick Macklem rmack...@uoguelph.ca
said:
If reducing the size to 4K doesn't fix the problem, you might want to
consider shrinking the tunable vfs.nfsd.tcphighwater and suffering
the increased CPU overhead (and some increased mutex contention) of
I have a machine (actually six of them) with an Intel dual-10G NIC on
the motherboard. Two of them (so far) are connected to a network
using jumbo frames, with an MTU a little under 9k, so the ixgbe driver
allocates 32,000 9k clusters for its receive rings. I have noticed,
on the machine that is
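(For scale, assuming the defaults that come up later in the thread, 2048 descriptors per ring and 8 queues per port: 2 ports x 8 queues x 2048 descriptors = 32,768 buffers, matching the ~32,000 above, and at 9k each that is roughly 288 MB wired down in receive rings before any traffic arrives.)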
On 08.03.2013 08:10, Garrett Wollman wrote:
I have a machine (actually six of them) with an Intel dual-10G NIC on
the motherboard. Two of them (so far) are connected to a network
using jumbo frames, with an MTU a little under 9k, so the ixgbe driver
allocates 32,000 9k clusters for its receive
On Fri, Mar 08, 2013 at 02:10:41AM -0500, Garrett Wollman wrote:
I have a machine (actually six of them) with an Intel dual-10G NIC on
the motherboard. Two of them (so far) are connected to a network
using jumbo frames, with an MTU a little under 9k, so the ixgbe driver
allocates 32,000 9k