Ivan Voras wrote:
On 20 October 2012 13:42, Nikolay Denev nde...@gmail.com wrote:
Here are the results from testing both patches :
http://home.totalterror.net/freebsd/nfstest/results.html
Both tests ran for about 14 hours (a bit too much, but I wanted to
compare different zfs
On Oct 23, 2012, at 2:36 AM, Rick Macklem rmack...@uoguelph.ca wrote:
Ivan Voras wrote:
On 20 October 2012 13:42, Nikolay Denev nde...@gmail.com wrote:
Here are the results from testing both patches :
http://home.totalterror.net/freebsd/nfstest/results.html
Both tests ran for about 14
On Oct 18, 2012, at 6:11 PM, Nikolay Denev nde...@gmail.com wrote:
On Oct 15, 2012, at 5:34 PM, Ivan Voras ivo...@freebsd.org wrote:
On 15 October 2012 16:31, Nikolay Denev nde...@gmail.com wrote:
On Oct 15, 2012, at 2:52 PM, Ivan Voras ivo...@freebsd.org wrote:
On 20 October 2012 13:42, Nikolay Denev nde...@gmail.com wrote:
Here are the results from testing both patches :
http://home.totalterror.net/freebsd/nfstest/results.html
Both tests ran for about 14 hours (a bit too much, but I wanted to compare
different zfs recordsize settings),
and
On Oct 20, 2012, at 3:11 PM, Ivan Voras ivo...@freebsd.org wrote:
On 20 October 2012 13:42, Nikolay Denev nde...@gmail.com wrote:
Here are the results from testing both patches :
http://home.totalterror.net/freebsd/nfstest/results.html
Both tests ran for about 14 hours (a bit too much,
On Oct 20, 2012, at 4:00 PM, Nikolay Denev nde...@gmail.com wrote:
On Oct 20, 2012, at 3:11 PM, Ivan Voras ivo...@freebsd.org wrote:
On 20 October 2012 13:42, Nikolay Denev nde...@gmail.com wrote:
Here are the results from testing both patches :
On 20 October 2012 14:45, Rick Macklem rmack...@uoguelph.ca wrote:
Ivan Voras wrote:
I don't know how to interpret the rise in context switches; as this is
kernel code, I'd expect no context switches. I hope someone else can
explain.
Don't the mtx_lock() calls spin for a little while and
On Sat, Oct 20, 2012 at 3:28 PM, Ivan Voras ivo...@freebsd.org wrote:
On 20 October 2012 14:45, Rick Macklem rmack...@uoguelph.ca wrote:
Ivan Voras wrote:
I don't know how to interpret the rise in context switches; as this is
kernel code, I'd expect no context switches. I hope someone else
On Oct 20, 2012, at 10:45 PM, Outback Dingo outbackdi...@gmail.com wrote:
On Sat, Oct 20, 2012 at 3:28 PM, Ivan Voras ivo...@freebsd.org wrote:
On 20 October 2012 14:45, Rick Macklem rmack...@uoguelph.ca wrote:
Ivan Voras wrote:
I don't know how to interpret the rise in context switches;
Outback Dingo wrote:
On Sat, Oct 20, 2012 at 3:28 PM, Ivan Voras ivo...@freebsd.org
wrote:
On 20 October 2012 14:45, Rick Macklem rmack...@uoguelph.ca wrote:
Ivan Voras wrote:
I don't know how to interpret the rise in context switches; as
this is
kernel code, I'd expect no context
On Oct 15, 2012, at 5:34 PM, Ivan Voras ivo...@freebsd.org wrote:
On 15 October 2012 16:31, Nikolay Denev nde...@gmail.com wrote:
On Oct 15, 2012, at 2:52 PM, Ivan Voras ivo...@freebsd.org wrote:
http://people.freebsd.org/~ivoras/diffs/nfscache_lock.patch
It should apply to HEAD
On 13/10/2012 17:22, Nikolay Denev wrote:
drc3.patch applied and built cleanly and shows a nice improvement!
I've done a quick benchmark using iozone over the NFS mount from the Linux
host.
Hi,
If you are already testing, could you please also test this patch:
On Oct 15, 2012, at 2:52 PM, Ivan Voras ivo...@freebsd.org wrote:
On 13/10/2012 17:22, Nikolay Denev wrote:
drc3.patch applied and built cleanly and shows a nice improvement!
I've done a quick benchmark using iozone over the NFS mount from the Linux
host.
Hi,
If you are already
On 15 October 2012 16:31, Nikolay Denev nde...@gmail.com wrote:
On Oct 15, 2012, at 2:52 PM, Ivan Voras ivo...@freebsd.org wrote:
http://people.freebsd.org/~ivoras/diffs/nfscache_lock.patch
It should apply to HEAD without Rick's patches.
It's a slightly different approach from Rick's, breaking
On Saturday, October 13, 2012 9:03:22 am Rick Macklem wrote:
rick
ps: I hope John doesn't mind being added to the cc list yet again. It's
just that I suspect he knows a fair bit about mutex implementation
and possible hardware cache line effects.
Currently mtx_pool just uses a simple
Ivan Voras wrote:
On 13/10/2012 17:22, Nikolay Denev wrote:
drc3.patch applied and built cleanly and shows a nice improvement!
I've done a quick benchmark using iozone over the NFS mount from the
Linux host.
Hi,
If you are already testing, could you please also test this patch:
On 15 October 2012 22:58, Rick Macklem rmack...@uoguelph.ca wrote:
The problem is that UDP entries very seldom time out (unless the
NFS server is seeing hardly any load) and are mostly trimmed
because the size exceeds the highwater mark.
With your code, it will clear out all of the
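Rick's point above is that UDP cache entries rarely expire on their own, so the cache is shrunk by evicting the oldest entries once it grows past a highwater mark. A minimal userspace sketch of that trimming idea follows; the names (`udp_add`, `udp_trim`, `HIGHWATER`) and the array-based list are invented for illustration and are not the kernel's actual data structures:

```c
#include <assert.h>

/* Hypothetical sketch of highwater-mark trimming: entries rarely time
 * out, so the cache is shrunk by dropping the oldest entries whenever
 * the count exceeds a highwater mark. */
#define HIGHWATER 4

static int cache[64];      /* cache[0] is the oldest entry; values are ids */
static int cache_size = 0;

static void udp_add(int id) {
    cache[cache_size++] = id;
}

/* Evict the oldest entries until the count is back at the highwater mark. */
static void udp_trim(void) {
    if (cache_size <= HIGHWATER)
        return;
    int excess = cache_size - HIGHWATER;
    for (int i = 0; i < cache_size - excess; i++)
        cache[i] = cache[i + excess];   /* shift survivors toward the front */
    cache_size = HIGHWATER;
}
```

The point of contention in the thread is what such a trim pass clears: evicting by age keeps the most recently used entries, whereas clearing whole lists indiscriminately would discard entries that are still useful.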
Ivan Voras wrote:
On 15 October 2012 22:58, Rick Macklem rmack...@uoguelph.ca wrote:
The problem is that UDP entries very seldom time out (unless the
NFS server is seeing hardly any load) and are mostly trimmed
because the size exceeds the highwater mark.
With your code, it will
Garrett Wollman wrote:
On Fri, 12 Oct 2012 22:05:54 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
I've attached the patch drc3.patch (it assumes drc2.patch has
already been
applied) that replaces the single mutex with one for each hash list
for tcp. It also increases the size of
On Oct 13, 2012, at 5:05 AM, Rick Macklem rmack...@uoguelph.ca wrote:
I wrote:
Oops, I didn't get the readahead option description
quite right in the last post. The default read ahead
is 1, which does result in rsize * 2, since there is
the read + 1 readahead.
rsize * 16 would actually
I wrote:
Oops, I didn't get the readahead option description
quite right in the last post. The default read ahead
is 1, which does result in rsize * 2, since there is
the read + 1 readahead.
rsize * 16 would actually be for the option readahead=15
and for readahead=16 the calculation would
On Fri, 12 Oct 2012 22:05:54 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
I've attached the patch drc3.patch (it assumes drc2.patch has already been
applied) that replaces the single mutex with one for each hash list
for tcp. It also increases the size of NFSRVCACHE_HASHSIZE to 200.
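The change described above, one mutex per hash chain instead of a single global lock, can be sketched in userspace as follows. This is a hedged illustration only: the kernel code uses mtx(9) locks rather than pthreads, and the names here (`cache_bucket`, `drc_insert`, `drc_lookup`) are invented, not the identifiers in drc3.patch. Only the `HASHSIZE` value of 200 comes from the patch description.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

#define HASHSIZE 200   /* drc3.patch raises NFSRVCACHE_HASHSIZE to 200 */

struct cache_entry {
    unsigned xid;                  /* request id used as the hash key */
    struct cache_entry *next;
};

struct cache_bucket {
    pthread_mutex_t lock;          /* protects only this one chain */
    struct cache_entry *head;
};

static struct cache_bucket table[HASHSIZE];

static void drc_init(void) {
    for (int i = 0; i < HASHSIZE; i++) {
        pthread_mutex_init(&table[i].lock, NULL);
        table[i].head = NULL;
    }
}

static void drc_insert(unsigned xid) {
    struct cache_bucket *b = &table[xid % HASHSIZE];
    struct cache_entry *e = malloc(sizeof(*e));
    e->xid = xid;
    pthread_mutex_lock(&b->lock);  /* contention limited to one chain */
    e->next = b->head;
    b->head = e;
    pthread_mutex_unlock(&b->lock);
}

static int drc_lookup(unsigned xid) {
    struct cache_bucket *b = &table[xid % HASHSIZE];
    int found = 0;
    pthread_mutex_lock(&b->lock);
    for (struct cache_entry *e = b->head; e != NULL; e = e->next)
        if (e->xid == xid) { found = 1; break; }
    pthread_mutex_unlock(&b->lock);
    return found;
}
```

With a lock per chain, two requests only contend when they hash to the same bucket, so raising the bucket count to 200 also directly lowers the collision (and hence contention) probability.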
On Oct 11, 2012, at 8:46 AM, Nikolay Denev nde...@gmail.com wrote:
On Oct 11, 2012, at 1:09 AM, Rick Macklem rmack...@uoguelph.ca wrote:
Nikolay Denev wrote:
On Oct 10, 2012, at 3:18 AM, Rick Macklem rmack...@uoguelph.ca
wrote:
Nikolay Denev wrote:
On Oct 4, 2012, at 12:36 AM, Rick
On Oct 11, 2012, at 7:20 PM, Nikolay Denev nde...@gmail.com wrote:
On Oct 11, 2012, at 8:46 AM, Nikolay Denev nde...@gmail.com wrote:
On Oct 11, 2012, at 1:09 AM, Rick Macklem rmack...@uoguelph.ca wrote:
Nikolay Denev wrote:
On Oct 10, 2012, at 3:18 AM, Rick Macklem
Nikolay Denev wrote:
On Oct 11, 2012, at 8:46 AM, Nikolay Denev nde...@gmail.com wrote:
On Oct 11, 2012, at 1:09 AM, Rick Macklem rmack...@uoguelph.ca
wrote:
Nikolay Denev wrote:
On Oct 10, 2012, at 3:18 AM, Rick Macklem rmack...@uoguelph.ca
wrote:
Nikolay Denev wrote:
On
Nikolay Denev wrote:
On Oct 11, 2012, at 7:20 PM, Nikolay Denev nde...@gmail.com wrote:
On Oct 11, 2012, at 8:46 AM, Nikolay Denev nde...@gmail.com wrote:
On Oct 11, 2012, at 1:09 AM, Rick Macklem rmack...@uoguelph.ca
wrote:
Nikolay Denev wrote:
On Oct 10, 2012, at 3:18 AM, Rick
Oops, I didn't get the readahead option description
quite right in the last post. The default read ahead
is 1, which does result in rsize * 2, since there is
the read + 1 readahead.
rsize * 16 would actually be for the option readahead=15
and for readahead=16 the calculation would be rsize * 17.
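The corrected arithmetic above can be captured in a small helper; `max_outstanding` is a hypothetical name for illustration, not a function from the NFS code:

```c
#include <assert.h>

/* With readahead=N, a client can have the current read plus N
 * read-aheads outstanding, i.e. rsize * (N + 1) bytes in flight. */
static unsigned long max_outstanding(unsigned long rsize, unsigned readahead) {
    return rsize * (readahead + 1UL);
}
```

So the default readahead=1 gives rsize * 2, readahead=15 gives rsize * 16, and readahead=16 gives rsize * 17, matching the correction above.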
Garrett Wollman wrote:
On Tue, 9 Oct 2012 20:18:00 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
And, although this experiment seems useful for testing patches that
try
and reduce DRC CPU overheads, most real NFS servers will be doing
disk
I/O.
We don't always have control
On Tue, 9 Oct 2012 20:18:00 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
And, although this experiment seems useful for testing patches that try
and reduce DRC CPU overheads, most real NFS servers will be doing disk
I/O.
We don't always have control over what the user does. I think
On Oct 10, 2012, at 3:18 AM, Rick Macklem rmack...@uoguelph.ca wrote:
Nikolay Denev wrote:
On Oct 4, 2012, at 12:36 AM, Rick Macklem rmack...@uoguelph.ca
wrote:
Garrett Wollman wrote:
On Wed, 3 Oct 2012 09:21:06 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
Simple: just use a
Nikolay Denev wrote:
On Oct 10, 2012, at 3:18 AM, Rick Macklem rmack...@uoguelph.ca
wrote:
Nikolay Denev wrote:
On Oct 4, 2012, at 12:36 AM, Rick Macklem rmack...@uoguelph.ca
wrote:
Garrett Wollman wrote:
On Wed, 3 Oct 2012 09:21:06 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca
On Oct 11, 2012, at 1:09 AM, Rick Macklem rmack...@uoguelph.ca wrote:
Nikolay Denev wrote:
On Oct 10, 2012, at 3:18 AM, Rick Macklem rmack...@uoguelph.ca
wrote:
Nikolay Denev wrote:
On Oct 4, 2012, at 12:36 AM, Rick Macklem rmack...@uoguelph.ca
wrote:
Garrett Wollman wrote:
On Wed, 3
On Oct 4, 2012, at 12:36 AM, Rick Macklem rmack...@uoguelph.ca wrote:
Garrett Wollman wrote:
On Wed, 3 Oct 2012 09:21:06 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
Simple: just use a separate mutex for each list that a cache entry
is on, rather than a global lock for everything.
On Oct 9, 2012, at 5:12 PM, Nikolay Denev nde...@gmail.com wrote:
On Oct 4, 2012, at 12:36 AM, Rick Macklem rmack...@uoguelph.ca wrote:
Garrett Wollman wrote:
On Wed, 3 Oct 2012 09:21:06 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
Simple: just use a separate mutex for each
Nikolay Denev wrote:
On Oct 4, 2012, at 12:36 AM, Rick Macklem rmack...@uoguelph.ca
wrote:
Garrett Wollman wrote:
On Wed, 3 Oct 2012 09:21:06 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
Simple: just use a separate mutex for each list that a cache
entry
is on, rather than
On Oct 4, 2012, at 12:36 AM, Rick Macklem rmack...@uoguelph.ca wrote:
Garrett Wollman wrote:
On Wed, 3 Oct 2012 09:21:06 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
Simple: just use a separate mutex for each list that a cache entry
is on, rather than a global lock for everything.
Nikolay Denev wrote:
On Oct 4, 2012, at 12:36 AM, Rick Macklem rmack...@uoguelph.ca
wrote:
Garrett Wollman wrote:
On Wed, 3 Oct 2012 09:21:06 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
Simple: just use a separate mutex for each list that a cache
entry
is on, rather than
Garrett Wollman wrote:
[Adding freebsd-fs@ to the Cc list, which I neglected the first time
around...]
On Tue, 2 Oct 2012 08:28:29 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
I can't remember (I am early retired now;-) if I mentioned this
patch before:
On Wed, 3 Oct 2012 09:21:06 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
Simple: just use a separate mutex for each list that a cache entry
is on, rather than a global lock for everything. This would reduce
the mutex contention, but I'm not sure how significantly since I
don't have
Garrett Wollman wrote:
On Wed, 3 Oct 2012 09:21:06 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
Simple: just use a separate mutex for each list that a cache entry
is on, rather than a global lock for everything. This would reduce
the mutex contention, but I'm not sure how
Garrett Wollman wrote:
I had an email conversation with Rick Macklem about six months ago
about NFS server bottlenecks. I'm now in a position to observe my
large-scale NFS server under an actual production load, so I thought I
would update folks on what it looks like. This is a 9.1 prerelease
[Adding freebsd-fs@ to the Cc list, which I neglected the first time
around...]
On Tue, 2 Oct 2012 08:28:29 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
I can't remember (I am early retired now;-) if I mentioned this patch before:
http://people.freebsd.org/~rmacklem/drc.patch
It
I had an email conversation with Rick Macklem about six months ago
about NFS server bottlenecks. I'm now in a position to observe my
large-scale NFS server under an actual production load, so I thought I
would update folks on what it looks like. This is a 9.1 prerelease
kernel (I hope 9.1