[Phil beat me to it]
Yes, the 0s are a result of integer division in DTrace/kernel.
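A quick way to see why a 5 second reporting interval can print 0s even while commits are happening: the per-interval rate is computed with integer arithmetic, so small counts truncate to zero. A minimal illustration (Python purely for the arithmetic; the real division happens in the DTrace script/kernel):

```python
# Integer division truncates toward zero, so a small count of NFS
# commit operations averaged over a 5 second window prints as 0.
commits_seen = 3                 # hypothetical count in the window
rate_5s = commits_seen // 5      # per-second rate, 5 s interval
rate_1s = commits_seen // 1      # per-second rate, 1 s interval
print(rate_5s)  # 0 -- the commits vanish from the report
print(rate_1s)  # 3 -- visible once the interval is 1 second
```

This matches the observation later in the thread that the commits show up once the script is run with a 1 second interval.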
On Jun 14, 2012, at 9:20 PM, Timothy Coalson wrote:
Indeed they are there, shown with 1 second interval. So, it is the
client's fault after all. I'll have to see whether it is somehow
possible to get the server to write cached data sooner (and hopefully
asynchronously), and the client to issue commits less often. …
On Jun 14, 2012, at 1:35 PM, Robert Milkowski wrote:
The client is using async writes, which include commits. Sync writes do
not need commits.
What happens is that the ZFS transaction group commit occurs at
more-or-less regular intervals, likely 5 seconds for more modern ZFS
systems. When …
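The distinction above can be sketched as a toy model (illustrative only; the names are invented and this is not a real NFS client):

```python
# Toy model of NFSv3 write stability, as described above.
UNSTABLE = "UNSTABLE"    # "async" write: server may only cache the data
FILE_SYNC = "FILE_SYNC"  # "sync" write: data is stable when the reply arrives

def needs_commit(stability: str) -> bool:
    # Async (UNSTABLE) data needs a later COMMIT to reach stable
    # storage; FILE_SYNC data is already stable, so no COMMIT is sent.
    return stability == UNSTABLE

print(needs_commit(UNSTABLE))   # True
print(needs_commit(FILE_SYNC))  # False
```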
Thanks for the suggestions. I think it would also depend on whether
the nfs server has tried to write asynchronously to the pool in the
meantime, which I am unsure how to test, other than making the txgs
extremely frequent and watching the load on the log devices. As for
the integer division …
On Fri, Jun 15, 2012 at 12:56 PM, Timothy Coalson tsc...@mst.edu wrote:
On Jun 13, 2012, at 4:51 PM, Daniel Carosone wrote:
On Wed, Jun 13, 2012 at 05:56:56PM -0500, Timothy Coalson wrote:
client: ubuntu 11.10
/etc/fstab entry: server:/mainpool/storage /mnt/myelin nfs bg,retry=5,soft,proto=tcp,intr,nfsvers=3,noatime,nodiratime,async 0 0
Thanks for the script. Here is some sample output from 'sudo
./nfssvrtop -b 512 5' (my disks are 512B-sector emulated and the pool
is ashift=9, some benchmarking didn't show much difference with
ashift=12 other than giving up 8% of available space) during a copy
operation from 37.30 with …
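For reference, ashift is the base-2 exponent of the pool's minimum block size, so the two values discussed correspond to 512 B and 4 KiB sectors:

```python
# ashift = log2(sector size): the pool's minimum write unit is 2**ashift bytes.
print(2 ** 9)    # 512  bytes -> ashift=9, the 512B-emulated disks here
print(2 ** 12)   # 4096 bytes -> ashift=12, the 4K alternative benchmarked
```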
Hi Tim,
On Jun 14, 2012, at 12:20 PM, Timothy Coalson wrote:
The client is using async writes, which include commits. Sync writes do
not need commits.
What happens is that the ZFS transaction group commit occurs at
more-or-less regular intervals, likely 5 seconds for more modern ZFS
systems. When the commit occurs, any data that is in the ARC but not …
The client is using async writes, which include commits. Sync writes do not
need commits.
Are you saying nfs commit operations sent by the client aren't always
reported by that script?
What happens is that the ZFS transaction group commit occurs at more-or-less
regular intervals, likely 5 …
On 14 Jun 2012, at 23:15, Timothy Coalson tsc...@mst.edu wrote:
The client is using async writes, which include commits. Sync writes do not
need commits.
Are you saying nfs commit operations sent by the client aren't always
reported by that script?
They are not reported in your case because …
Indeed they are there, shown with 1 second interval. So, it is the
client's fault after all. I'll have to see whether it is somehow
possible to get the server to write cached data sooner (and hopefully
asynchronously), and the client to issue commits less often. Luckily I
can live with the …
I noticed recently that the SSDs hosting the ZIL for my pool had a large
number in the SMART attribute for total LBAs written (with some
calculation, it seems to be the total amount of data written to the pool so
far), did some testing, and found that the ZIL is being used quite heavily
(matching …
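The "some calculation" mentioned is presumably just scaling the raw SMART value by the LBA size. A sketch with a made-up raw value (the 512-byte LBA size is an assumption matching the drives in this thread; some SSD models count this attribute in larger units, so check the vendor's definition):

```python
LBA_BYTES = 512                      # assumption: 512-byte LBAs
total_lbas_written = 1_000_000_000   # hypothetical raw SMART value
written = total_lbas_written * LBA_BYTES
print(f"{written / 2**40:.2f} TiB written")  # ~0.47 TiB for this example
```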
On Wed, Jun 13, 2012 at 05:56:56PM -0500, Timothy Coalson wrote:
client: ubuntu 11.10
/etc/fstab entry: server:/mainpool/storage /mnt/myelin nfs bg,retry=5,soft,proto=tcp,intr,nfsvers=3,noatime,nodiratime,async 0 0
nfsvers=3
NAME PROPERTY VALUE SOURCE
Interesting... from what I had read about NFSv3 asynchronous writes,
especially the bits about not requiring the server to commit to stable
storage, I expected different behavior. The performance impact on large
writes (which we do a lot of) wasn't severe, so sync=disabled
is probably not …