2012-07-20 5:11, Bob Friesenhahn wrote:
On Fri, 20 Jul 2012, Jim Klimov wrote:
Zfs data block sizes are fixed size! Only tail blocks are shorter.
This is the part I am not sure is either implied by the docs
or confirmed by my practice. But maybe I've missed something...
This is something [...]
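For reference, the knob under discussion is the per-dataset recordsize,
which caps the data block size for files; a quick check (the dataset
name tank/test is made up):

  # recordsize is the upper bound on a file's data block size;
  # the default is 128K
  zfs get recordsize tank/test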
On Sat, 21 Jul 2012, Jim Klimov wrote:
During this quick test I did not manage to craft a test which
would inflate a file in the middle without touching its other
blocks (other than using a text editor which saves the whole
file - so that is irrelevant), in order to see if ZFS can
insert smaller [...]
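For what it's worth, POSIX offers no call that inserts bytes into the
middle of a file, so any "inflate in the middle" test necessarily
rewrites every block after the insertion point. A sketch of the only
portable way to do it (file names are made up):

  # split, inject, reassemble -- everything past the insertion
  # point is rewritten, so ZFS never has to "insert" a block
  head -c 1048576 original > rebuilt      # first 1 MiB unchanged
  cat insert.bin >> rebuilt               # the injected data
  tail -c +1048577 original >> rebuilt    # the shifted remainder
  mv rebuilt original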
On 07/19/12 18:24, Traffanstead, Mike wrote:
iozone doesn't vary the blocksize during a single test; it's a very
artificial workload, but it's useful for gauging performance under
different scenarios.
So for this test all of the writes would have been 64k blocks, 128k,
etc. for that particular step.
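For reference, the record size iozone uses in a given step is fixed on
the command line; a sketch of such a run (sizes and path are
assumptions):

  # sequential write/rewrite (-i 0) and read/re-read (-i 1)
  # with a fixed 64K record size on an 8 GB test file
  iozone -i 0 -i 1 -r 64k -s 8g -f /tank/iozone.tmp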
On Wed, 18 Jul 2012, Michael Traffanstead wrote:
I have an 8 drive ZFS array (RAIDZ2 - 1 Spare) using 5900rpm 2TB SATA drives
with an hpt27xx controller under FreeBSD 10
(but I've seen the same issue with FreeBSD 9).
The system has 8 gigs of RAM and I'm letting FreeBSD auto-size the ARC.
Running iozone (from ports), everything is fine for file [...]
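(For context, a pool of that shape would be created roughly like this;
the device names are hypothetical:)

  # seven drives in the raidz2 vdev plus one hot spare
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 spare da7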
This is normal. The problem is that with zfs 128k block sizes, zfs
needs to re-read the original 128k block so that it can compose and
write the new 128k block. With sufficient RAM, this is normally avoided
because the original block is already cached in the ARC.
If you were to reduce the zfs blocksize [...]
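Presumably the suggestion cut off above is to reduce the dataset's
recordsize to match the I/O size; a minimal sketch, assuming a dataset
named tank/test:

  # match the record size to the 64K writes; this only affects
  # files (re)written after the property change
  zfs set recordsize=64k tank/test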
On Fri, 20 Jul 2012, Jim Klimov wrote:
I am not sure if I misunderstood the question or Bob's answer,
but I have a gut feeling it is not fully correct: ZFS block
sizes for files (filesystem datasets) are, at least by default,
dynamically-sized depending on the contiguous write size as
queued by [...]
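A quick experiment bears this out for files smaller than the
recordsize; a sketch, assuming a dataset tank/test mounted at
/tank/test:

  # a 13K file lands in a single block sized to the file, while
  # a 1M file is stored as uniform 128K (recordsize) blocks
  dd if=/dev/urandom of=/tank/test/small bs=13k count=1
  dd if=/dev/urandom of=/tank/test/big bs=128k count=8
  sync
  # zdb -ddddd prints the dnode's block tree; the object number
  # comes from ls -i
  zdb -ddddd tank/test $(ls -i /tank/test/small | awk '{print $1}')
  zdb -ddddd tank/test $(ls -i /tank/test/big | awk '{print $1}')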
On 07/19/12 19:27, Jim Klimov wrote:
However, if the test file was written in 128K blocks and then
is rewritten with 64K blocks, then Bob's answer is probably
valid - the block would have to be re-read once for the first
rewrite of its half; it might be taken from cache for the
second half's rewrite.
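That scenario is easy to reproduce with dd; a sketch of rewriting one
128K record in two 64K halves (the path is made up):

  # the first 64K rewrite forces a read-modify-write of the 128K
  # record; the second half may then be served from the ARC
  dd if=/dev/urandom of=/tank/test/file bs=64k count=1 conv=notrunc
  dd if=/dev/urandom of=/tank/test/file bs=64k count=1 seek=1 conv=notrunc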
vfs.zfs.txg.synctime_ms: 1000
vfs.zfs.txg.timeout: 5
On Thu, Jul 19, 2012 at 8:47 PM, John Martin john.m.mar...@oracle.com wrote:
[...]