The thing is: as far as I know, the OS doesn't ask the disk to find a place
to fit the data. Instead the OS tracks what space on the disk is free and
then tells the disk where to write the data.
Yes and no - I did not formulate my idea clearly enough, sorry for the confusion ;)
Yes - the disks are told where to write, but they still have to mechanically seek to that location.
This is a slow operation which can only be done about 180-250 times per second for very random I/Os (maybe more with HDD/controller caching, queuing and faster spindles).
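For a rough back-of-envelope check (my own numbers, not measured): a 7200 RPM drive averages ~4.2 ms rotational latency (half a revolution) plus ~8-9 ms average seek, i.e. ~13 ms per random I/O, or about 75-80 IOPS; a 15k RPM drive at ~2 ms + ~3.5 ms gets to roughly 180 IOPS. So the 180-250/s range really assumes fast spindles and/or some command queuing.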
I'm afraid that seeking to very dispersed metadata blocks, such as when traversing the block-pointer tree during a scrub of a fragmented pool, is exactly this kind of slow random I/O.
Disks that have been in use for a long time may have very fragmented free space on the one hand, and not much of it left on the other, but ZFS still tries to push bits around evenly. And while it's waiting on some disks, others may be blocked as well. Something like that...
This could be tweaked on the fly.
One key indicator is whether your disk queues hover around 10.
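A quick way to watch this (just illustrating the rule of thumb above):

# iostat -xn 5

Keep an eye on the "actv" column (commands active on the device) together with "%b" (percent of time busy): actv hovering around 10 with %b pegged near 100 on some disks, while others sit idle, points at those disks as the bottleneck.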
Jim
---
----- Original Message -----
From: jimkli...@cos.ru
To: zfs-discuss@opensolaris.org
Sent: Wednesday, May 11, 2011 3:22:19 AM GMT -08:00 US/Canada Pacific
Subject: Re: [zfs-discuss] Performance problem
ZFS sends a series of blocks to write from the queue; newer disks write them and stay dormant, while older disks seek around to fit that piece of data... When the old disks complete the writes, ZFS batches out a new set of tasks to them.
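One way to see that imbalance (my suggestion, standard zpool syntax):

# zpool iostat -v pool 5

This prints per-vdev operations and bandwidth every 5 seconds; if the newer disks show short bursts of writes followed by idle intervals while the older disks stay busy, the whole pool is pacing itself to the slowest vdevs.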
I've been going through my iostat, zilstat, and other outputs all to no avail. None of my disks ever seem to show outrageous service times, the load on the box is never high, and if the darned thing is CPU-bound, I'm not even sure where to look.
(Traversing DDT blocks takes time even if they are in memory, etc.)
Well, as I wrote in other threads - I have a pool named "pool" on physical disks, and a compressed volume in this pool which I loopback-mount over iSCSI to make another pool named "dcpool".
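Roughly like this (a sketch of that layout; device names, sizes and the COMSTAR target/view steps are placeholders, not the exact commands used):

# zfs create -V 500G -o compression=on pool/dcvol
# sbdadm create-lu /dev/zvol/rdsk/pool/dcvol
(export the LU via COMSTAR, then loopback-attach it with the local iSCSI initiator)
# zpool create dcpool c9tXXXXd0

The inner pool then lives on a compressed zvol of the outer pool.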
When files in dcpool are deleted, the blocks are not zeroed out by current ZFS, so they remain allocated in the underlying compressed volume.
It is my understanding that for (fast) writes you should consider a faster disk (SSD) for the ZIL, and for reads a faster disk (SSD) for L2ARC; an example is sketched below. There have also been many discussions that for virtualization (V12N) environments mirrors (RAID-1) are better than raidz.
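For reference, the usual commands for that (standard zpool syntax; pool and device names are placeholders):

# zpool add tank log c1t2d0      (dedicated ZIL/slog device)
# zpool add tank cache c1t3d0    (L2ARC cache device)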
On 5/10/2011 3:31 PM, Don wrote:
I've been going through my iostat, zilstat, and other outputs all to no avail.
# dd if=/dev/zero of=/dcpool/nodedup/bigzerofile
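As I understand the trick (the cleanup step is my assumption, not quoted from the thread): dd fills dcpool's free space with zeros and stops when the pool is full; then the file is removed:

# rm /dcpool/nodedup/bigzerofile

Since the backing volume is compressed, runs of zeros take (almost) no space, so the blocks previously occupied by deleted files get freed in the underlying pool as well.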
Ahh- I misunderstood your pool layout earlier. Now I see what you were doing.
People on this forum have seen and reported that adding a 100MB file tanked their multi-terabyte pool's performance, and removing the file boosted it back up. Sadly I