> I have a question that is related to this topic: Why
> is there only a (tunable) 5 second threshold and not
> also an additional threshold for the buffer size
> (e.g. 50MB)?
> 
> Sometimes I see my system writing huge amounts of
> data to a ZFS filesystem, but the disks staying idle
> for 5 seconds, even though memory consumption is
> already quite high and it really would make sense
> (from my uneducated point of view as an observer) to
> start writing the data out to disk. I think this
> leads to the pumping effect that has been mentioned
> previously in one of the forums here.
> 
> Can anybody comment on this?
> 
> TIA,
> Thomas

Because ZFS always writes to a new location on disk (copy-on-write),
premature writing often results in redundant work ... a single host write
to a ZFS object forces a rewrite of the changed data plus every metadata
block on the path leading to that object.

If a follow-up write to the same object arrives quickly, that entire path
has to be recreated all over again, even though only a small portion of it
actually differs from the previous version.

If both versions were written to disk, the result would be to physically
write potentially large amounts of nearly duplicate information over and
over again, burning bandwidth that carries almost no new logical data.
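To make that arithmetic concrete, here is a minimal toy model -- not ZFS
code; the tree depth and write count are made-up assumptions -- of how
coalescing collapses repeated path rewrites into a single one:

/* toy model (not ZFS code) of copy-on-write write amplification:
 * each host write to a leaf block also dirties every indirect block
 * on the path up to the root, so flushing each write separately
 * rewrites that whole path every time */
#include <stdio.h>

static unsigned blocks_per_flush(unsigned depth)
{
    return depth + 1;   /* the leaf plus 'depth' ancestor blocks */
}

int main(void)
{
    unsigned depth = 6;      /* assumed indirect-block depth */
    unsigned writes = 10;    /* ten quick writes to the same object */

    unsigned eager = writes * blocks_per_flush(depth);
    unsigned coalesced = blocks_per_flush(depth);  /* one path, once */

    printf("flush every write immediately: %u block writes\n", eager);
    printf("coalesce in cache, flush once: %u block writes\n", coalesced);
    return 0;
}

With these assumed numbers, flushing eagerly writes 70 blocks where a
single coalesced flush writes 7, which is the "logically vacant bandwidth"
being described.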

Consolidating these writes in host cache eliminates much of that redundant
disk writing, turning the same raw bandwidth into more productive
bandwidth ... exposing tunables for the consolidation time window and/or
the accumulated cache size may seem like a reasonable thing to do, but the
right values are typically a moving target, and depending on an adaptive,
built-in algorithm to set these marks dynamically (as ZFS claims it does)
seems like a better choice.
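For illustration only, here is a minimal sketch of the dual-threshold
policy the question asks about; the 5 second window matches the tunable
under discussion, while the 50MB cap and every name here are hypothetical
and not taken from the ZFS source:

/* hypothetical dual-threshold flush policy: flush the consolidated
 * write cache when either a time window elapses or a dirty-size cap
 * is hit -- a sketch, not ZFS's actual transaction-group logic */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

#define FLUSH_WINDOW_SEC 5                    /* the tunable 5s window */
#define FLUSH_CAP_BYTES  (50u * 1024 * 1024)  /* hypothetical 50MB cap */

struct write_cache {
    size_t dirty_bytes;   /* data accumulated since the last flush */
    time_t last_flush;    /* when the cache was last written out   */
};

static bool should_flush(const struct write_cache *wc, time_t now)
{
    return (now - wc->last_flush) >= FLUSH_WINDOW_SEC ||
           wc->dirty_bytes >= FLUSH_CAP_BYTES;
}

static void note_write(struct write_cache *wc, size_t len, time_t now)
{
    wc->dirty_bytes += len;
    if (should_flush(wc, now)) {
        printf("flushing %zu dirty bytes\n", wc->dirty_bytes);
        wc->dirty_bytes = 0;   /* consolidated state written out */
        wc->last_flush = now;
    }
}

int main(void)
{
    struct write_cache wc = { 0, time(NULL) };

    /* a burst of large writes trips the size cap well before 5s */
    for (int i = 0; i < 8; i++)
        note_write(&wc, 8u * 1024 * 1024, time(NULL));
    return 0;
}

The catch, as noted above, is that fixed marks like these go stale as the
workload shifts, which is the argument for leaving them to an adaptive
policy rather than a second tunable.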

...Bill
 
 