No. From what I've seen, ZFS periodically flushes accumulated writes to
disk. You may run into a read-starvation situation where ZFS is so busy
flushing writes that reads barely get serviced. If you have VMs whose
developers expect low-latency interactivity, they get unhappy. Trust me. :)
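If you want to see this happening, one quick check (assuming a pool
named "tank"; substitute your own) is to watch per-second pool I/O while
the VMs are busy:

  # zpool iostat -v tank 1

During a flush you'll typically see write bandwidth spike on the data
vdevs while read operations drop toward zero.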
Apparently, I must not be using the right web form...
I would sometimes update the case via the web, and it seemed like no one
actually saw the updates. Or some other engineer would come along and
ask me the same set of questions that had already been answered (and
recorded in the case records!).
Another
their datastore? I'd love to hear from your experience.
Thanks,
-Paul Choi
Roy,
Thanks for the info. Yeah, the bug you mentioned is pretty critical. In
terms of SSDs, I have Intel X25-M for L2ARC and X25-E for ZIL. And the
host has 24G RAM. I'm just waiting for that 2010.03 release or
whatever we want to call it when it's released...
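For reference, attaching those SSDs looks something like this (the
device names below are placeholders, not my actual ones):

  # zpool add tank cache c2t0d0     (X25-M as L2ARC)
  # zpool add tank log c2t1d0       (X25-E as slog/ZIL)

zpool add with the "cache" and "log" vdev types is the standard way to
hang them off an existing pool.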
-Paul
On 5/18/10 12:49 PM,
Hello,
Is it possible to replicate an entire zpool with AVS? From what I can
see, you can replicate a zvol, because AVS is filesystem-agnostic. I can
create zvols within a pool, and AVS can replicate those, but
that's not really what I want.
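For what it's worth, replicating a single zvol looks roughly like this
(hostnames, pool, and volume names are made up, and check the AVS docs
for the exact sndradm argument order):

  # zfs create -V 10G tank/vol1        (zvol to replicate)
  # zfs create -V 64M tank/bitmap1     (bitmap volume AVS requires)
  # sndradm -e primary /dev/zvol/rdsk/tank/vol1 /dev/zvol/rdsk/tank/bitmap1 \
        secondary /dev/zvol/rdsk/tank/vol1 /dev/zvol/rdsk/tank/bitmap1 ip async

AVS only sees the raw block device, which is why it works per-zvol but
has no notion of the pool as a whole.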
If I create a zpool called disk1,
Hm. That's odd. zpool clear should've cleared the list of errors,
unless you were accessing files at the same time, in which case new
checksum errors were being reported as the reads happened.
As for zpool scrub, there's no benefit in your case: since you are
already reading from the zpool, and checksums are verified on every
read, a scrub would just turn up the same errors again.
zpool clear just clears the list of errors (and # of checksum errors)
from its stats. It does not modify the filesystem in any manner. You run
zpool clear to make the zpool forget that it ever had any issues.
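Concretely (pool name "tank" is just an example):

  # zpool status -v tank     (lists the files with unrecoverable errors)
  # zpool clear tank         (zeroes the error counters; touches no data)

If errors keep reappearing after a clear, something is still reading the
damaged files, or the underlying device is still flaky.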
-Paul
Jonathan Loran wrote:
Hi list,
First off:
# cat /etc/release
could tell which ones are really bad, so we
wouldn't have to recreate them unnecessarily. They are mirrored in
various places, or can be recreated via reprocessing, but
recreating/restoring that many files is no easy task.
Thanks,
Jon
On Jun 1, 2009, at 2:41 PM, Paul Choi wrote:
zpool clear
for mem_inuse.
Running ::memstat in mdb -k also shows Kernel memory usage (probably
includes ZFS overhead) and ZFS File Data memory usage. But it's
painfully slow to run. kstat is probably better.
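For example (both need root; ::memstat can take minutes on a large box):

  # echo ::memstat | mdb -k                (kernel/ZFS memory breakdown)
  # kstat -p zfs:0:arcstats:size           (current ARC size in bytes)

The kstat call returns instantly, which is why I'd reach for it first.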
-Paul Choi
Richard Elling wrote:
Bob Friesenhahn wrote:
On Wed, 6 May 2009, Troy Nancarrow