Stuart Anderson writes:
On Jun 21, 2009, at 10:21 PM, Nicholas Lee wrote:
On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson
ander...@ligo.caltech.edu
wrote:
However, it is a bit disconcerting to have to run with reduced data
protection for an entire week. While I
Erik Trimble wrote:
does ZFS do resilvering? Both in the case of mirrors and of RAIDZ[2]?
I've seen some mention that it goes in chronological order (which to
me, means that the metadata must be read first) of file
Richard Elling wrote:
Erik Trimble wrote:
does ZFS do resilvering? Both in the case of mirrors and of RAIDZ[2]?
I've seen some mention that it goes in chronological order (which to
me, means that the metadata must be
On 23-Jun-09, at 1:58 PM, Erik Trimble wrote:
Richard Elling wrote:
Erik Trimble wrote:
_how_ does ZFS do resilvering? Both in the case of mirrors and
of RAIDZ[2]?
I've seen some mention that it goes in chronological
On Jun 23, 2009, at 11:50 AM, Richard Elling wrote:
(2) is there some reasonable way to read in multiples of these
blocks in a single IOP? Theoretically, if the blocks are in
chronological creation order, they should be (relatively)
sequential on the drive(s). Thus, ZFS should be able
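The idea in the preview above, that blocks created in chronological order might land (relatively) sequentially and so be fetchable in fewer, larger IOPs, can be sketched roughly. This is a hypothetical illustration, not ZFS code; the block list, sizes, and the `max_iop_bytes` cap are all invented for the example.

```python
# Hypothetical sketch (not actual ZFS code): merge blocks whose on-disk
# offsets happen to be contiguous into a single larger read request, as
# could happen if chronological creation order maps to (roughly)
# sequential placement on the drive.

def coalesce_reads(blocks, max_iop_bytes=1 << 20):
    """blocks: iterable of (offset, length) tuples.
    Returns a list of merged (offset, length) read requests."""
    reads = []
    for offset, length in sorted(blocks):
        if reads and reads[-1][0] + reads[-1][1] == offset \
                and reads[-1][1] + length <= max_iop_bytes:
            # Previous request ends exactly where this block starts:
            # extend it instead of issuing a new IOP.
            reads[-1] = (reads[-1][0], reads[-1][1] + length)
        else:
            reads.append((offset, length))
    return reads

# Three contiguous 128 KiB blocks merge into one 384 KiB read;
# the gap before the last block forces a separate request.
blks = [(0, 131072), (131072, 131072), (262144, 131072), (1048576, 131072)]
print(coalesce_reads(blks))  # [(0, 393216), (1048576, 131072)]
```

If the blocks are instead scattered (the many-small-files case in this thread), no merging is possible and every block costs a separate random read, which is the behavior the posters are complaining about.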
Nicholas Lee wrote:
On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson
ander...@ligo.caltech.edu wrote:
However, it is a bit disconcerting to have to run with reduced data
protection for an entire week. While I am certainly not going back to
UFS,
Stuart Anderson wrote:
On Jun 21, 2009, at 8:57 PM, Richard Elling wrote:
Stuart Anderson wrote:
It is currently taking ~1 week to resilver an x4500 running S10U6,
recently patched, with ~170M small files on ~170 datasets, after a
disk failure/replacement, i.e.,
wow, that is impressive.
On Mon, 2009-06-22 at 06:06 -0700, Richard Elling wrote:
Nevertheless, in my lab testing I was not able to create a workload
random enough to avoid being write limited on the reconstructing drive.
Anecdotal evidence shows that some systems are instead limited by the
random reads. Systems I've run which
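The write-limited vs. read-limited distinction above lends itself to a back-of-envelope estimate: resilver time is bounded by whichever is slower, streaming writes to the replacement drive or the random reads needed to fetch many small blocks. The numbers below are assumptions chosen for illustration, not measurements from the x4500 in this thread.

```python
# Back-of-envelope resilver estimate (assumed figures, not measurements).
# Resilver time is bounded by the slower of:
#   - sequential write bandwidth of the reconstructing drive, and
#   - random-read IOPS needed to fetch the surviving copies.

def resilver_hours(data_bytes, write_bw_bytes_s, read_iops, avg_block_bytes):
    write_limited_s = data_bytes / write_bw_bytes_s
    read_limited_s = (data_bytes / avg_block_bytes) / read_iops
    return max(write_limited_s, read_limited_s) / 3600

# 500 GB of 16 KiB blocks, 60 MB/s writes, 150 random reads/s:
# the random reads dominate by a wide margin.
print(round(resilver_hours(500e9, 60e6, 150, 16384), 1))  # 56.5
```

With large, mostly sequential blocks the same pool would be write limited and finish in a few hours, which is consistent with small-file pools like the one in this thread taking on the order of a week.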
On Jun 21, 2009, at 10:21 PM, Nicholas Lee wrote:
On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson ander...@ligo.caltech.edu
wrote:
However, it is a bit disconcerting to have to run with reduced data
protection for an entire week. While I am certainly not going back to
UFS, it seems like
All this discussion hasn't answered one thing for me: exactly _how_
does ZFS do resilvering? Both in the case of mirrors and of RAIDZ[2]?
I've seen some mention that it goes in chronological order of file
creation (which, to me, means that the metadata must be read first),
and that only
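One way to picture the "chronological order via metadata" behavior Erik describes is a walk of the block-pointer tree that repairs only blocks born inside the dirty transaction-group range. This is an assumed mental model for discussion, not the OpenSolaris source; the dict-based block layout and `birth_txg` field names are invented for the sketch.

```python
# Illustrative model (assumed, not actual ZFS source): resilver as a
# traversal of the block-pointer tree, repairing only blocks whose
# birth transaction group (txg) falls inside the dirty range. The
# indirect (metadata) blocks are read first, which is why traversal
# follows creation order rather than raw disk order.

def resilver(block, dirty_txg_min, dirty_txg_max, repair):
    if block is None:
        return
    if dirty_txg_min <= block["birth_txg"] <= dirty_txg_max:
        repair(block)  # re-read the good copies, rewrite to the new disk
    for child in block.get("children", []):
        resilver(child, dirty_txg_min, dirty_txg_max, repair)

# Blocks born before the dirty range (txg 5) are skipped entirely.
root = {"birth_txg": 5, "children": [
    {"birth_txg": 12, "children": []},
    {"birth_txg": 30, "children": [{"birth_txg": 31, "children": []}]},
]}
repaired = []
resilver(root, 10, 40, repaired.append)
print([b["birth_txg"] for b in repaired])  # [12, 30, 31]
```

The practical consequence, per the rest of the thread, is that each small file costs at least one random read during the walk, which is what makes a 170M-file pool so slow to resilver.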
Stuart Anderson wrote:
It is currently taking ~1 week to resilver an x4500 running S10U6,
recently patched, with ~170M small files on ~170 datasets, after a
disk failure/replacement, i.e.,
wow, that is impressive. There is zero chance of doing that with a
manageable number of UFS file systems.
On Jun 21, 2009, at 8:57 PM, Richard Elling wrote:
Stuart Anderson wrote:
It is currently taking ~1 week to resilver an x4500 running S10U6,
recently patched, with ~170M small files on ~170 datasets, after a
disk failure/replacement, i.e.,
wow, that is impressive. There is zero chance of
On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson
ander...@ligo.caltech.edu wrote:
However, it is a bit disconcerting to have to run with reduced data
protection for an entire week. While I am certainly not going back to
UFS, it seems like it should be at least theoretically possible to do this
13 matches