I just witnessed a resilver that took 4 hours for 27 GB of data. The pool is 
three RAID-Z2 vdevs striped together, 6 disks per vdev; disks are 500 GB. No 
checksum errors.
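
For concreteness, the layout is along these lines (pool and device names 
made up):

  tank
    raidz2    c1t0d0 .. c1t5d0   (6 x 500 GB)
    raidz2    c2t0d0 .. c2t5d0   (6 x 500 GB)
    raidz2    c3t0d0 .. c3t5d0   (6 x 500 GB)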

That works out to roughly 27 GB / 14,400 s, or about 2 MB/s, which seems 
exorbitantly slow. The other five disks in the same vdev as the replaced disk 
sat at ~90% busy and ~150 IO/s each throughout the resilver. Does this seem 
unusual to anyone else? Could it be heavy fragmentation, or do I have another 
disk in the vdev going bad? Post-resilver, no disk is above 30% utilization or 
noticeably busier than any other.
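
(For reference, those busy/IOPS figures are the sort of thing the stock 
Solaris tools report; during a resilver I'd watch with something like the 
following, assuming a pool named tank:

  zpool status tank        # resilver progress and estimated time to go
  zpool iostat -v tank 5   # per-vdev and per-disk throughput, 5 s samples
  iostat -xn 5             # per-device %busy and service times

"tank" is just a placeholder for the actual pool name.)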

Thank you in advance. (kernel is snv_123)

-J

Sent via iPhone
