bofh <goodb...@gmail.com> wrote:

> There's something going on then.  I have 7x 3TB disk at home, in
> raidz3, so about 12TB usable.  2.5TB actually used.  Scrubbing takes
> about 2.5 hours.  I had done the resilvering as well, and that did not
> take 15 hours/drive.  Copying 3TBs onto 2.5" SATA drives did take more
> than a day, but a 2.5" drive's performance is about 1/4 of the 3.5"
> drives from the limited testing I've done.

The performance of a thumper depends on whether you set it up correctly.
A thumper offers 6 independent SATA controllers that are able to do independent
DMA simultaneously. For this reason, I set up each ZFS "row" with 6 drives:
4 drives for the net capacity and two parity drives.

I get a sustained local read performance of 600 MB/s this way.
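
A minimal sketch of what such a layout looks like as zpool commands, assuming
Solaris-style device names where cN is the controller number (the pool name
"tank" and the exact cNtNd0 names are made up; they will differ on a real
thumper):

  # each raid-z2 row takes one disk from each of the 6 controllers c0..c5
  zpool create tank \
      raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
      raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0
  # further rows can be added the same way, one disk per controller
  zpool add tank raidz2 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0

This way every streaming read or write inside a row is spread over all 6
controllers, which is what lets the DMA engines work in parallel.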

> Additionally, if you're only replacing one drive at a time, you're
> only resilvering 250GB at a time, regardless of the size of the new
> drive.
>
> If you already have 45X 3TB drives waiting to go in, bite the bullet
> and get that eSATA cage, since you want to re-do your zpools.  You can
> reuse it for offsite backups in the future.

This is a misinterpretation. If you have 7 raid-z2 rows with 6 drives 
each, you may replace up to 7 drives at once (one per row). I have not yet tested 
this, but I am sure that this will finish in less than a day, so the upgrade may 
take approximately a week.
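
Purely as an illustration of such a staged upgrade (same made-up pool and
device names as in the sketch above, and assuming each new drive goes into
the slot of the drive it replaces):

  # after physically swapping the disks, start one replacement per raid-z2 row
  zpool replace tank c0t0d0
  zpool replace tank c0t1d0
  zpool replace tank c0t2d0
  zpool replace tank c0t3d0
  zpool replace tank c0t4d0
  zpool replace tank c0t5d0
  zpool replace tank c0t6d0
  # watch the resilver progress
  zpool status tank

Each row keeps one parity drive of redundancy while its member resilvers, and
once all 6 drives of a row have been swapped (and the pool's autoexpand
property is on), that row grows to the new capacity.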


> As a side note, on my x4540, I get writes of up to 1.2
> gigabytes/second (but that's just writing zeros to an uncompressed
> pool).  Real performance is lower, of course.

With the original drives delivered by Sun?

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       j...@cs.tu-berlin.de                (uni)  
       joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
