Tim Cook <tim <at> cook.ms> writes:
> What's the point of arguing what the back-end can do anyways?  This is bulk
> data storage.  Their MAX input is ~100MB/sec.  The backend can more than
> satisfy that.  Who cares at that point whether it can push 500MB/s or
> 5000MB/s?  It's not a database processing transactions.  It only needs to be
> able to push as fast as the front-end can go.  --Tim

True, what they have is sufficient to match GbE speed. But internal I/O 
throughput still matters for resilvering RAID arrays, scrubbing, local data 
analysis/processing, etc. In their case they have three 15-drive RAID-6 arrays 
per pod. Even if their layout is optimal and they put only 5 drives per array 
on the PCI bus (to minimize that number) and the other 10 behind PCI-E links, 
the PCI bus's ~100MB/s of practical bandwidth is shared by those 5 drives, 
i.e. 20MB/s per 1.5TB drive, so resilvering one of their arrays is going to 
take a minimum of about 20.8 hours.
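
For what it's worth, here is the back-of-the-envelope arithmetic as a quick 
Python snippet. The ~100MB/s shared-PCI figure, 5 drives on that bus, and 
1.5TB capacity are just the assumptions above; a real resilver would also be 
limited by the drives themselves and controller overhead:

    # Rough resilver-time estimate under the assumptions above:
    # ~100 MB/s of practical PCI bandwidth shared by 5 drives, 1.5 TB each.
    pci_bandwidth_mb_s = 100.0       # practical shared PCI bandwidth (assumed)
    drives_on_pci = 5                # drives sharing that bus (assumed)
    drive_capacity_mb = 1.5e6        # 1.5 TB per drive, in MB

    per_drive_mb_s = pci_bandwidth_mb_s / drives_on_pci          # 20 MB/s
    resilver_hours = drive_capacity_mb / per_drive_mb_s / 3600   # ~20.8 h

    print("%.1f MB/s per drive, %.1f hours minimum" %
          (per_drive_mb_s, resilver_hours))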

-mrb
