On Sat, Sep 5, 2009 at 12:30 AM, Marc Bevand <m.bev...@gmail.com> wrote:

> Tim Cook <tim <at> cook.ms> writes:
> >
> > What's the point of arguing what the back-end can do anyways?  This is
> > bulk data storage.  Their MAX input is ~100MB/sec.  The backend can more
> > than satisfy that.  Who cares at that point whether it can push 500MB/s
> > or 5000MB/s?  It's not a database processing transactions.  It only
> > needs to be able to push as fast as the front-end can go.  --Tim
>
> True, what they have is sufficient to match GbE speed. But internal I/O
> throughput matters for resilvering RAID arrays, scrubbing, local data
> analysis/processing, etc. In their case they have 3 15-drive RAID6 arrays
> per pod. If their layout is optimal they put 5 drives on the PCI bus (to
> minimize this number) & 10 drives behind PCI-E links per array, so the PCI
> bus's ~100MB/s practical bandwidth is shared by 5 drives, i.e. 20MB/s per
> (1.5TB-)drive, so it is going to take a minimum of 20.8 hours to resilver
> one of their arrays.
>
> -mrb
>
>
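
His arithmetic checks out, for what it's worth.  Here's a quick
back-of-the-envelope sketch (Python, purely illustrative; the 1.5TB drive
size, the ~100MB/s practical PCI bandwidth, and the 5-drive split are all
taken straight from his numbers):

  # Minimum time to resilver one drive in the layout Marc describes.
  DRIVE_CAPACITY_MB = 1.5e6      # 1.5 TB drive, expressed in MB
  PCI_BUS_MB_PER_S = 100         # practical PCI bus bandwidth from the post
  DRIVES_ON_PCI_BUS = 5          # drives sharing that bus in the optimal layout

  per_drive = PCI_BUS_MB_PER_S / DRIVES_ON_PCI_BUS        # 20 MB/s per drive
  hours = DRIVE_CAPACITY_MB / per_drive / 3600            # ~20.8 hours
  print(f"{per_drive:.0f} MB/s per drive, ~{hours:.1f} h minimum resilver")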
But none of that matters.  The data is replicated at a higher layer,
combined with RAID6.  They'd have to see a triple disk failure across
multiple arrays at the same time...  They aren't concerned with performance;
the home users they're backing up aren't ever going to get anything remotely
close to gigE speeds.  The absolute BEST case scenario *MIGHT* push 20 Mbit/s
if the end-user is lucky enough to have FiOS or DOCSIS 3.0 in their area, and
has large files with a clean link.

Even while rebuilding two failed disks, that setup will push 2MB/sec all day
long.
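
As a rough sanity check on the front-end side (assuming, per the above, a
best-case 20 Mbit/s uplink per user and the ~100MB/s GbE front end mentioned
earlier in the thread):

  # Per-user ingest vs. the GbE front end (assumed ~100 MB/s usable).
  BEST_CASE_UPLINK_MBIT = 20     # best-case end-user uplink from above
  FRONT_END_MB_PER_S = 100       # ~GbE front-end ceiling from the thread

  per_user = BEST_CASE_UPLINK_MBIT / 8                    # ~2.5 MB/s per user
  users = FRONT_END_MB_PER_S / per_user                   # ~40 such users
  print(f"~{per_user:.1f} MB/s per user; "
        f"~{users:.0f} users to saturate the front end")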

--Tim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
