Nope, the 4-port card is plain PCI (66MHz-capable),
http://www.addonics.com/products/host_controller/adsa4r5.asp taken from
their parts list, so 15 drives will all be running through that single bus:
266MB/s at 66MHz, and only 133MB/s in the common 33MHz slot. They will see a
bottleneck. Me, I would get a board with three PCIe x4 slots, use two of
them for 8-port SATA controllers, and save the third for a dual-port 10GigE
card.
I don't agree with your 60MB/s on GigE though; I have seen up to 85MB/s with
a decent TOE and quick disks. Hit this from multiple hosts (multiple NICs)
and you will see bottleneck issues at the disks...
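
Rough numbers in Python, using assumed figures (~100MB/s sustained per
drive, the PCI bus rates above):

    # back-of-envelope: 15 drives sharing one 4-port PCI card's bus
    DRIVES = 15
    DRIVE_MBS = 100                                 # assumed sustained rate per disk
    BUSES = {"33MHz PCI": 133, "66MHz PCI": 266}    # 32-bit PCI peak MB/s

    demand = DRIVES * DRIVE_MBS                     # what the disks could deliver
    for name, bus_mbs in BUSES.items():
        print(f"{name}: {bus_mbs / DRIVES:.0f}MB/s per drive, "
              f"{100 * bus_mbs / demand:.0f}% of aggregate disk throughput")

Either way, the bus, not the disks, is the limit on that card.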

On Thu, Sep 3, 2009 at 12:40 PM, Jake Anderson <ya...@vapourforge.com> wrote:

> On 03/09/09 10:37, Mark Walkom wrote:
>
>> I was thinking the same, but I reckon because they are just backing
>> up/archiving data it wouldn't be too bad, i.e. they aren't looking for
>> huge performance, just huge, cheap storage.
>>
>>
>> 2009/9/3 Morgan Storey <m...@morganstorey.com>
>>
>>
>>
>>> I know I am a geek, but that is hot.
>>> I am wondering if they see any throughput issues with the SATA backplanes
>>> and PCI SATA cards.
>>>
>>>
> The backplanes are probably fine; each SATA cable is good for ~300MB/s,
> and most physical disks couldn't hope to hit that.
> Let's see what their maximum transfer rate is:
> each drive can hit 103MB/s average (better than I thought), and
> each SATA channel will max out at ~250MB/s of usable throughput.
>
> So they are going to be losing some bandwidth there.
> Their backplanes take 5 disks each, behind a port multiplier on a single
> SATA channel, so a potential bandwidth of, call it, 500MB/s per backplane
> (5 x 103MB/s). 50% of that is out the window: the actual bandwidth per 5
> disks is going to be the channel's 250MB/s.
>
> They have 9 of these channels, for a total available bandwidth of
> 2250MB/s (~2.2 gigabytes a second; that'll rip some DVDs fast).
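>
> A quick sanity check in Python (same assumed figures: 103MB/s per disk,
> ~250MB/s per port-multiplied channel):
>
>     disks_per_backplane = 5
>     backplanes = 9
>     disk_mbs = 103          # average sustained rate per drive
>     channel_mbs = 250       # one SATA channel feeds each backplane
>
>     raw = disks_per_backplane * disk_mbs     # 515MB/s the disks could push
>     usable = min(raw, channel_mbs)           # capped by the channel
>     print(f"per backplane: {usable} of {raw}MB/s")
>     print(f"total: {backplanes * usable}MB/s")   # 2250MB/s over 9 channels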
>
> Standard PCI tops out at 133MB/s, so that's out ;->
>
> It looks like they are using PCI-E SATA cards.
> The mobo they have and the cards they are using indicate they have 3x of
> something like this:
> http://www.syba.com/index.php?controller=Product&action=Info&Id=861
> which maxes out at 250MB/s each (a single PCI-E 1x lane),
> plus one 4-port card, which, if it comes from that mob, must be PCI by the
> look of things,
> but I'll assume it's PCI-E with at least 4 lanes.
>
> So the total transfer rate is 1750MB/s
> (or 883MB/s if the 4-port card really is PCI),
>
> vs. the disks' total possible transfer rate of ~4500MB/s (45 drives at
> ~100MB/s each); they aren't doing *too* badly.
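>
> The controller-side sum, under the same assumptions (250MB/s per PCI-E 1x
> lane, 133MB/s for plain PCI):
>
>     syba_cards = 3
>     syba_mbs = 250                  # each on a PCI-E 1x lane
>     four_port_best = 4 * 250        # if the 4-port card is PCI-E x4
>     four_port_worst = 133           # if it is plain PCI
>
>     print(syba_cards * syba_mbs + four_port_best)    # 1750MB/s
>     print(syba_cards * syba_mbs + four_port_worst)   # 883MB/s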
>
> Given that on a gigabit ethernet connection you are lucky to push
> 30MB/s (or 60MB/s if you tweak it), I think it's not going to be a big
> issue ;->
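>
> For reference, the hard ceiling on GigE (assuming standard 1500-byte
> frames, with roughly 94% of line rate left after ethernet/IP/TCP headers):
>
>     line_mbs = 1000 / 8       # 1Gb/s = 125MB/s raw
>     efficiency = 0.94         # assumed framing + header overhead
>     print(f"{line_mbs * efficiency:.0f}MB/s")   # ~118MB/s; real-world lands well below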
>
>
> If they wanted more oomph, their best bet would be to put 2x 16-port
> PCI-E 16x cards into a motherboard that supports it (most decent SLI
> motherboards will do that).
>
> Better still, one with 4x PCI-E 16x slots so that you can put some 10GigE
> cards in as well, something like
>
> http://www.aria.co.uk/Products/Components/Motherboards/Socket+AM3+(AMD)/MSI+790FX-GD70+AM3+Motherboard+with+4+x+PCIe+x+16+?productId=36604
> say (but with Intel, of course ;->).
>
> That should net you (assuming you use dual-port 10GigE NICs) a transfer
> rate out of the box of around 2400MB/s.
> Almost fast enough to spy on teh entirez intarwebz!!
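>
> Where the 2400MB/s comes from, roughly (assuming one dual-port 10GigE NIC
> driven at line rate, ~1200MB/s per port after overhead):
>
>     ports = 2
>     port_mbs = 1200           # 10Gb/s = 1250MB/s raw, minus framing
>     print(ports * port_mbs)   # ~2400MB/s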
>
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
