On Thu, May 27, 2010 at 2:39 AM, Marc Bevand <m.bev...@gmail.com> wrote:
> Hi,
>
> Brandon High <bhigh <at> freaks.com> writes:
>>
>> I only looked at the Megaraid 8888 that he mentioned, which has a PCIe
>> 1.0 4x interface, or 1000MB/s.
>
> You mean x8 interface (theoretically plugged into that x4 slot below...)
>
>> The board also has a PCIe 1.0 4x electrical slot, which is 8x
>> physical. If the card was in the PCIe slot furthest from the CPUs,
>> then it was only running 4x.

The tests were done with both cards connected to the PCIe 2.0 x8 slot #6,
which connects directly to the Intel 5520 chipset.

I completely overlooked the differences between PCIe 1.0 and 2.0. My fault.
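
For my own reference, a quick back-of-the-envelope of the raw per-direction
numbers I mixed up (PCIe 1.0 is ~250MB/s per lane, 2.0 is ~500MB/s, before
protocol overhead; just a rough sketch, nothing measured here):

# Raw per-direction PCIe bandwidth, ignoring protocol overhead.
# PCIe 1.0: 2.5GT/s with 8b/10b encoding -> ~250MB/s per lane.
# PCIe 2.0: 5.0GT/s with 8b/10b encoding -> ~500MB/s per lane.
PER_LANE_MBS = {"1.0": 250, "2.0": 500}

def pcie_bw_mbs(gen, lanes):
    return PER_LANE_MBS[gen] * lanes

print(pcie_bw_mbs("1.0", 4))   # 1000 MB/s -- the x4 slot discussed above
print(pcie_bw_mbs("2.0", 8))   # 4000 MB/s -- slot #6 on the 5520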


>
> If Giovanni had put the Megaraid 8888 in this slot, he would have seen
> an even lower throughput, around 600MB/s:
>
> This slot is provided by the ICH10R which as you can see on:
> http://www.supermicro.com/manuals/motherboard/5500/MNL-1062.pdf
> is connected to the northbridge through a DMI link, an Intel-
> proprietary PCIe 1.0 x4 link. The ICH10R supports a Max_Payload_Size
> of only 128 bytes on the DMI link:
> http://www.intel.com/Assets/PDF/datasheet/320838.pdf
> And as per my experience:
> http://opensolaris.org/jive/thread.jspa?threadID=54481&tstart=45
> a 128-byte MPS allows using just about 60% of the theoretical PCIe
> throughput, that is, for the DMI link: 250MB/s * 4 links * 60% = 600MB/s.
> Note that the PCIe x4 slot supports a larger, 256-byte MPS but this is
> irrelevant as the DMI link will be the bottleneck anyway due to the
> smaller MPS.
>
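If I follow that, the estimate boils down to something like the lines below
(the ~60% efficiency figure is the empirical one from your thread, not
something I've measured on this box):

# Back-of-the-envelope DMI ceiling, assuming Marc's empirical ~60%
# efficiency for a 128-byte Max_Payload_Size.
per_lane_mbs = 250        # one PCIe 1.0 lane after 8b/10b encoding
dmi_lanes = 4             # DMI is effectively a PCIe 1.0 x4 link
mps_efficiency = 0.60     # ~60% of theoretical with MPS = 128 bytes

print(per_lane_mbs * dmi_lanes * mps_efficiency)   # 600.0 MB/s
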
>> > A single 3Gbps link provides in theory 300MB/s usable after 8b-10b
>> > encoding, but practical throughput numbers are closer to 90% of this
>> > figure, or 270MB/s.
>> > 6 disks per link means that each disk gets allocated 270/6 = 45MB/s.
>>
>> ... except that a SFF-8087 connector contains four 3Gbps connections.
>
> Yes, four 3Gbps links, but 24 disks per SFF-8087 connector. That's
> still 6 disks per 3Gbps (according to Giovanni, his LSI HBA was
> connected to the backplane with a single SFF-8087 cable).


Correct. The backplane on the SC846E1 only has one SFF-8087 cable to the HBA.
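
So with one x4 wide port feeding all 24 bays, the per-disk share works out
roughly as below (same 90%-of-300MB/s figure as in your estimate; a sketch,
not a measurement):

# Per-disk share of a single SFF-8087 wide port (4 x 3Gbps PHYs),
# assuming the 24 bays are spread evenly over the 4 PHYs (6 bays each)
# and ~90% practical efficiency on top of the 8b/10b-encoded 300MB/s.
phy_mbs = 300             # one 3Gbps link after 8b/10b encoding
efficiency = 0.90
disks_per_phy = 24 // 4   # 6 bays share each PHY on this backplane

print(phy_mbs * efficiency / disks_per_phy)   # 45.0 MB/s per disk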


>> It may depend on how the drives were connected to the expander. You're
>> assuming that all 18 are on 3 channels, in which case moving drives
>> around could help performance a bit.
>
> True, I assumed this and, frankly, this is probably what he did by
> using adjacent drive bays... A better solution would be to spread
> the 18 drives in a 5+5+4+4 config so that the 2 most congested 3Gbps
> links are shared by only 5 drives instead of 6, which would boost the
> throughput by 6/5 = 1.2x. That would change my first overall 810MB/s
> estimate to 810*1.2 = 972MB/s.
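
A tiny sketch of that rebalancing, just to see the two layouts side by side
(it assumes, as in your estimate, that the busiest link sets the per-disk
ceiling and every disk is pushed equally):

# Aggregate estimate for 18 drives over the 4 expander links, assuming
# ~270MB/s per 3Gbps link and that the most loaded link gates every disk.
phy_mbs = 270

def aggregate_mbs(drives_per_link):
    per_disk = phy_mbs / max(drives_per_link)
    return per_disk * sum(drives_per_link)

print(aggregate_mbs([6, 6, 6, 0]))   # 810.0 MB/s -- current 6+6+6 layout
print(aggregate_mbs([5, 5, 4, 4]))   # 972.0 MB/s -- rebalanced 5+5+4+4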

The chassis has 4 columns of 6 disks. The 18 disks I was testing were
all on columns #1, #2, and #3.

Column #0 still has a pair of SSDs and more disks which I haven't used
in this test. I'll try to move things around to make use of all four
3Gbps links and test again.

I've been told that Supermicro is going to release a 6Gb/s backplane
based on the LSI SAS2X36 expander chip in the near future.

Good thing this is still a lab experiment. Thanks very much for the
invaluable help!

-- 
Giovanni