I have been testing Solaris to Linux performance using IPoIB and the
results have been poorer than expected. I typically get about
3.5 Gbit/sec aggregate between the Solaris host and one or more Linux
hosts. The setup is as follows:
----------------
- The Solaris host and the primary Linux host are connected to a DDR
switch via DDR interfaces.
- An SDR switch is also connected to the DDR switch
- Many other Linux hosts are connected to the SDR switch
----------------
The Solaris host is connected to the DDR switch with two dual-port DDR
cards, all four ports active. I used iperf and a few other tools for
testing.
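For reference, a typical run looks something like this (host names and
stream counts are just placeholders, not the exact commands I used):

    # on each Linux receiver
    iperf -s

    # on the Solaris sender, one client per IPoIB interface,
    # several parallel streams each, 60-second runs
    iperf -c linux-ddr1 -P 4 -t 60
    iperf -c linux-sdr1 -P 4 -t 60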
I've run tests using multiple interfaces on the Solaris host,
connecting to three or four of the Linux hosts, including the
DDR-attached host on the DDR switch. In every case I get approximately
the same performance: 3-3.5 Gbit/sec total throughput.
Linux-to-Linux tests approach the theoretical limit: about 6 Gbit/sec
to SDR hosts, and the aggregate throughput increases as I add more
Linux hosts.
While testing, I also experimented with the MTU settings. Linux
performs significantly worse (under 2 Gbit/sec) when the MTU is lowered
to the Solaris maximum of 2040. The Linux maximum MTU for IPoIB
approaches 65K.
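In case it matters for comparison, this is roughly how I check and
raise the MTU on the Linux side (ib0 is just the usual interface name
here; connected mode and the 65520 limit depend on the kernel/OFED
version in use):

    # check the current IPoIB mode and MTU
    cat /sys/class/net/ib0/mode
    ip link show ib0

    # switch to connected mode and raise the MTU
    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520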
I realize that we're not going to get fantastic performance with
IPoIB, but while we're using it, I'd like to at least get 20 Gbit/sec
aggregate from the four interfaces on the Solaris host.
Best,
Jesse