Rico,
What is the device ID of the bge interface? Not all Broadcom chips
support jumbo frames, and as far as I know, jumbo frame support in s10
is not yet complete.
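To answer the device-ID question, something like the following should work on
Solaris 10 (a sketch only; the bge instance number and grep patterns are
illustrative and should be adjusted for your box):

```shell
# Map the bge driver instances to their device-tree paths
grep bge /etc/path_to_inst

# Dump the PROM device-tree properties and look for the Broadcom
# vendor-id (0x14e4) together with the adjacent device-id property
prtconf -pv | egrep -i 'vendor-id|device-id'
```

The device-id value reported next to vendor-id 14e4 identifies which BCM57xx
variant the V20z actually has.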
Thanks,
Raymond
Rico Magsipoc wrote:
I'm hoping to get the community's thoughts on some issues I've run into
implementing jumbo frames on an x86 V20z Solaris 10 box using the Broadcom
BCM579x chipset. The details--
cerebro-B-017:tmp[138]> cat /etc/release
Solaris 10 6/06 s10x_u2wos_09a X86
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 09 June 2006
cerebro-B-017:tmp[142]> tail /etc/system
*
* set nautopush=32
* set maxusers=40
*
* To set a variable named 'debug' in the module named 'test_module'
*
* set test_module:debug = 0x13
set bge:bge_jumbo_enable = 1
set rlim_fd_max = 65536
set rlim_fd_cur = 65536
cerebro-B-017:tmp[143]> modinfo | grep bge
119 ffffffffeffe3000 10210 162 1 bge (BCM579x driver v0.47)
cerebro-B-017:tmp[145]> ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
index 1
inet 127.0.0.1 netmask ff000000
bge0: flags=1001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FIXEDMTU> mtu 9000 index 2
inet 192.168.5.17 netmask fffffc00 broadcast 192.168.7.255
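Before measuring NFS throughput, it may be worth confirming that large frames
actually traverse the path. A sketch using Solaris ping (nfs-server is a
placeholder for the real NFS server hostname; note that without a
don't-fragment flag, IP fragmentation can mask an MTU mismatch):

```shell
# Send 5 ICMP packets with an 8000-byte payload, larger than a
# standard 1500-byte MTU but within the 9000-byte jumbo MTU
ping -s nfs-server 8000 5
```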
Now, with the above settings, I run 'dd if=/dev/zero
of=/ifs/tmp/`hostname`.testfile bs=1024k count=10000' against an NFS-mounted
filesystem. Note that the NFS server and my edge switch are also using jumbo
frames. My physical connection is as follows--
NFS Svr <---> Foundry 48G <---> V20z box
My throughput hovers around a very poor 1.5MB/sec. If I change my MTU back to
the default 1500, throughput jumps to approximately 115MB/sec. Counters on my
Foundry switch show a large number of CRC align errors, fragments, and
jabbers. I've swapped my Cat6 cables several times, ruling out the physical
connection as the cause. Can anyone shed some light on this serious
performance degradation?
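It may also help to check the host-side counters to corroborate what the
Foundry switch reports. A sketch, assuming the bge0 instance shown above
(exact kstat counter names vary by driver revision, hence the loose grep):

```shell
# Per-driver statistics for bge instance 0; look for error,
# CRC, and jabber counters
kstat -p bge:0 | egrep -i 'err|crc|jabber'

# Per-interface input/output error totals
netstat -i
```

If the errors appear only on the switch side and not in kstat, that would
point at the transmit path of the bge NIC rather than the cable or switch.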
This message posted from opensolaris.org
_______________________________________________
networking-discuss mailing list
[email protected]