Richard Skelton writes:
> iprb0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
> inet 172.29.216.232 netmask fffffe00 broadcast 172.29.217.255
> ether 0:d0:b7:a7:1:43
> e1000g1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4
> inet 172.29.216.233 netmask fffffe00 broadcast 172.29.217.255
> ether 0:e:c:50:f8:2a
Having two separate interfaces up on the same subnet but without IPMP
configured is a little odd. Are you sure that's what you wanted?
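If you do want both NICs active on that subnet, the usual way is an
IPMP group. A minimal sketch, reusing your interface names with an
illustrative group name:

   ifconfig iprb0 group ipmp0
   ifconfig e1000g1 group ipmp0

To make that persist across reboots, add the same "group ipmp0" to
the corresponding /etc/hostname.* files.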
> How can I test the card further?
At this point, it sounds like it's probably a driver or hardware
problem. You'd likely need to contact the driver group for help.
As for testing, here's a start:
- If you've forced duplex and/or link speed, or you've disabled
autonegotiation on either side of the link, then set it back to
the defaults.
- Use "kstat e1000g" to dump out the statistics for this driver. If
something's going wrong with the link, that should provide clues.
(But since the statistics aren't documented, you may need to work
with the driver author to narrow it down.)
- Use "netstat -s" to get TCP/IP statistics before and after failed
transmission. Make sure that it's not a higher-level networking
problem. (Though I don't _think_ it is.)
- If the link peer (switch or other system) has accessible link
statistics, look there for problems.
- Use "snoop -rd e1000g1" to see if it's receiving anything at all
from the wire.
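For the kstat and netstat steps above, the easiest way to spot
what's moving is to snapshot and diff. A rough sketch (the file
names are arbitrary):

   kstat -p e1000g > /tmp/kstat.before
   netstat -s > /tmp/netstat.before
   # ... reproduce the failed transmission ...
   kstat -p e1000g > /tmp/kstat.after
   netstat -s > /tmp/netstat.after
   diff /tmp/kstat.before /tmp/kstat.after
   diff /tmp/netstat.before /tmp/netstat.after

Any counter that increments across the failure (error and drop
counters especially) is worth chasing.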
There've been a number of e1000g bugs in recent weeks. One crucial
bug that can cause what appear to be I/O failures is in the LSO and
hardware checksum features. I'd recommend adding this to
/kernel/drv/e1000g.conf:
tx_hcksum_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
lso_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
And then unplumb, "modunload -i 0", and "update_drv e1000g" before
replumbing; the full sequence is sketched below.
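Spelled out, that's something like the following, using e1000g1 and
the address from your ifconfig output (repeat the unplumb/plumb for
each plumbed e1000g instance):

   ifconfig e1000g1 unplumb
   modunload -i 0        # flush unloadable modules so e1000g drops out
   update_drv e1000g     # reread /kernel/drv/e1000g.conf
   ifconfig e1000g1 plumb 172.29.216.233 netmask 255.255.254.0 up

If the module is still held after the unplumb, a reboot accomplishes
the same thing.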
--
James Carlson, Solaris Networking <[EMAIL PROTECTED]>
Sun Microsystems / 35 Network Drive 71.232W Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757 42.496N Fax +1 781 442 1677