Hi,

I am testing my GLDv3 network driver on Solaris 10. In test01 of the NICDrv test 
suite, my driver often fails with a "netperf: data send error: Broken pipe" error, 
which makes the wait for session completion time out. However, the test report 
seems to indicate that some of the netperf sessions (3 out of the 20 in the example 
below, on a server with 4 GB of memory) were never successfully started in the 
first place, so they can never finish.

I am wondering why this happens. The broken pipe error does not hang the driver, 
so the test run continues and the other tests are not affected.
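
For reference on what the error itself means: "Broken pipe" is netperf reporting 
EPIPE from send(2) after the remote end of the data connection has closed or reset 
it. Below is a minimal sketch of that failure mode (my own illustration, not 
netperf or NICDrv code); it uses an AF_UNIX socketpair so it is self-contained, 
but a TCP data socket whose remote netserver has exited or been reset fails the 
same way:

#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	int sv[2];
	char buf[1460];

	/* Ignore SIGPIPE so the failed send() returns -1/EPIPE instead of killing us. */
	(void) signal(SIGPIPE, SIG_IGN);

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) {
		perror("socketpair");
		return (1);
	}

	/* Simulate the remote end of the data connection going away mid-test. */
	(void) close(sv[1]);

	(void) memset(buf, 0, sizeof (buf));

	/*
	 * Keep sending until the error surfaces; on a TCP socket the first
	 * write after the peer closes can still succeed, and a later one
	 * fails with EPIPE.
	 */
	while (send(sv[0], buf, sizeof (buf), 0) >= 0)
		;

	(void) printf("send failed: %s\n", strerror(errno));
	return (0);
}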

Tom
Test_Case_Start| 2856 tests/functional/test01/runme | 13:35:00 7813084113444 0 |
stdout|  
stdout| ASSERTION ID: functional/test01
stdout| 
stdout| DESCRIPTION: 
stdout|          Test data transmit/receive functionality under promiscuous mode
stdout|  
stdout| STRATEGY: 
stdout|          - Start multi-session TCP traffic with 65000/1460/1 byte payloads
stdout|          - During TCP data transmission, repeatedly enable/disable promiscuous mode
stdout|          - Start multi-session UDP traffic with 65000/1460/1 byte payloads
stdout|          - During UDP data transmission, repeatedly enable/disable promiscuous mode
stdout|          - All TCP/UDP sessions should pass without any errors
stdout|  
stdout| TESTABILITY:  statistical and implicit
stdout|  
stdout| 192.168.27.14 is alive
stdout| 192.168.27.24 is alive
stdout| Turning on/off promiscuous mode by using snoop scripts...
stdout| NETPERF_HOME=/opt/SUNWstc-stf/../SUNWstc-netperf2/bin/
stdout| Command:  MAXQ.auto -s 192.168.27.24 -c 192.168.27.14 -C 192.168.27.14 -d 65000 -b 65535 -M 192.168.27.0 -m r...@localhost -p nicdrv -i 1 -e 10000 -T 900 -t 0 -tr bi -S 10 -P TCP_STREAM
stdout| Checking super-user permission...
stdout| Verifying mandatory parameters...
stdout| TCP_NODELAY is off
stdout| Verifying client <-> server pairs...
stdout| Verifying MAXQ.auto is running on SUT system...
stdout| Detecting system and distributing binaries...
stdout| Setting up multicasting subnet...
stdout| delete net 224.0.0.0: gateway 172.17.139.172
stdout| add net 224.0.0.0: gateway 192.168.27.24
stdout| Multicast subnet gateway: 192.168.27.24/qlge2
stdout| Running get_TP on card=1 sess=10 time=900 for performance...
stderr| Starting netserver at port 12865
stderr| Starting netserver at port 12865
stdout| Connecting from 192.168.27.24 -> 192.168.27.14 for 10 sessions...
stdout| Connecting from 192.168.27.14 -> 192.168.27.24 for 10 sessions...
stdout| rsh 192.168.27.14 /tmp/start_netperf.sh 10 192.168.27.24 12865 192.168.27.14 900 TCP_STREAM 4 65535 65535 65000 10 0 
stdout| Waiting for 20 connections to establish...
stdout| Mon Aug  3 13:35:42 PDT 2009:multicast fired on subnet 192.168.27.0
stdout| 

stderr| netperf: data send error: Broken pipe
stderr| netperf: data send error: Broken pipe
stderr| netperf: data send error: Broken pipe
stdout| Throughput reporting...
stdout| 
stdout| Test Date: Mon Aug  3 13:35:00 PDT 2009
stdout| 
stdout| ========================= SUT info ========================= 
stdout| SunOS hope 5.10 Generic_139556-08 i86pc i386 i86pc
stdout| 
stdout| ================Begin /etc/system ========================= 
stdout| set kmem_flags = 0xf
stdout| ==================End /etc/system ========================= 
stdout| 
stdout|         SUMMARY:
stdout| =================================================
stdout| SUT          :        192.168.27.24
stdout| CLIENTS      :        192.168.27.14
stdout| SOCKET_BUFFER:        65535
stdout| MESSAGE_SIZE :        65000
stdout| PROTOCOL_TYPE:        TCP_STREAM
stdout| TCP_NODELAY  :        0
stdout| TRAFFIC_TYPE :        bi
stdout| # OF CARDS   :        1
stdout| PORT PER CARD:        1
stdout| TOTAL_SESSION:        20
stdout| timeout_short:        900
stdout| THROUGH_PUT TCP TX :        2.69 mbits/s
stdout|                 RX :        243.85 mbits/s
stdout|                 BI :        246.54 mbits/s
stdout| Finished 17 out of 20 sessions: failed