Hi,

     I wrote a sample test routine to test a TIPC SOCK_RDM socket.
The server routine receives packets, drops them, and then sleeps for
10 ms:
        while (1) {
                recv(sd, buf, sizeof(buf), 0);  /* receive and discard the packet */
                usleep(10000);                  /* sleep 10 ms */
        }
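
  For reference, the full server looks roughly like this. It's a minimal
sketch: service type 18888 and instance 17 are arbitrary values I picked
for the test, and error checking is trimmed for brevity:

        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <linux/tipc.h>

        int main(void)
        {
                struct sockaddr_tipc addr;
                char buf[1024];
                int sd = socket(AF_TIPC, SOCK_RDM, 0);

                memset(&addr, 0, sizeof(addr));
                addr.family = AF_TIPC;
                addr.addrtype = TIPC_ADDR_NAMESEQ;
                addr.addr.nameseq.type = 18888;   /* arbitrary service type */
                addr.addr.nameseq.lower = 17;     /* arbitrary instance */
                addr.addr.nameseq.upper = 17;
                addr.scope = TIPC_ZONE_SCOPE;

                bind(sd, (struct sockaddr *)&addr, sizeof(addr));

                while (1) {
                        recv(sd, buf, sizeof(buf), 0);  /* receive and discard */
                        usleep(10000);                  /* sleep 10 ms */
                }
        }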

  The client sends packets continuously. I expected the client to block
once the receive buffer is full, but in fact the client keeps sending,
and I found that some packets were lost after a few minutes.
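
  The client side is roughly the sketch below, again with arbitrary
type/instance values matching the server above and error handling
trimmed:

        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <linux/tipc.h>

        int main(void)
        {
                struct sockaddr_tipc addr;
                char buf[100] = "test";
                int sd = socket(AF_TIPC, SOCK_RDM, 0);

                memset(&addr, 0, sizeof(addr));
                addr.family = AF_TIPC;
                addr.addrtype = TIPC_ADDR_NAME;
                addr.addr.name.name.type = 18888;  /* must match the server */
                addr.addr.name.name.instance = 17;
                addr.addr.name.domain = 0;         /* look up anywhere */

                while (1) {
                        /* I expected this call to block once the receiver's
                           buffer fills up, but it keeps returning success */
                        if (sendto(sd, buf, sizeof(buf), 0,
                                   (struct sockaddr *)&addr, sizeof(addr)) < 0)
                                perror("sendto");
                }
        }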

From the Linux man page:

SOCK_RDM   Provides a reliable datagram layer that does not guarantee
           ordering.

Am I misunderstanding the documentation, or is this a bug in TIPC?

 Thanks for your help,

                                                        icehong
