Hi Icehong,
 
You may wish to post the snippets of your actual code that exhibit this
behaviour.
 
What you describe should not occur: packets should not be lost on a
SOCK_RDM socket unless you have set the TIPC_SRC_DROPPABLE or
TIPC_DEST_DROPPABLE option on your sending socket using setsockopt().
But it is possible you have discovered a bug.
 
Questions:
1. Is it possible that the link between the sender and receiver went
down at any point during the test?
2. What version of TIPC are you using?
3. What operating system and what version of that OS are you using?
 
Elmer
 


________________________________

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of ? ?
Sent: Thursday, March 13, 2008 10:17 AM
To: tipc-discussion@lists.sourceforge.net
Subject: [tipc-discussion] About SOCK_RDM socket


Hi,

     I wrote a small test routine to exercise a TIPC SOCK_RDM socket.
The server receives each packet, discards it, and then sleeps for
10 ms:

        while (1) {
            recv(sd, buf, sizeof(buf), 0);  /* receive and discard */
            usleep(10000);                  /* sleep 10 ms */
        }
 
  Meanwhile the client sends packets continuously. I expected that
once the receive buffer filled up, the client would block, but in
fact the client keeps sending, and after a few minutes I found that
some packets had been lost.
 
From the Linux man page:
SOCK_RDM   Provides a reliable datagram layer that does not guarantee
ordering.

Am I misunderstanding the documentation, or is this a bug in TIPC?
 
Thanks for your help,
                                                        icehong 
_______________________________________________
tipc-discussion mailing list
tipc-discussion@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/tipc-discussion