Hi All,

To make a long story short, I am developing a toy version of an ORB, and its biggest problem is slow message exchange over TCP/IP.

There is an object called 'endpoint' on both sides, each with incoming and outgoing message queues. The endpoint object has a socket assigned to it, with TCP_NODELAY set:

conn.setsockopt(socket.IPPROTO_TCP,socket.TCP_NODELAY,1)
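
For context, that socket is created and connected in the usual way before the option is set; roughly like this (host and port are just placeholders, and the server side gets its conn from accept()):

    import socket

    # Plain TCP client socket for the endpoint (placeholder host/port).
    conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    conn.connect(('server_host', 3333))
    # Disable Nagle's algorithm so small messages are sent immediately.
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)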

The endpoint runs two separate threads, one dedicated to reading messages from the socket and one to writing messages into it, as shown below:


    def _process_incoming(self):
        """Reader thread: pull messages off the socket and queue them."""
        try:
            while not self.stop_requested.isSet():
                data = self.read_str()
                # Retry the put until it succeeds or a stop is requested,
                # so a temporarily full queue does not drop the message.
                while not self.stop_requested.isSet():
                    try:
                        self.incoming.put(data, 1)
                        break
                    except orb.util.smartqueue.Full:
                        pass
                if not self.stop_requested.isSet():
                    if self.router:
                        self.router.on_message_arrived(self)
        except Exception, e:
            if self.router:
                if not isinstance(e, TransportClosedError):
                    self.router.logger.error(dumpexc(e))
                self.router.unregister_endpoint(self)
            self.shutdown()
            raise SystemExit(0)

    def _process_outgoing(self):
        """Writer thread: take queued messages and write them to the socket."""
        try:
            while not self.stop_requested.isSet():
                data_ok = False
                # Wait for an outgoing message, retrying until one arrives
                # or a stop is requested.
                while not self.stop_requested.isSet():
                    try:
                        data = self.outgoing.get(1)
                        data_ok = True
                        break
                    except orb.util.smartqueue.Empty:
                        pass
                if data_ok:
                    self.write_str(data)
        except Exception, e:
            if self.router:
                if not isinstance(e, TransportClosedError):
                    self.router.logger.error(dumpexc(e))
                self.router.unregister_endpoint(self)
            self.shutdown()
            raise SystemExit(0)
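
For completeness, the endpoint starts these two loops as daemon threads, roughly like this (a simplified sketch using the standard threading module; the real constructor does a bit more):

    import threading

    def start(self):
        # One reader and one writer thread per endpoint; both check
        # stop_requested so they can be shut down cleanly.
        self.reader = threading.Thread(target=self._process_incoming)
        self.writer = threading.Thread(target=self._process_outgoing)
        self.reader.setDaemon(True)
        self.writer.setDaemon(True)
        self.reader.start()
        self.writer.start()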


The main point is that the sender does not have to wait for the message to actually be written to the socket (unless the outgoing queue becomes full).
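
In other words, on the caller's side a send is just a queue put, something like this (the method name is illustrative, not the real API):

    def send_message(self, data):
        # Returns as soon as the message is queued; the writer thread
        # does the actual socket write. This only blocks if the
        # outgoing queue is already full.
        self.outgoing.put(data)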

When I send a message and wait for its answer before sending the next one, I can only get about 130 request+response message pairs per second. The rate appears to be the same for message sizes from 77 bytes up to 16 KB.

However, if I send 100 outgoing messages first and then read back all the answers, the speed goes up to 1300 message pairs/sec. I suspect this has something to do with TCP/IP itself. Since this will be used for RPC/RMI, it is very important to reduce the time needed to exchange a message pair. Is there any way I can speed this up?
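
To make the two measurements concrete, the test loops look roughly like this (send_message/recv_message are just stand-ins for the real calls):

    # Pattern 1: strict ping-pong, one request at a time -> ~130 pairs/sec
    for i in xrange(1000):
        endpoint.send_message(request)
        reply = endpoint.recv_message()

    # Pattern 2: pipeline 100 requests, then collect the answers -> ~1300 pairs/sec
    for i in xrange(100):
        endpoint.send_message(request)
    replies = [endpoint.recv_message() for i in xrange(100)]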

Or do you think that this speed is the best I can get? My friend tried to do the same thing in Java, and he said that he could reach 1000 messages/sec. (Is there a special "socket.flush()" method in Java that we do not have in Python?)

Thanks,

  Laszlo




