I'm writing an application using Mina which continuously creates a large
number of short-lived UDP sockets with different local ports but the same
destination address/port.

What I need to be able to do is create a UDP socket with either a unique or
an ephemeral local port, send one packet to the same destination
address/port, wait for a response, then close the socket.  Do this, say,
10 million times.

Using Mina 2.0, what I've done is call connect() again and again on the same
NioDatagramConnector.  I would use NioDatagramAcceptor.newSession(), except
that it's not the remote address that varies here, it's the local address.

What I've found is that connect() eats up about 36% of the total application
CPU time, and that the process pins the CPU at 100% ... which would be fine,
except I'm only seeing about 3000 round trips/second on trivial 20-byte
messages.  I know that's a lot of system calls, but even so, if I were
writing this in C/C++, I think I could do much better than 3000 round
trips/second.  At least 10K-20K round trips/second should be readily
achievable.  Of course I need to rate limit so that fewer than 64K fds are
in use at a time.
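For the rate limiting, one simple approach is a counting semaphore sized
below the fd ceiling: acquire a permit before each connect() and release it
from the session-closed callback.  A minimal sketch -- MAX_IN_FLIGHT and the
beforeConnect()/afterClose() hook names are my own, not anything from Mina:

```java
import java.util.concurrent.Semaphore;

public class FdLimiter {
    // Cap concurrent open sockets well below the ~64K fd/ephemeral-port
    // ceiling.  The exact value is an assumed tunable.
    private static final int MAX_IN_FLIGHT = 50_000;
    private final Semaphore permits = new Semaphore(MAX_IN_FLIGHT);

    // Call before connect(); blocks when the cap is reached.
    public void beforeConnect() throws InterruptedException {
        permits.acquire();
    }

    // Call from the session-closed callback.
    public void afterClose() {
        permits.release();
    }

    public int inFlight() {
        return MAX_IN_FLIGHT - permits.availablePermits();
    }

    public static void main(String[] args) throws Exception {
        FdLimiter limiter = new FdLimiter();
        limiter.beforeConnect();
        System.out.println(limiter.inFlight());  // one socket "open"
        limiter.afterClose();
        System.out.println(limiter.inFlight());  // back to zero
    }
}
```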

I'm guessing that one of the performance costs of connect() is that it
spins up machinery to perform an IoFuture callback ... but with UDP
sockets, async notification on connect() doesn't seem necessary, as no
packets are actually exchanged -- the call is just a reservation in the
kernel for the datagram socket.

Using the java nio DatagramChannel directly, I could call connect() myself,
and I'm sure it would be very fast.  But I see no (obvious) way to construct
an IoSession from an already existing and bound DatagramChannel.
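For what it's worth, the raw-NIO path I have in mind looks roughly like
this, using only the JDK's DatagramChannel with no Mina involved.  The
loopback echo peer is a stand-in for the real fixed remote so the sketch is
self-contained:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

public class RawNioRoundTrip {
    public static void main(String[] args) throws Exception {
        // Stand-in echo peer on loopback; in the real application this
        // would be the fixed remote address/port.
        DatagramChannel echo = DatagramChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        InetSocketAddress remote = (InetSocketAddress) echo.getLocalAddress();

        // One short-lived client socket: ephemeral local port, connect()
        // (cheap kernel-side reservation), one send, one blocking receive.
        DatagramChannel client = DatagramChannel.open();
        client.connect(remote);
        client.write(ByteBuffer.wrap("ping".getBytes()));

        // Echo side: read the datagram and send it straight back.
        ByteBuffer buf = ByteBuffer.allocate(64);
        InetSocketAddress from = (InetSocketAddress) echo.receive(buf);
        buf.flip();
        echo.send(buf, from);

        // Client blocks until the reply arrives, then closes the socket.
        ByteBuffer reply = ByteBuffer.allocate(64);
        client.read(reply);
        reply.flip();
        System.out.println(new String(reply.array(), 0, reply.limit()));

        client.close();
        echo.close();
    }
}
```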

Advice?

-- 
View this message in context: 
http://www.nabble.com/performance-of-NioDatagramConnector.connect%28%29-tp14683147s16868p14683147.html
Sent from the Apache MINA Support Forum mailing list archive at Nabble.com.
