We have Camel deployed inside AMQ, providing a routing service for us. It consumes from an incoming queue, annotates each message, and then sends it on to a second queue, from which various engines collect the messages and process them. Something like this:
Producer -> activemq:IN_QUEUE -> Camel (annotation code) -> activemq:OUT_QUEUE -> engines

As we push messages through this, we see very large numbers of TIME_WAIT sockets: into the thousands, which then leads to a shortage of file descriptors and bad things happening. If I completely remove Camel processing from the mix, everything behaves nicely, but if I replace our annotation code with a simple route straight from IN_QUEUE to OUT_QUEUE we still see all the TIME_WAIT sockets. It looks awfully like Camel is opening a new socket for each message it processes. I understand that TIME_WAIT is a normal part of TCP/IP, but having this many of them is hopeless.

Around this issue, a couple of things occur to me:

1) Is Camel really opening that many sockets deliberately (via the activemq component), or is this a bug?

2) What I'd really like is to avoid sockets for this processing altogether. In my simple view of the world, each of the two queues has a "socket-facing end" and an "internal end", and our Camel processing only touches the internal ends, so I thought I could use a vm/seda-type component instead. Sadly that doesn't seem to work: messages simply never get picked up from the IN queue if I specify seda:IN_QUEUE. Are my expectations wrong here, or have I just not got the configuration quite right?

We're currently running a slightly old snapshot build of AMQ 5.1 with a snapshot build of Camel 1.2 (I've just tried dropping in the 1.3.0 RC 2 build of Camel and that didn't help), all on Java 5.

Thanks in advance for any suggestions or guidance. For reference, simplified sketches of the routes described above are below.
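A simplified version of our main route (Java DSL) looks roughly like this; the class name and the processor body are just placeholders standing in for our real annotation code:

    import org.apache.camel.Exchange;
    import org.apache.camel.Processor;
    import org.apache.camel.builder.RouteBuilder;

    public class AnnotationRoute extends RouteBuilder {
        public void configure() throws Exception {
            // Consume from the broker's IN queue, annotate each message,
            // then publish it to the OUT queue for the engines to collect.
            from("activemq:IN_QUEUE")
                .process(new Processor() {
                    public void process(Exchange exchange) throws Exception {
                        // Stand-in for our real annotation logic
                        exchange.getIn().setHeader("annotated", Boolean.TRUE);
                    }
                })
                .to("activemq:OUT_QUEUE");
        }
    }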
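The stripped-down test route mentioned above, with our annotation code removed entirely, is just:

    // Even with no processing at all, simply moving messages from one
    // queue to the other leaves thousands of sockets in TIME_WAIT.
    from("activemq:IN_QUEUE").to("activemq:OUT_QUEUE");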
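And this is roughly the seda variant I tried (the exact endpoint URIs may have differed slightly):

    // What I hoped would work: consuming from the "internal end" of the
    // queue without going through a socket. With this route in place,
    // nothing is ever picked up from IN_QUEUE.
    from("seda:IN_QUEUE").to("seda:OUT_QUEUE");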
-Dominic