I should follow up with some of my results.  I did resolve the OOM issue by
setting the thread stack size (-Xss) in the JVM to 128k.  I was able to ramp
up to 6k concurrent connections without any trouble at all.  I was testing
scalability and failover, and was able to fail these connections over to
another ActiveMQ broker in ~30 seconds.  I did not push any higher than 6k
since that was far more than I will need, but there did seem to be
significantly more headroom.
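
In case the detail helps anyone else: the only change was adding the
stack-size flag to the JVM options.  A minimal sketch, assuming the broker
is started directly with java (with the bundled Java Service Wrapper the
equivalent is an extra wrapper.java.additional.<n>=-Xss128k line in
wrapper.conf):

    # cap each thread's stack at 128k so several thousand threads fit
    # into the process address space
    java -Xss128k -Xmx1024m ...

The trade-off is only that deeply recursive code gets less stack to work
with, which never showed up as a problem in these tests.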




Hellweek wrote:
> 
> For the past 2 weeks we have been testing ActiveMQ to answer this question
> ourselves.  Here is what we found.
> 
> Creating a Connection takes one thread.
> Creating a Session takes one thread.
> Creating a non-persisted topic with a MessageListener takes one thread;
> when the topic is no longer used, the thread is returned to the system.
> Creating a persisted topic with a MessageListener takes one thread; this
> thread seems to stay around even after the producer is disposed.
> Creating a queue takes one thread per queue; when the queue is disposed,
> the thread servicing the queue is still in use.
> 
> On 32-bit Windows we were able to get up to 2,000 threads before we could
> not create any more.
> On 32-bit Linux we were able to get up to about 2,500 threads before we
> could not create any more.  It is important to note that I had to make
> ulimit changes to memory size, virtual memory size, and file handles in
> order to reach more than 1,000.
> 
> 
> On 64-bit Windows with a 64-bit JVM we were able to get up to 5,000
> threads before we could not create any more.
> On 64-bit Linux with a 64-bit JVM we were able to get up to about 6,000
> threads before we could not create any more.  Again, I had to make ulimit
> changes to memory size, virtual memory size, and file handles in order to
> get that high.
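
Those per-connection and per-session counts match what I saw in my own
testing.  For anyone who wants to reproduce them, a rough client-side
sketch along the lines below will do it; the broker URL, destination name,
and loop count are placeholders, and the broker's own thread count grows in
step with the client's (watch it with jconsole or ps while the loop runs):

  import javax.jms.*;
  import org.apache.activemq.ActiveMQConnectionFactory;

  public class ThreadGrowth {
      public static void main(String[] args) throws Exception {
          // placeholder broker address; point it at your own broker
          ConnectionFactory factory =
              new ActiveMQConnectionFactory("tcp://localhost:61616");

          Connection[] connections = new Connection[500];
          for (int i = 0; i < connections.length; i++) {
              connections[i] = factory.createConnection();
              connections[i].start();
              Session session =
                  connections[i].createSession(false, Session.AUTO_ACKNOWLEDGE);
              MessageConsumer consumer =
                  session.createConsumer(session.createTopic("TEST.LOAD"));
              consumer.setMessageListener(new MessageListener() {
                  public void onMessage(Message message) { /* ignore */ }
              });
          }
          // every connection (plus its session and consumer) pins native
          // threads, which is what eventually trips the OS thread limit
          System.out.println("live threads: " + Thread.activeCount());
          System.in.read();  // hold the connections open
      }
  }

Adding close() calls and taking a second thread count is an easy way to
check the dispose behaviour described above for persisted topics and queues.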
> 
> 
> 
> 
> Ken Ringdahl wrote:
>> 
>> I'm running a load test that intends to throw on the order of several
>> thousand connections at a 2-node broker system with failover (ActiveMQ
>> 4.1.1), configured for tcp transport only.  However, I'm just using a
>> single broker node right now and am getting out-of-memory errors at about
>> 400 connections.  I've set Xmx to 1024M and the memory is only ~110 MB
>> when this error occurs in the log.  I've pasted the exceptions below.
>> The latter two exceptions do not immediately follow the first; they come
>> maybe a minute or so later.  I've tried setting the prefetch to 1 in case
>> it's keeping messages in memory.  But, to be honest, my test doesn't send
>> messages until all of the connections have been established.  Any
>> suggestions as to what might be the problem here?
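
A side note on the prefetch attempt mentioned above: the prefetch limit
only controls how many messages the broker dispatches to a consumer ahead
of acknowledgement, so it has no effect on this particular error, which is
about native threads rather than heap.  For completeness, a rough sketch of
two ways to set it on the client (broker addresses are placeholders):

  import org.apache.activemq.ActiveMQConnectionFactory;

  public class PrefetchConfig {
      public static void main(String[] args) {
          // as a URI option on the connection URL (placeholder host) ...
          ActiveMQConnectionFactory viaUrl = new ActiveMQConnectionFactory(
                  "tcp://localhost:61616?jms.prefetchPolicy.all=1");

          // ... or programmatically, before any connections are created
          ActiveMQConnectionFactory viaCode =
                  new ActiveMQConnectionFactory("tcp://localhost:61616");
          viaCode.getPrefetchPolicy().setAll(1);
      }
  }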
>> 
>> 
>> 
>> jvm 1    | Exception in thread "ActiveMQ Transport Server:
>> tcp://localhost:61616" java.lang.OutOfMemoryError: unable to create new
>> native thread
>> jvm 1    |      at java.lang.Thread.start0(Native Method)
>> jvm 1    |      at java.lang.Thread.start(Thread.java:574)
>> jvm 1    |      at
>> org.apache.activemq.thread.DedicatedTaskRunner.<init>(DedicatedTaskRunner.java:45)
>> jvm 1    |      at
>> org.apache.activemq.thread.TaskRunnerFactory.createTaskRunner(TaskRunnerFactory.java:77)
>> jvm 1    |      at
>> org.apache.activemq.broker.TransportConnection.<init>(TransportConnection.java:174)
>> jvm 1    |      at
>> org.apache.activemq.broker.jmx.ManagedTransportConnection.<init>(ManagedTransportConnection.java:55)
>> jvm 1    |      at
>> org.apache.activemq.broker.jmx.ManagedTransportConnector.createConnection(ManagedTransportConnector.java:56)
>> jvm 1    |      at
>> org.apache.activemq.broker.TransportConnector$1.onAccept(TransportConnector.java:147)
>> jvm 1    |      at
>> org.apache.activemq.transport.tcp.TcpTransportServer.run(TcpTransportServer.java:167)
>> jvm 1    |      at java.lang.Thread.run(Thread.java:595)
>> jvm 1    | Exception in thread "RMI RenewClean-[172.16.105.110:36883]"
>> java.lang.OutOfMemoryError: unable to create new native thread
>> jvm 1    |      at java.lang.Thread.start0(Native Method)
>> jvm 1    |      at java.lang.Thread.start(Thread.java:574)
>> jvm 1    |      at
>> sun.rmi.transport.tcp.TCPChannel.free(TCPChannel.java:321)
>> jvm 1    |      at sun.rmi.server.UnicastRef.free(UnicastRef.java:395)
>> jvm 1    |      at sun.rmi.server.UnicastRef.done(UnicastRef.java:412)
>> jvm 1    |      at sun.rmi.transport.DGCImpl_Stub.dirty(Unknown Source)
>> jvm 1    |      at
>> sun.rmi.transport.DGCClient$EndpointEntry.makeDirtyCall(DGCClient.java:328)
>> jvm 1    |      at
>> sun.rmi.transport.DGCClient$EndpointEntry.access$1600(DGCClient.java:144)
>> jvm 1    |      at
>> sun.rmi.transport.DGCClient$EndpointEntry$RenewCleanThread.run(DGCClient.java:539)
>> jvm 1    |      at java.lang.Thread.run(Thread.java:595)
>> jvm 1    | Exception in thread "ActiveMQ Journal Checkpoint Worker"
>> java.lang.OutOfMemoryError: unable to create new native thread
>> jvm 1    |      at java.lang.Thread.start0(Native Method)
>> jvm 1    |      at java.lang.Thread.start(Thread.java:574)
>> jvm 1    |      at
>> edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor.addIfUnderCorePoolSize(ThreadPoolExecutor.java:429)
>> jvm 1    |      at
>> edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:857)
>> jvm 1    |      at
>> org.apache.activemq.store.journal.JournalPersistenceAdapter.doCheckpoint(JournalPersistenceAdapter.java:376)
>> jvm 1    |      at
>> org.apache.activemq.store.journal.JournalPersistenceAdapter$2.iterate(JournalPersistenceAdapter.java:129)
>> jvm 1    |      at
>> org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:101)
>> jvm 1    |      at
>> org.apache.activemq.thread.DedicatedTaskRunner.access$000(DedicatedTaskRunner.java:25)
>> jvm 1    |      at
>> org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:39)
>> 
>> 
> 
> 
