Hi,

Well, some threads do log "java.lang.OutOfMemoryError: GC overhead limit exceeded", but the process stays alive and doesn't crash. And if I connect with JConsole, the annotator threads are all alive and waiting for input, etc. I will try reducing the heap size to force a proper OOM and get a dump to analyze.
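Concretely, the plan is to restart the service JVM with a smaller heap plus the standard HotSpot dump-on-OOM options, roughly along these lines (a sketch only; the heap size and dump path are placeholders, and how the options get passed depends on your launch script):

  -Xmx512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/uima-as-heap.hprof

If it still refuses to fall over, a dump of the live process via jmap (jmap -dump:live,format=b,file=uima-as-heap.hprof <pid>) should give a comparable .hprof to load into heapAnalyzer.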
Thanks,

..meghana

On 1 December 2011 19:55, Jaroslaw Cwiklik <uim...@gmail.com> wrote:
> Did the UIMA AS process run out of memory? Do you have a JVM dump file to
> examine the memory leak? If you don't, force the OOM by reducing memory for
> the UIMA AS process and use a tool like heapAnalyzer to find the memory leak:
>
> https://www.ibm.com/developerworks/community/groups/service/html/communityview?communityUuid=4544bafe-c7a2-455f-9d43-eb866ea60091
>
> Jerry
>
> On Thu, Dec 1, 2011 at 12:25 AM, Meghana <meghana.mara...@germinait.com> wrote:
>
>> Hi Jerry,
>>
>> After last night's run, the ActiveMQ broker has consumed barely
>> 50-60MB of RAM (in JConsole), whereas the AS has used all of its
>> allocated RAM (2GB). If I only restart the AS (not the broker), it
>> works just fine!
>>
>> Thanks,
>>
>> ..meghana
>>
>> On 30 November 2011 20:28, Meghana <meghana.mara...@germinait.com> wrote:
>> > Hi Jerry,
>> >
>> > Yeah, it's the default 256M; will try increasing it. The strange thing
>> > is, restarting the broker makes no difference; the AS remains
>> > unresponsive. However, if I also restart the AS, then things flow as
>> > normal.
>> >
>> > Could this be because I'm using a 2.4.0 snapshot build instead of a
>> > stable release?
>> >
>> > Thanks,
>> >
>> > ..meghana
>> >
>> > On 30 November 2011 20:08, Jaroslaw Cwiklik <uim...@gmail.com> wrote:
>> >> Meghana, what is your broker's -Xmx setting? On Linux,
>> >> grep ACTIVEMQ_HOME/bin/activemq for ACTIVEMQ_OPTS_MEMORY. If you are
>> >> running with the default setting (256M), perhaps you need to increase
>> >> the memory.
>> >>
>> >> Jerry
>> >>
>> >> On Mon, Nov 28, 2011 at 11:55 PM, Meghana <meghana.mara...@germinait.com> wrote:
>> >>
>> >>> Hi,
>> >>>
>> >>> I have an aggregate AE deployed as an AS with a remote primitive. Both
>> >>> have their queues at the same broker. Clients send requests to the
>> >>> aggregate using the sendCAS() method. We use the default 5.4.1
>> >>> ActiveMQ version and configuration distributed with UIMA AS. The AS's
>> >>> CAS pool size is 2, and so is the client's.
>> >>>
>> >>> After working fine for ~10 hours, the AS listener received 25
>> >>> exceptions of the following form from its collocated delegates:
>> >>> 2011-11-29 01:54:55,124 ERROR as.AsListener: org.apache.uima.aae.error.UimaAsDelegateException: ----> Controller:/AnalysisAggregator Received Exception on CAS:61b4719b:133ebbc4eb3:-7fd9 From Delegate:APAnnotator : org.apache.uima.adapter.jms.activemq.JmsOutputChannel : 870
>> >>>
>> >>> Then, the AS started returning all CASes with status failed, with the
>> >>> following logs:
>> >>> Nov 29, 2011 2:41:25 AM org.apache.uima.aae.delegate.Delegate$1 Delegate.TimerTask.run
>> >>> WARNING: Timeout While Waiting For Reply From Delegate:q_async_ae Process CAS Request Timed Out. Configured Reply Window Of 1,200,000. Cas Reference Id:-3c5bfde0:133ebe967b3:-7fe6
>> >>> Nov 29, 2011 2:41:25 AM org.apache.uima.adapter.jms.client.ClientServiceDelegate handleError
>> >>> WARNING: Process Timeout - Uima AS Client Didn't Receive Process Reply Within Configured Window Of:1,200,000 millis
>> >>> Nov 29, 2011 2:41:25 AM org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl notifyOnTimout
>> >>> WARNING: Request To Process Cas Has Timed-out. Service Queue:q_async_ae.
>> >>> Cas Timed-out on host: 192.168.0.121
>> >>> Nov 29, 2011 2:41:25 AM org.apache.uima.adapter.jms.client.ActiveMQMessageSender run
>> >>> INFO: UIMA AS Client Message Dispatcher Sending GetMeta Ping To the Service
>> >>> Nov 29, 2011 2:41:25 AM org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl sendCAS
>> >>> INFO: Uima AS Client Sent PING Message To Service: q_async_ae
>> >>> Nov 29, 2011 2:41:25 AM org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl handleException
>> >>> INFO: Received Exception In Message From:UimaASClient Cas Identifier:-3c5bfde0:133ebe967b3:-7fe3
>> >>> Exception:org.apache.uima.jms.error.handler.BrokerConnectionException: Unable To Deliver CAS:-3c5bfde0:133ebe967b3:-7fe3 To Destination. Connection To Broker tcp://broker_url:61616 Has Been Lost
>> >>>
>> >>> The remote delegate hasn't logged any errors, and the broker "seems"
>> >>> to be up, but it has a few of these in its log:
>> >>> 2011-11-29 02:13:33,279 [168.0.121:49414] INFO Transport - Transport failed: java.net.SocketException: Connection reset
>> >>> java.net.SocketException: Connection reset
>> >>>     at java.net.SocketInputStream.read(SocketInputStream.java:168)
>> >>>     at org.apache.activemq.transport.tcp.TcpBufferedInputStream.fill(TcpBufferedInputStream.java:50)
>> >>>     at org.apache.activemq.transport.tcp.TcpTransport$2.fill(TcpTransport.java:575)
>> >>>     at org.apache.activemq.transport.tcp.TcpBufferedInputStream.read(TcpBufferedInputStream.java:58)
>> >>>     at org.apache.activemq.transport.tcp.TcpTransport$2.read(TcpTransport.java:560)
>> >>>     at java.io.DataInputStream.readInt(DataInputStream.java:370)
>> >>>     at org.apache.activemq.openwire.OpenWireFormat.unmarshal(OpenWireFormat.java:269)
>> >>>     at org.apache.activemq.transport.tcp.TcpTransport.readCommand(TcpTransport.java:226)
>> >>>     at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:218)
>> >>>     at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:201)
>> >>>     at java.lang.Thread.run(Thread.java:619)
>> >>>
>> >>> What could be the problem here? Should I update the ActiveMQ distro?
>> >>> They have retracted the 5.4.1 release due to the bug at
>> >>> https://issues.apache.org/jira/browse/AMQ-3491 and recommend using 5.4.3 instead.
>> >>>
>> >>> Thanks a lot,
>> >>>
>> >>> ..meghana
>> >>>
>> >>
>> >
>>
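P.S. Since the client-side settings keep coming up in this thread, here is a rough sketch of how a UIMA AS client along the lines described above gets wired up. It is only an illustration against the UimaAsynchronousEngine API as I understand it, not our actual code; the broker URL, queue name, CAS pool size and 1,200,000 ms timeout are the values from the thread, and the listener behaviour and document text are placeholders.

import java.util.HashMap;
import java.util.Map;

import org.apache.uima.aae.client.UimaAsBaseCallbackListener;
import org.apache.uima.aae.client.UimaAsynchronousEngine;
import org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl;
import org.apache.uima.cas.CAS;
import org.apache.uima.collection.EntityProcessStatus;

public class AsyncClientSketch {

  public static void main(String[] args) throws Exception {
    UimaAsynchronousEngine engine = new BaseUIMAAsynchronousEngine_impl();

    // Report per-CAS status so failures like the timeouts above are visible.
    engine.addStatusCallbackListener(new UimaAsBaseCallbackListener() {
      public void entityProcessComplete(CAS cas, EntityProcessStatus status) {
        if (status.isException()) {
          System.err.println("CAS failed: " + status.getExceptions());
        }
      }
    });

    Map<String, Object> appCtx = new HashMap<String, Object>();
    appCtx.put(UimaAsynchronousEngine.ServerUri, "tcp://broker_url:61616"); // broker from the logs
    appCtx.put(UimaAsynchronousEngine.ENDPOINT, "q_async_ae");              // aggregate's input queue
    appCtx.put(UimaAsynchronousEngine.CasPoolSize, 2);                      // client CAS pool size = 2
    appCtx.put(UimaAsynchronousEngine.Timeout, 1200000);                    // process CAS timeout, ms
    engine.initialize(appCtx);

    CAS cas = engine.getCAS();          // blocks until a CAS from the client pool is free
    cas.setDocumentText("sample text"); // placeholder document
    engine.sendCAS(cas);                // asynchronous; the reply comes back via the listener

    engine.collectionProcessingComplete();
    engine.stop();
  }
}

The listener matters because sendCAS() is asynchronous: the "Configured Reply Window Of 1,200,000" timeouts and the BrokerConnectionException shown above are only reported back through the status callback, not as a return value of sendCAS().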