The only way I can think of to try to answer that question is to use a
profiler to see where the ActiveMQ process is spending its time.  JVisualVM
would be an easy way to do that.
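
If you'd rather script it, the sketch below connects over JMX and prints the
ten busiest threads by cumulative CPU time. The service URL (port 1099, no
credentials) and the class name are assumptions for illustration only; point
it at whatever your broker actually exposes for JMX.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Arrays;
import java.util.Comparator;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Rough sketch: connect to the broker's JMX endpoint and list the top 10
// threads by cumulative CPU time. The default URL below is an assumption;
// pass the real one as the first argument.
public class TopThreads {
    public static void main(String[] args) throws Exception {
        String url = args.length > 0 ? args[0]
                : "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi";
        try (JMXConnector connector =
                     JMXConnectorFactory.connect(new JMXServiceURL(url))) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Proxy the remote JVM's Threading MXBean.
            ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                    mbsc, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);
            Arrays.stream(threads.getAllThreadIds()).boxed()
                  .sorted(Comparator.<Long>comparingLong(threads::getThreadCpuTime)
                                    .reversed())
                  .limit(10)
                  .forEach(id -> {
                      ThreadInfo info = threads.getThreadInfo(id);
                      if (info != null) {
                          System.out.printf("%-45s cpu=%d ms  state=%s%n",
                                  info.getThreadName(),
                                  threads.getThreadCpuTime(id) / 1_000_000,
                                  info.getThreadState());
                      }
                  });
        }
    }
}

If the hot threads really are the "ActiveMQ Task-xxx" workers, a couple of
jstack dumps taken a few seconds apart should show what they are actually
running.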

Otherwise I think that question would have to go to the vendor or the
support community for the security product.
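
For what it's worth, the "ActiveMQ Task-xxx" stack quoted below is a pool
worker parked in SynchronousQueue.poll waiting for its next task; at the
moment of that dump the thread is idle, so if JMC attributes CPU to those
threads it is more likely the churn of tasks being handed to them than the
parking itself. Here's a minimal sketch (plain JDK, made-up names, not
ActiveMQ code) that reproduces that exact idle-worker stack with a
SynchronousQueue-backed cached thread pool:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustration only: an idle worker in a SynchronousQueue-backed pool shows
// the same stack as the quoted "ActiveMQ Task-220" thread (parkNanos ->
// SynchronousQueue poll -> ThreadPoolExecutor.getTask).
public class IdleWorkerDemo {
    public static void main(String[] args) throws Exception {
        AtomicInteger count = new AtomicInteger();
        // newCachedThreadPool() is a ThreadPoolExecutor over a SynchronousQueue
        // with a 60-second keep-alive -- the JDK machinery in the quoted stack.
        ExecutorService pool = Executors.newCachedThreadPool(
                r -> new Thread(r, "Demo Task-" + count.incrementAndGet()));
        pool.submit(() -> System.out.println("task ran"));

        Thread.sleep(1000);  // let the worker finish its task and go idle

        // The idle worker is now TIMED_WAITING in SynchronousQueue.poll, i.e.
        // parked until the next task arrives; it burns no CPU while parked.
        Thread.getAllStackTraces().forEach((t, stack) -> {
            if (t.getName().startsWith("Demo Task-")) {
                System.out.println(t.getName() + " " + t.getState());
                for (StackTraceElement e : stack) {
                    System.out.println("    " + e);
                }
            }
        });
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}

Run it and the printed trace should match the one you quoted essentially
frame for frame.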
On Nov 11, 2015 5:28 AM, "Basmajian, Raffi" <rbasmaj...@ofiglobal.com>
wrote:

> Hi Tim,
>
> I'll check today and report back.
>
> I suspect this may have something to do with enterprise security and
> scanning software installed on the Linux host. Is there any command to
> determine if the security process is interfering with the amq process?
>
>
>
> Sent from my Verizon Wireless 4G LTE smartphone
>
>
> -------- Original message --------
> From: Tim Bain <tb...@alumni.duke.edu>
> Date: 11/11/2015 1:41 AM (GMT-05:00)
> To: ActiveMQ Users <users@activemq.apache.org>
> Cc: asha...@vizuri.com, Kent Eudy <ke...@vizuri.com>
> Subject: Re: JMX connections creating high cpu and GC [ EXTERNAL ]
>
> Do you see the same performance impact from attaching JConsole or
> JVisualVM?
> On Nov 10, 2015 4:09 PM, "Basmajian, Raffi" <rbasmaj...@ofiglobal.com>
> wrote:
>
> > I'm throwing a Hail Mary on this one.
> >
> > We've set up a broker cluster on A-MQ 5.11 (Fuse 6.2): six master/slave
> > pairs in a full-mesh network of brokers, 12 brokers total.
> > The cluster is brand new with no message activity; the network connectors
> > are active and working properly.
> > Java 8, RHEL 7.1, 1 GB min / 4 GB max heap
> >
> > Problem
> > =======
> > Using JMC (Java Mission Control), we connect to a master to view JMX
> > metrics. Almost immediately, CPU activity on the broker shoots to 100%.
> > Heap consumption oscillates between 128 MB and 800 MB over 30-second
> > windows, and the rest of the day is generally unpleasant.
> >
> > From the thread view, nearly all CPU activity is consumed by numerous
> > "ActiveMQ Task-xxx" threads; here's the stack trace from one, but they
> > are all nearly identical:
> >
> > ActiveMQ Task-220 [2656] (TIMED_WAITING)
> >    sun.misc.Unsafe.park line: not available [native method]
> >    java.util.concurrent.locks.LockSupport.parkNanos line: 215
> >    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill line: 460
> >    java.util.concurrent.SynchronousQueue$TransferStack.transfer line: 362
> >    java.util.concurrent.SynchronousQueue.poll line: 941
> >    java.util.concurrent.ThreadPoolExecutor.getTask line: 1066
> >    java.util.concurrent.ThreadPoolExecutor.runWorker line: 1127
> >    java.util.concurrent.ThreadPoolExecutor$Worker.run line: 617
> >    java.lang.Thread.run line: 745
> >
> > At first I thought it was related to this issue:
> > https://access.redhat.com/solutions/1169753
> > While I did find five threads named "JMX Server connection timeout" on
> > this instance, none of them is consuming CPU.
> >
> > When we test this on a standalone instance with no network configuration,
> > the problem does not occur. In our QA environment (the setup described
> > above), the broker config is identical to our standalone instances; the
> > only difference is the network connectors. Here's a snippet; there are
> > five pairs like this in each activemq.xml (one per remote master/slave
> > pair in the six-pair topology):
> >
> >             <!--        Network #1        -->
> >             <!-- (#1) Queue network link  -->
> >             <networkConnector
> >                 name="queues_nc1"
> >                 userName="${auth.user}"
> >                 password="${auth.password}"
> >                 uri="masterslave(tcp://whatever, tcp://whatever)"
> >                 consumerTTL="1"
> >                 messageTTL="100"
> >                 conduitSubscriptions="false"
> >                 decreaseNetworkConsumerPriority="true"
> >                 suppressDuplicateQueueSubscriptions="true">
> >                 <dynamicallyIncludedDestinations>
> >                     <queue physicalName=">"/>
> >                 </dynamicallyIncludedDestinations>
> >             </networkConnector>
> >             <!-- (#1) Topic network link  -->
> >             <networkConnector
> >                 name="topics_nc1"
> >                 userName="${auth.user}"
> >                 password="${auth.password}"
> >                 uri="masterslave(tcp://whatever, tcp://whatever)"
> >                 consumerTTL="1"
> >                 messageTTL="100"
> >                 decreaseNetworkConsumerPriority="true">
> >                 <dynamicallyIncludedDestinations>
> >                     <topic physicalName=">"/>
> >                 </dynamicallyIncludedDestinations>
> >             </networkConnector>
> >
> >
> >
>
