Re: Reloading users and groups properties on change

2016-03-31 Thread Simon Lundström
No one uses PropertiesLoginModule and reloading?

Gary, so I should file a JIRA for this, right?

BR,
- Simon

On Thu, 2016-03-10 at 17:14:48 +0100, Simon Lundström wrote:
> Hi!
> 
> I talked to Gary Tully on IRC (and mail) and we decided it was best that
> I mailed the mailing list, since he was pretty sure that someone here had
> solved this.
> 
> We are running 5.13.0 and are trying to get {user,group}s.properties to
> be reloaded automatically when they are changed.
> 
> In the init.d script we've added:
> ACTIVEMQ_OPTS+=" -Djava.security.auth.login.config=/local/activemq/conf/login.config "
> 
> and login.config looks like this:
> activemq-domain {
>   org.apache.activemq.jaas.PropertiesLoginModule required
>     debug=true
>     reload=true
>     org.apache.activemq.jaas.properties.user="users.properties"
>     org.apache.activemq.jaas.properties.group="../conf.d/approved/groups.properties";
> };
> 
> users.properties:
> system=manager
> nagios=nagios
> 
> groups.properties:
> monitoring=system
> 
> activemq.xml excerpt:
> […]
> <plugins>
>   <jaasAuthenticationPlugin configuration="activemq-domain"/>
>   <authorizationPlugin>
>     <map>
>       <authorizationMap>
>         <authorizationEntries>
>           <authorizationEntry
>             queue="aliveness-test"
>             read="monitoring"
>             write="monitoring"
>             admin="monitoring"
>           />
>         </authorizationEntries>
>       </authorizationMap>
>     </map>
>   </authorizationPlugin>
> </plugins>
> […]
> 
> With this configuration the user nagios cannot yet access the queue
> aliveness-test, since it is not in the monitoring group.
> To reproduce, modify groups.properties while the broker is running so it
> looks like:
> monitoring=system,nagios
> 
> Check your logs (you need to enable debug logging on
> org.apache.activemq.jaas.ReloadableProperties):
> {"thread":"ActiveMQ NIO Worker 622","level":"DEBUG","loggerName":"org.apache.activemq.jaas.ReloadableProperties","message":"Load of: PropsFile=/local/activemq/conf/../conf.d/approved/groups.properties"}
> so the reloading works, but nagios still can't consume from (or produce to)
> the queue:
> {"thread":"ActiveMQ NIO Worker 2","level":"WARN","loggerName":"org.apache.activemq.broker.TransportConnection.Service","message":"Security Error occurred on connection to: tcp://0:0:0:0:0:0:0:1:45357, User nagios is not authorized to read from: queue://aliveness-test"}
> 
> Note: If I restart ActiveMQ, nagios can consume from and produce to the
> queue.
> 
> Is there any configuration that I've missed?
> Is this a bug?
> 
> BR,
> - Simon
> 
> 
> 
> Simon Lundström
> Section for Infrastructure
> 
> IT Services
> Stockholm University
> SE-106 91 Stockholm, Sweden
> 
> www.su.se/english/staff-info/it


Is there a way to be notified when a durable subscriber receives a MQTT message?

2016-03-31 Thread Shobhana
A publisher sends a persistent message to a topic which has more than one
durable subscriber. One or more of these durable subscribers may be offline
when the message is sent. Is there any way to get notified once the message
has been delivered to all subscribers?



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Is-there-a-way-to-be-notified-when-a-durable-subscriber-receives-a-MQTT-message-tp4710173.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


RE: Activemq HA without shared Database or Shared file system

2016-03-31 Thread Natarajan, Rajeswari
Is the replicated LevelDB store production ready now?


-Original Message-
From: James A. Robinson [mailto:j...@highwire.org] 
Sent: Thursday, March 31, 2016 3:19 PM
To: users@activemq.apache.org
Subject: Re: Activemq HA without shared Database or Shared file system

I'm not aware of any other choice.  I initially tried to use the replicated
leveldb system but ran into too many stability issues.


On Thu, Mar 31, 2016 at 3:17 PM Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> Hi,
>
> Would like to know if ActiveMQ supports HA with message replication
> without the shared DB or shared file system
>
> I see that there is  a  replicated level DB store which requires a zoo
> keeper.  Is there any other mechanism available other than these options
> where messages are getting replicated to the standby.
>
>
> Regards,
> Rajeswari
>
> http://activemq.apache.org/replicated-leveldb-store.html
>


Re: Activemq HA without shared Database or Shared file system

2016-03-31 Thread James A. Robinson
I'm not aware of any other choice.  I initially tried to use the replicated
leveldb system but ran into too many stability issues.


On Thu, Mar 31, 2016 at 3:17 PM Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> Hi,
>
> Would like to know if ActiveMQ supports HA with message replication
> without the shared DB or shared file system
>
> I see that there is  a  replicated level DB store which requires a zoo
> keeper.  Is there any other mechanism available other than these options
> where messages are getting replicated to the standby.
>
>
> Regards,
> Rajeswari
>
> http://activemq.apache.org/replicated-leveldb-store.html
>


Activemq HA without shared Database or Shared file system

2016-03-31 Thread Natarajan, Rajeswari
Hi,

I would like to know if ActiveMQ supports HA with message replication without a
shared DB or shared file system.

I see that there is a replicated LevelDB store, which requires ZooKeeper.
Is there any other mechanism available, other than these options, where messages
get replicated to the standby?


Regards,
Rajeswari

http://activemq.apache.org/replicated-leveldb-store.html
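
For reference, a minimal sketch of the replicatedLevelDB persistence adapter
described on that page (replica count, ZooKeeper addresses, and hostname are
placeholders; it needs a running ZooKeeper ensemble):

<persistenceAdapter>
  <!-- Placeholder values; one broker in the replica set becomes master,
       the others replicate its store and stand by. -->
  <replicatedLevelDB directory="${activemq.data}/leveldb"
                     replicas="3"
                     bind="tcp://0.0.0.0:0"
                     zkAddress="zk1:2181,zk2:2181,zk3:2181"
                     zkPath="/activemq/leveldb-stores"
                     hostname="broker1"/>
</persistenceAdapter>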


Re: Measures to improve the throughput

2016-03-31 Thread Quinn Stevenson
That seems very slow for a simple throughput test - my simple Camel tests have 
usually been in the 1000’s msgs/sec range.

Can you share the full test?
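
For reference, a self-contained sketch of what the consumer side of such a
test might look like (broker URL, topic name, and class name are placeholders,
not the original test code):

import java.util.concurrent.atomic.AtomicLong;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TopicConsumerBenchmark {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("throughput.test");

        AtomicLong count = new AtomicLong();
        long start = System.nanoTime();

        // Count every message as it arrives; onMessage does no other work.
        MessageConsumer consumer = session.createConsumer(topic);
        consumer.setMessageListener(message -> count.incrementAndGet());
        connection.start();

        // Print the observed consume rate once per second.
        while (true) {
            Thread.sleep(1000);
            double elapsedSec = (System.nanoTime() - start) / 1_000_000_000.0;
            System.out.printf("%.0f msg/sec (total %d)%n",
                    count.get() / elapsedSec, count.get());
        }
    }
}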

> On Mar 31, 2016, at 12:01 AM, Frizz  wrote:
> 
> In an attempt to improve the throughput of my system I did various tests. I
> started with a simple setup:
> - 1 Topic, non-persistent
> - 1 Producer
> - 1 Consumer
> - Message size: 4k
> - ActiveMQ 5.12.1
> 
> My system is reasonably fast (8 cores, 32GB memory, SSD) - still I only
> manage to send & receive about 45.000 messages per second.
> 
> Basically my producer looks like this:
>     ...
>     ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
>     Connection connection = factory.createConnection();
>     session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
>     producer = session.createProducer(null); // anonymous producer; destination passed on each send
>     connection.start();
> 
>     for (long i = 0; i < 100; i++) {
>         final Topic topic = session.createTopic(topicName);
>         producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
> 
>         BytesMessage message = session.createBytesMessage();
>         message.writeBytes(someByteArray); // size 4k
>         producer.send(topic, message);
>     }
>     ...
> 
> I measured the time that the consumer spends in its onMessage() method and
> it's only about 1% of its total runtime, so consuming the messages is not a
> bottleneck (no surprise, since they are BytesMessages).
> 
> 45.000 messages/second does not sound *that* much to me - is this the
> maximum I can expect to get from ActiveMQ?
> 
> I did some further tests increasing the number of topic consumers - and
> noticed that the msg/sec per consumer drops. But the total amount of msg/sec
> that passes through my system stays roughly the same (OK, with some
> shrinkage/friction):
> 
> producer: 45.000
> consumer: 45.000
> --> 90.000 total
> producer: 28.700
> consumer: 28.700
> consumer: 28.700
> --> 86.100 total
> producer: 20.400
> consumer: 20.400
> consumer: 20.400
> consumer: 20.400
> --> 81.600 total
> producer: 15.700
> consumer: 15.700
> consumer: 15.700
> consumer: 15.700
> consumer: 15.700
> --> 78.500 total
> 
> 
> What I had expected would look more like this:
> producer: 43.000
> consumer: 43.000
> consumer: 43.000
> consumer: 43.000
> consumer: 43.000
> 
> Where does this "global total msg/sec limit" come from? Where is the
> bottleneck?



Re: ActiveMQ with KahaDB as persistent store becomes very slow (almost unresponsive) after creating large no (25000+) of Topics

2016-03-31 Thread Christopher Shannon
The CountStatisticImpl class is basically just a wrapper for an atomic
long.  It's used all over the place (destinations, subscriptions, inside
KahaDB, etc.) to keep track of various metrics in a non-blocking way.
There's a DestinationStatistics object for each destination, and that has
several of those counters in it.  There's an option to disable metrics
tracking, but that will only prevent the counters from actually
incrementing, not stop the allocations.  Some of the metrics are required
for parts of the broker and won't honor the flag that disables them (such
as the message counts needed by KahaDB), but I plan on going back and
double-checking all of those metrics at some point soon to make sure
everything that can honor that flag does.  Since you have a lot of
destinations, you are seeing a lot of those counters.

If you disable disk syncs then you need to be aware that you are risking
message loss.  Since you are no longer waiting to make sure data is
persisted to disk before sending the ack to the producer, there's a
chance of losing messages if something happens (like a power outage).
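
For orientation, a rough sketch of where such knobs live in the broker XML,
assuming the metrics flag in question is the broker's enableStatistics
attribute (values are placeholders, and enableJournalDiskSyncs="false" carries
exactly the message-loss risk described above):

<broker xmlns="http://activemq.apache.org/schema/core" enableStatistics="false">
  <persistenceAdapter>
    <!-- Skipping journal disk syncs trades durability for speed;
         indexCacheSize is the index page cache discussed below. -->
    <kahaDB directory="${activemq.data}/kahadb"
            enableJournalDiskSyncs="false"
            indexCacheSize="10000"/>
  </persistenceAdapter>
</broker>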

On Wed, Mar 30, 2016 at 2:41 PM, Shobhana  wrote:

> Hi Tim & Christopher,
>
> I tried with 5.13.2 version but as you suspected, it did not solve my
> problem.
>
> We don't have any wildcard subscriptions. Most of the Topics have a maximum
> of 8 subscriptions (Ranges between 2 and 8) and a few topics (~25-30 so
> far)
> have more than 8 (this is not fixed, it depends on no of users interested
> in
> these specific topics; the max I have seen is 40).
>
> Btw, I just realized that I have set a very low value for destination
> inactivity (30 secs) and hence many destinations are getting removed very
> early. Later when there is any message published to the same destination,
> it
> would result in destination getting created again. I will correct this by
> increasing this time out to appropriate values based on each destination
> (varies from 1 hour to 1 day)
>
> Today after upgrading to 5.13.2 version in my test env, I tried with
> different configurations to see if there is any improvement. In particular,
> I disabled journal disk sync (since many threads were waiting at KahaDB
> level operations) and also disabled metadata update. With these changes,
> the
> contention moved to a different level (KahaDB update index .. see attached
> thread dumps)
>
> ThreadDump1.txt
> 
> ThreadDump2.txt
> 
>
> I will test again by increasing the index cache size (current value is set
> to the default of 1) to 10 and see if it makes any improvement.
>
> Also histo reports showed a huge number (1393177) of
> org.apache.activemq.management.CountStatisticImpl instances and 1951637
> instances of java.util.concurrent.locks.ReentrantLock$NonfairSync. See
> attached histo for complete report.
>
> histo.txt 
>
> What are these org.apache.activemq.management.CountStatisticImpl instances?
> Is there any way to avoid them?
>
> Thanks,
> Shobhana
>
>
>
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/ActiveMQ-with-KahaDB-as-persistent-store-becomes-very-slow-almost-unresponsive-after-creating-large-s-tp4709985p4710055.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>


JMS to STOMP transformation causes throughput drop in STOMP consumers

2016-03-31 Thread xabhi
Hi,
I am trying to benchmark throughput for my nodejs consumer (STOMP). The
producer is in Java sending JMS text and map messages.

With text messages, I see that the nodejs consumer is able to handle 10
Kmsgs/sec without any pending messages.

But when I send map messages and the nodejs consumer subscribes with the header
'transformation': jms-map-json, the throughput drops to 0.5 Kmsgs/sec.
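
For context, a subscription requesting that transformation is roughly the
following STOMP frame (destination and id are placeholders; ^@ stands for the
NUL byte that terminates the frame):

SUBSCRIBE
id:sub-0
destination:/queue/example
ack:auto
transformation:jms-map-json

^@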

I am not able to understand where this bottleneck is coming from. The broker
has messages in the pending queue and I see unacknowledged messages in jconsole.

Why is it that the node consumer can consume text messages faster than map
messages if ultimately both are sent in TEXT format from ActiveMQ?

Does anyone from ActiveMQ dev know about this behavior? Any help will be
appreciated.

Thanks,
Abhishek




--
View this message in context: 
http://activemq.2283324.n4.nabble.com/JMS-to-STOMP-transformation-causes-throughput-drop-in-STOMP-consumers-tp4710148.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Re: ActiveMQ Object Message to json transformation not working

2016-03-31 Thread James A. Robinson
I think what it boils down to is figuring out where you need to put the jar
to make it available to the class loader of the Java instance that is
running your broker.

So, for example, on my Linux setup I have a directory

/usr/share/activemq/lib

that contains the jars needed to run ActiveMQ and I could place a jar there
if I needed to add new deserialization classes.

In my particular case I've got this running under the Tanuki wrapper, and
another option is to add a new classpath entry to
/etc/activemq/activemq-wrapper.conf that points to a new jar directory.
E.g., extending this block of configuration:

# Java Classpath (include wrapper.jar) Add class path elements as
# needed starting from 1
set.default.ACTIVEMQ_HOME=/usr/share/activemq
wrapper.java.classpath.1=/usr/share/java/tanukiwrapper.jar
wrapper.java.classpath.2=%ACTIVEMQ_HOME%/lib/*
wrapper.java.classpath.3=%ACTIVEMQ_HOME%/lib/web/*
wrapper.java.classpath.4=%ACTIVEMQ_HOME%/lib/extra/*
wrapper.java.classpath.5=%ACTIVEMQ_HOME%/lib/optional/*
wrapper.java.classpath.6=%ACTIVEMQ_HOME%/lib/camel/*
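
For example, a hypothetical seventh entry pointing at a directory holding the
extra jar (the path is only an illustration) could be appended:

# hypothetical directory for additional deserialization classes
wrapper.java.classpath.7=%ACTIVEMQ_HOME%/lib/json-extra/*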


Jim


On Wed, Mar 30, 2016 at 11:54 PM xabhi  wrote:

> How do I make sure that this class is available at broker side?
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/ActiveMQ-Object-Message-to-json-transformation-not-working-tp4709447p4710145.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>