Hi,
I am currently evaluating ActiveMQ. I have one question that I cannot seem
to find an answer to that I hope you can help with. I see that it can be
installed on Windows. Are there any limitations with the product if Windows
is the OS that it is installed on?
Thanks,
Matt
Nope. All it needs is a JVM.
On 29 January 2014 10:09, m4tthall m4tth...@outlook.com wrote:
Hi
If you use the Java Service Wrapper on Windows, there is a limitation: on
64-bit Windows it can use at most about 4 GB of memory, or something like
that.
More details here:
http://activemq.apache.org/java-service-wrapper.html
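For what it's worth, the wrapped JVM's heap ceiling is set in the wrapper's
config file (the path below assumes the stock Windows layout and may differ
by version); a minimal sketch:

```properties
# bin/win64/wrapper.conf (Tanuki Java Service Wrapper)
# Maximum Java heap size, in MB. The wrapper binary itself caps how much
# of this is actually usable, per the limitation discussed above.
wrapper.java.maxmemory=1024
```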
On Wed, Jan 29, 2014 at 11:09 AM, m4tthall m4tth...@outlook.com wrote:
First, this was a test to force the brokers to use Producer Flow Control,
which is why the 20 KB limit was used. If you can explain why 20 KB should
not have been used in this test, it might help me better understand what's
going on.
I did not think about the 1kb being added for the header. I'm
Hi,
By default, KahaDB's cleanupInterval is 30 seconds. I think this is a very
short duration for cleanup; for a broker under heavy load this interval is
too small. So I want to increase this value to 60 seconds.
What is the impact on the broker of increasing this value to 60 seconds?
Thanks,
Anuj
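For reference, the interval is an attribute on the kahaDB persistence
adapter in activemq.xml and is given in milliseconds; a minimal sketch
(the directory path is an assumption):

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <persistenceAdapter>
    <!-- cleanupInterval is in milliseconds: 60000 = 60 seconds -->
    <kahaDB directory="${activemq.data}/kahadb" cleanupInterval="60000"/>
  </persistenceAdapter>
</broker>
```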
Re-reading the original post, I'm unclear what the concern is then. I
apologize if I'm not answering the real concern.
For testing, those low numbers may work. Watch for negative memory
percentage usage in the broker using JMX. When that happens, messages stop
flowing.
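One way to watch for that is to poll the broker MBean's MemoryPercentUsage
attribute over JMX. A sketch, assuming the default broker name "localhost";
the JMX URL is passed in, since port and host vary by setup:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MemoryWatch {
    // A negative or pegged memory percentage is the symptom described above:
    // producer flow control is likely blocking sends.
    static boolean looksExhausted(long pct) {
        return pct < 0 || pct >= 100;
    }

    static long readPercentUsage(MBeanServerConnection conn) throws Exception {
        // Default broker name is an assumption; adjust brokerName as needed.
        ObjectName broker =
            new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");
        return ((Number) conn.getAttribute(broker, "MemoryPercentUsage")).longValue();
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println(
                "usage: MemoryWatch service:jmx:rmi:///jndi/rmi://host:1099/jmxrmi");
            return;
        }
        JMXServiceURL url = new JMXServiceURL(args[0]);
        try (JMXConnector c = JMXConnectorFactory.connect(url)) {
            long pct = readPercentUsage(c.getMBeanServerConnection());
            System.out.println("MemoryPercentUsage=" + pct
                + (looksExhausted(pct) ? " (flow control likely)" : ""));
        }
    }
}
```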
BTW, are the consumers
I'm sorry if I was not clear. In production I have several ActiveMQ brokers
on remote machines all feeding a common ActiveMQ broker that I refer to as a
hub. The hub establishes full duplex static connections with each of the
remote brokers. A message on any given remote broker is placed into a
Is the PFC kicking-in at the bridge? In my experience, PFC on a bridge often
doesn't clear itself. When that happens, stack traces on the two brokers
show that both brokers are blocked on attempts to send messages to
one-another. Taking a few such traces over a period of time should help to
I want to thank you for your assistance. If by "at the bridge" you mean the
hub in the middle, then I would agree. I just do not understand why a slow
consumer would stop pulling the data, bridge or no bridge.
I am going to set up remote debugging and see if I can figure this out a
little more.
The bridge is the structure that moves messages across the network connection
between two brokers.
One thing I recall now - we had problems with PFC blocking bridges using
duplex network connectors. You can try using non-duplex network connectors
(of course, both brokers then need to configure
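A sketch of what that might look like, with hypothetical broker host names
hub and remote1 (names, ports, and connector names are placeholders):

```xml
<!-- on the remote broker: a one-way connector toward the hub -->
<networkConnectors>
  <networkConnector name="remote1-to-hub"
                    uri="static:(tcp://hub:61616)"
                    duplex="false"/>
</networkConnectors>

<!-- on the hub: a separate one-way connector back to the remote broker -->
<networkConnectors>
  <networkConnector name="hub-to-remote1"
                    uri="static:(tcp://remote1:61616)"
                    duplex="false"/>
</networkConnectors>
```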
Hello!
I'm trying to connect the ActiveMQ NMS client to a server with a
self-signed SSL certificate.
I've added the server certificate to Mono's Trust, My and CA
truststores with `certmgr -add -c object-type message-queue.crt` but the
connections still yield a
I should note that I've seen the discussion at
http://timbish.blogspot.com/2010/04/ussing-ssl-in-nmsactivemq.html but there
Tim uses a CA while I do not.
On 01/29/2014 01:13 PM, Christoffer Sawicki wrote:
I'm attempting to configure a broker in Karaf for SSL using encrypted
properties for the keystore/truststore passwords. These properties were
encrypted using Jasypt and we have a bundle responsible for the handling
of the Jasypt password that exports a PBEConfig as an OSGi service. Now
I'm
Our system sends small messages (1 KB) frequently, as data changes in the
system, to serve as notifications to listeners. The users (serving as
both producer and consumer) of these notifications are either human users or
batch processes. The humans process records slowly with pause times while
I don't know if LevelDB is better with regard to startup time - I believe it
is. If there is rarely any backlog of messages, brokers come up within
seconds.
If you really want sub-second message passing performance through a SPOF
(single-point-of-failure), you might consider an alternative
The duplicate broker concept is an interesting one; for our application the
deduping is trivial. Would we have to implement the duplicate
sending/receiving in our code, correct? I see a fanout: transport in
activemq but that does not seem to be quite what we would want.
However why do you
Duplicate Broker
Yeah, two connections, duplicate send and receive. One thing to keep in
mind with that idea - how will the application handle a broker outage? It
won't help to have two brokers set up if the producers are blocking on send
calls and that blocks the processor generating
Please respond!
All of the documentation for ActiveMQ discusses ssl/tls. I have implemented
ssl using ActiveMQ using an ActiveMQSslConnectionFactory on the client side
and using ActiveMQ-context.xml for setting up the broker. Everything works
fine with url = ssl://<ipaddress>:<port> and clientAuth set to None.
I have a
Our security environment is using a JCEKS keystore and truststore. I have
been able to change the format in the Broker SSLContext to use JCEKS instead
of the default jks. However, on the client side setup I use an
ActiveMQSslConnectionFactory, which has a jks default and does not seem to
have
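If ActiveMQSslConnectionFactory in your version really has no store-type
setter, one workaround is the standard JSSE system properties, which do
accept a store type. A sketch; the path and password below are placeholders,
not real values, and the properties must be set before the SSL connection
is created:

```java
// Point JSSE at a JCEKS truststore via system properties.
public class JceksSsl {
    static void configure(String trustStorePath, String password) {
        System.setProperty("javax.net.ssl.trustStore", trustStorePath);
        System.setProperty("javax.net.ssl.trustStorePassword", password);
        // This is the part ActiveMQSslConnectionFactory may not expose:
        System.setProperty("javax.net.ssl.trustStoreType", "JCEKS");
    }

    public static void main(String[] args) {
        // Placeholder path and password, for illustration only.
        configure("/path/to/client.ts", "changeit");
        System.out.println(System.getProperty("javax.net.ssl.trustStoreType"));
    }
}
```

The same pattern works for the keystore side with the javax.net.ssl.keyStore
family of properties.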