If you want a "raw TCP/IP" connection, then it needs some type of protocol -
even if home-grown.
Perhaps you could use Camel's Mina connector to implement what you want.
Otherwise, you'll likely want a service that interfaces between the M2M
devices and ActiveMQ. The latter is the path I would suggest.
Thanks for the reply. I understood that in order to pass the data over TCP I
need to use the ActiveMQConnectionFactory class.
But my concern is this: in plain socket programming, we connect to a port
and push the data, and the server code listens on that same port and reads
the data as it arrives.
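For comparison, the plain-socket model described here can be sketched in a few lines. Everything below (class name, payload, port handling) is illustrative and is not part of the ActiveMQ API:

```java
import java.io.*;
import java.net.*;

// Minimal sketch of the raw-socket model: the server listens on a port
// and reads data as it arrives; the client connects and pushes bytes.
public class RawTcpSketch {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            int port = server.getLocalPort();

            Thread client = new Thread(() -> {
                try (Socket s = new Socket("localhost", port);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("payload-from-device"); // push the data
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            client.start();

            try (Socket conn = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()))) {
                System.out.println("received: " + in.readLine());
            }
            client.join();
        }
    }
}
```

With a broker in between, the JMS client replaces this hand-rolled framing: the connection factory and the OpenWire protocol take over the role of the socket and the home-grown protocol.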
It is possible to start ActiveMQ asynchronously - there's a "startAsync"
setting on the broker.
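As a sketch (broker name and port are placeholders), the setting sits on the `broker` element in `activemq.xml`:

```xml
<!-- Sketch of an activemq.xml fragment: startAsync="true" lets the
     broker's start() return without waiting for startup to complete. -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost"
        startAsync="true">
    <transportConnectors>
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
    </transportConnectors>
</broker>
```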
With that said, I recommend running separate ActiveMQ servers instead of
embedding the brokers. ActiveMQ is not a lightweight tool, and embedding it
inside Tomcat will limit the scaling of both ActiveMQ and Tomcat.
Hi Fred,
If you are using message expiration, you ultimately need to have a handle
on the degree of time synchronization between your hosts.
The JMS API says for Message#getJMSExpiration() that "Clients should not
receive messages that have expired; however, the JMS API does not guarantee
that this will not happen."
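To illustrate that contract, a defensive consumer can re-check the deadline itself. The helper below is a hypothetical sketch, not part of the JMS API; JMSExpiration is an absolute epoch-millis deadline, with 0 meaning the message never expires, and the check is only as good as the clock synchronization between producer and consumer hosts:

```java
// Illustrative consumer-side guard: since the broker does not guarantee
// that expired messages are withheld, re-check before processing.
public class ExpirationCheck {
    static boolean isExpired(long jmsExpiration, long nowMillis) {
        // 0 means "never expires"; otherwise compare against the deadline.
        return jmsExpiration != 0 && nowMillis >= jmsExpiration;
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        System.out.println(isExpired(0, now));         // never expires -> false
        System.out.println(isExpired(now - 1, now));   // past deadline -> true
        System.out.println(isExpired(now + 500, now)); // still valid   -> false
    }
}
```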
Which memory % used is this? The broker's overall usage, or a queue's usage?
Also, what version of AMQ is it?
Look at the pending message count (or QueueSize, as it's called in JMX).
Enqueue and dequeue counts are less reliable for telling just how many
messages exist on a queue or in a topic subscription.
One approach that may work for you: stand up new brokers with the upgrade,
temporarily network the old broker to the new, force all clients to
reconnect to the new.
In order for that to work, the clients need to know to connect to the new
brokers before they are stood up. The failover transport can be given both
the old and the new broker URIs in advance.
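For example (hostnames hypothetical), a failover URI that lists the old broker first and the new broker as the fallback, with `randomize=false` so the URIs are tried in order:

```
failover:(tcp://old-broker:61616,tcp://new-broker:61616)?randomize=false
```

Once the old broker is shut down, clients holding this URI reconnect to the new broker on their own.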
G1GC is great for reducing the duration of any single stop-the-world GC
(and hence minimizing the latency of any individual operation, as well as
avoiding timeouts), but the total time spent performing GCs (and hence the
total amount of time the brokers are paused) is several times that of the
parallel collector.
Hi folks,
I found a situation where the handling of timestamps/expirations by
ActiveMQ 5.10 appears not to be correct, and unfortunately, since this is a
consumer-side issue, it does not appear to be solvable via the classic
TimeStampingBrokerPlugin tweaks.
Scenario:
S1\ Broker B1 runs on HOST1
S2\
The brokers are standalone, not master/slave, so it looks like there may
need to be a short outage while we upgrade. Has anyone else with this
configuration upgraded? Is there any way around taking an outage?
On 10/20/14, 6:19 PM, "artnaseef" wrote:
>Yeah, it should work fine. Just keep
Hi Julien,
this is actually expected behavior. With a shared file system, only one
broker can be active; the second has to stay "passive" until the lock on
the files has been released, which happens when the first broker crashes
or stops normally.
By the way, this is exactly the same behav
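For reference, the shared-file-system setup being described is simply both brokers pointing their persistence adapter at the same directory (path hypothetical); whichever broker grabs the file lock first becomes active:

```xml
<!-- Both brokers carry the same activemq.xml fragment; the file lock
     in the shared directory decides which one is the active master. -->
<persistenceAdapter>
    <kahaDB directory="/mnt/shared/activemq/kahadb"/>
</persistenceAdapter>
```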
Another update:
I ran the broker with the native Java LevelDB and found that I am still
seeing the warnings in the log file as reported before.
However, to my surprise, the broker seems to perform better and even
slightly faster! I always thought the native LevelDB should be faster, but
I guess the
s/know/knot
On 21 October 2014 11:54, Gary Tully wrote:
> yes. but currently there is no loop detection, so you could get your self
> in a know, expiring form dlq to dlq.
>
> On 20 October 2014 16:42, Tim Bain wrote:
>
>> OK, I wasn't sure that the queue policy for that covered the DLQ, though
yes. but currently there is no loop detection, so you could get your self
in a know, expiring form dlq to dlq.
On 20 October 2014 16:42, Tim Bain wrote:
> OK, I wasn't sure that the queue policy for that covered the DLQ, though it
> makes sense that the DLQ is treated like any other queue rather
Quick update:
I have enabled G1GC for the JVM running the broker and have had no problems
since. The master broker stays master even under very heavy load.
So, my suggestion and recommendation when using replicated LevelDB would be
to use the G1 garbage collector, which significantly reduces "stop-the-world"
pauses.
As tbain98 correctly said, this is application logic.
The best approach is to implement the processing logic within the receiver
of the message as an idempotent service, i.e. performing the same action
twice won't change your application state again.
One possible solution could be to have
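One way to sketch such an idempotent receiver (all names illustrative; the dedup key could be the JMSMessageID or an application-level ID) is to record what has already been processed and ignore redeliveries:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an idempotent receiver: handling the same message twice
// leaves application state unchanged, because duplicates are skipped.
public class IdempotentReceiver {
    private final Set<String> processed = ConcurrentHashMap.newKeySet();
    private int applied = 0; // stand-in for real application state

    void onMessage(String messageId) {
        if (!processed.add(messageId)) {
            return; // duplicate delivery: already handled, do nothing
        }
        applied++; // real processing would go here
    }

    public static void main(String[] args) {
        IdempotentReceiver r = new IdempotentReceiver();
        r.onMessage("ID:msg-1");
        r.onMessage("ID:msg-1"); // redelivery of the same message
        r.onMessage("ID:msg-2");
        System.out.println("state changes: " + r.applied);
    }
}
```

In production the processed-ID set would need to be bounded or persisted (for example, a TTL cache or a database table), since an in-memory set grows without limit.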
Based on your suggestion, I looked at the GC behavior of the JVM, and you
were 100% spot on. At the time amq1 got "demoted" to slave, forcing a
failover to amq2, there was a "stop-the-world" GC going on.
Also, I was able to make the failover work correctly with the second cluster
in the network.