Hi Christian,
Correction to what I stated earlier.
In this route, with producerWindowSize set in broker-config.xml:
<from uri="file:/opt/share/EventFileInput?move=.event-done"/>
<setHeader headerName="messageType">
  <simple>MICS</simple>
</setHeader>
to
Thanks Christian,
On
'Maybe just write a camel route to put those messages into a database for
future analysis? '
- Will definitely consider that.
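For what it's worth, such a route could be sketched like this in Camel XML (the DLQ name, the `analysisDS` datasource bean, and the `dlq_messages` table are all assumptions, not from this thread):

```xml
<!-- Sketch only: drain the DLQ into a database table for later analysis.
     Queue name, datasource bean, and table/column names are hypothetical. -->
<route>
  <from uri="activemq:queue:ActiveMQ.DLQ"/>
  <to uri="sql:insert into dlq_messages (body) values (:#${bodyAs(String)})?dataSource=#analysisDS"/>
</route>
```

With the message moved off the broker and into the database, it no longer counts against the broker's memory limits.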
Does this also mean I am fundamentally 'not' using the DLQ in the message broker for what it is actually meant for?
i.e., is it always a bad idea to
When a message goes into a DLQ it means the consumer cannot handle that
message for one reason or another, so it should be looked at. For your
use case, if it's fine that messages can go undelivered and it's not a big
deal, you can just ignore the DLQ and use discardingDLQBrokerPlugin to
just drop
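For reference, that plugin is enabled in the broker configuration roughly like this (attribute values here are illustrative, not from the thread):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <plugins>
    <!-- Discard any message that would otherwise be routed to a DLQ;
         reportInterval logs a count every N discarded messages -->
    <discardingDLQBrokerPlugin dropAll="true" reportInterval="1000"/>
  </plugins>
</broker>
```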
Not sure why consumers are blocked. Are they acking?
---
No, they are not acking at all, they seem to be blocked.
I ran 2 sets of tests.
For both tests, the following is the route setup:
<from uri="file:/opt/share/EventFileInput?move=.event-done"/>
setHeader
Yes, send the Camel route and broker configs, etc. Your description seems
to be mixing the Camel consumer with the JMS consumer. If you send a test
case, I can explain to you what's happening.
On Fri, May 10, 2013 at 9:17 AM, deepak_a angesh...@gmail.com wrote:
Not sure why consumers are blocked. Are
Hi Christian,
Prior to introducing the 'producer window', both the producer and consumer
were blocked
(if I don't consume from my DLQ), which was expected.
The temporary workaround I have is to simply increase the memoryUsage to a
higher value
and also ensure messages are not queued up in
Yah, moving from one Queue (DLQ) to another queue (your analysis queue)
will still cause the same memory build up. Maybe just write a camel route
to put those messages into a database for future analysis?
You may still be hitting producer flow control in your tests if your
settings are 100k for
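For illustration, producer flow control and per-destination memory limits are set via a policyEntry in the broker config; the 100kb value below only echoes the figure mentioned above and is not a recommendation:

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- producerFlowControl="true" blocks producers once memoryLimit
           for a destination is reached; ">" matches all queues -->
      <policyEntry queue=">" producerFlowControl="true" memoryLimit="100kb"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```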
Thanks Christian.
(I work with Deepak.)
In our system, Camel produces JMS messages from integrated endpoints, which
are consumed by EJB MDBs. The EJB application produces new JMS messages,
which are consumed by Camel and sent to other integrated endpoints (e.g.
WebSphere MQ). We use transacted
Hi,
Also, what I noticed in my test is that when the CursorMemoryUsage
reached/exceeded the limit (64MB), the producer was blocked (which is
understandable) - but it was surprising to note that the consumer was also
blocked (i.e. the Camel route that pulled messages from the queue also
froze) - is
Also, we are using a JDBC store to persist messages. Does this mean messages
that are persisted will also be held in memory/cache? i.e., will they also
be occupying the broker memory?
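For context, a JDBC store is configured roughly as below (the `mysql-ds` bean name is a hypothetical DataSource defined elsewhere in the config); note that even with a JDBC store, the broker's store cursor still caches pending messages in memory, subject to the memory limits:

```xml
<persistenceAdapter>
  <!-- "mysql-ds" is a hypothetical DataSource bean; persistent messages
       go to the database, but cursors still use broker memory -->
  <jdbcPersistenceAdapter dataSource="#mysql-ds"/>
</persistenceAdapter>
```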
Are the consumer and producer on the same connection?
When doing normal sync sends for a producer sending persistent messages, even
if the producer and consumer are on the same connection, ActiveMQ shouldn't
block the entire connection. If doing async sends, you may end up with the
entire connection blocked.
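Whether sends are async is a client-side setting; with ActiveMQ's connection factory it can be forced on the broker URL, for example (the URL and bean name are illustrative):

```xml
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <!-- jms.useAsyncSend=false forces synchronous sends, so a producer
       blocked by flow control waits per-send instead of risking the
       whole shared connection being blocked -->
  <property name="brokerURL" value="tcp://localhost:61616?jms.useAsyncSend=false"/>
</bean>
```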
Hi,
I see the max memory size as 64MB in the JMX console; each queue has the
size set to 64MB (shown in bytes).
So I must ensure that the DLQ is also purged at regular intervals?
If yes, is there any default purge strategy recommended?
I also notice that messages are in the Queue and also in
Where are you seeing that a queue has a default memory limit of 64MB?
If you don't change the broker system usage limits, then the memory limit
for the entire broker is 64MB. This means all queues fall under this 64MB
limit. If you don't consume from your DLQ, they will use up broker
resources and
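Those broker-wide limits live in the systemUsage element of the broker config; 64 mb is the out-of-the-box default for memoryUsage, while the store and temp values below are only illustrative:

```xml
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <!-- default broker-wide memory limit, shared by all destinations -->
      <memoryUsage limit="64 mb"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="1 gb"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="1 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```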