Hi

I have to implement the following use case:
- messages are processed and finally delivered to a queue
- from these messages I need to generate multiple files (which
message content goes to which file is determined by a header value)
- I need to create a file based on multiple triggers:
    1. when 1000 messages are available (for one file)
    2. after waiting for an hour, if fewer than 1000 messages are available
    3. when the context is shut down
- after a file is created, the collection of messages begins again
until the next file is ready to be created
- the consumption of messages for a file should be transacted: I
consume 1000 messages and create the file. If the file is written,
the 1000 messages are committed; otherwise they all remain in the
queue.


My first thought was to use the aggregator, which can easily split the
messages by header values. I have written a quick test case that looks
like this (numbers are reduced for testability):
AggregationCollection aggregatePredicate = new PredicateAggregationCollection(
  header("brand"),
  new GroupedExchangeAggregationStrategy(),
  property(Exchange.AGGREGATED_SIZE).isEqualTo(3));
                
from("activemq:queue:aggregator.queue")
  .transacted().policy(txPolicy)
  .routeId("Aggregator-Route")
  .aggregate(aggregatePredicate)
  .batchTimeout(2000)
  .to("mock:result");

The following questions arose:
- How can I trigger the time-based consumption? That is, if the 1000
messages are not reached within an hour, it should nevertheless write
a file with all available messages. I learned from other posts that
this cannot be done with the "batchTimeout" option.
- How can I trigger the file creation when the context is shut down?
- The Camel JMS consumer seems to consume message by message. How can
I consume and commit/roll back batches of n messages?
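Ideally I would like to express the triggers directly on the route, something like the sketch below. Note that completionSize, completionTimeout and forceCompletionOnStop are aggregator options from newer Camel 2.x releases, so this may well not apply to the version I am using:

```java
// Sketch only, assuming a Camel 2.x-style aggregator DSL; txPolicy and
// the endpoint URIs are taken from my test case above.
from("activemq:queue:aggregator.queue")
  .transacted().policy(txPolicy)
  .routeId("Aggregator-Route")
  .aggregate(header("brand"), new GroupedExchangeAggregationStrategy())
    .completionSize(1000)                 // trigger 1: 1000 messages
    .completionTimeout(60 * 60 * 1000)    // trigger 2: one hour
    .forceCompletionOnStop()              // trigger 3: context shutdown
  .to("mock:result");
```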


Because of these questions I found the BatchConsumer, but there I still
have the problem that the Camel JMS consumer does not support it.

How would a Camel expert solve this use case?

Thanks
Stefan
