On Wed, Sep 11, 2024 at 8:13 AM Jan Šmucr
wrote:
> I'll check one more time tomorrow (I'm on the phone now) but my main
> concern has been the difference between a message that has properties
> which don't match and a message that does not have any properties a
Hello.
I've hit an odd behavior in Artemis (tested with 2.32 and 2.37), and I'd like
to know if it's a bug or a feature.
Say I have an anycast address `addr1` and two queues, `q1` and `q2`. Both
queues have a filter on them, and `q2`'s filter is the negation of `q1`'s
filter. Schematically:
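A minimal sketch of such a setup in broker.xml (the property name `color` and its value are made up for illustration):

```xml
<addresses>
   <address name="addr1">
      <anycast>
         <queue name="q1">
            <filter string="color = 'red'"/>
         </queue>
         <queue name="q2">
            <filter string="NOT (color = 'red')"/>
         </queue>
      </anycast>
   </address>
</addresses>
```

Note that, per selector semantics, a message with no `color` property at all matches neither filter, since both comparisons evaluate to unknown.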
Eh, sorry, bad link.
Here:
https://activemq.apache.org/components/artemis/documentation/latest/ha.html#shared-store
Jan
-Original Message-
From: Jan Šmucr
Sent: Friday, August 9, 2024 12:55
To: users@activemq.apache.org
Subject: RE: Need architecture advice
Hi Vimal,
yes, it can, since EFS is essentially a slightly restricted implementation of
NFS v4.1. You just need to be aware of its potential throughput and latency
limits, so it's definitely worth benchmarking before you fully commit to this
solution.
See:
https://activemq.apache.org/components/arte
to finish ...
[Thread-1] Waiting for the other threads to finish ...
[main] Now attempting to receive the previously rejected message #4 ...
[Thread-2] Stopped.
[Thread-0] Stopped.
[Thread-1] Stopped.
[main] Accepted message #4
Jan
From: Jan Šmucr<mailto:jan.sm...@aimtecglobal.com>
Sent: Thursday
Hello.
We’re using Artemis to distribute files between various systems. The workflow
is almost always the same:
1. Shell scripts pass files to client applications.
2. These client applications deliver the files to an Artemis broker using
the core protocol.
3. Another core protocol clie
Hello.
Since 2.31, the documentation has been presented in a different way, and it’s
no longer possible to search it. That’s quite a major and rather unusual
inconvenience.
Is this going to be addressed in the near future?
Thank you.
Jan
[4]
https://www.zabbix.com/documentation/current/en/manual/config/items/itemtypes/prometheus
On Wed, May 22, 2024 at 1:00 PM Jan Šmucr
wrote:
> I had pretty much the same problem, as it's not just the toggles but also
> metrics which aren't available for the entire cluster.
>
I had pretty much the same problem, as it's not just the toggles but also
metrics which aren't available for the entire cluster.
I ended up building "a JMX proxy", which aggregates metrics from all available
nodes, and offers the results via its own JMX interface. This way I can control
the clus
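The aggregation idea can be sketched in plain JMX. This is a minimal, self-contained illustration, not the actual proxy: the MBean, the `demo` domain, and the hard-coded counts are made up, and a real proxy would read the attribute over remote JMX connections to each cluster node instead of an in-process server.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxAggregatorSketch {
    // Hypothetical per-node metric MBean; real node brokers expose
    // similar attributes via their own JMX trees.
    public interface QueueMetricMBean { long getMessageCount(); }

    public static class QueueMetric implements QueueMetricMBean {
        private final long count;
        public QueueMetric(long count) { this.count = count; }
        public long getMessageCount() { return count; }
    }

    static long aggregate() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Register two fake "nodes" (idempotent, so this can run repeatedly).
        ObjectName n1 = new ObjectName("demo:node=1,queue=q1");
        ObjectName n2 = new ObjectName("demo:node=2,queue=q1");
        if (!server.isRegistered(n1)) server.registerMBean(new QueueMetric(40), n1);
        if (!server.isRegistered(n2)) server.registerMBean(new QueueMetric(2), n2);

        long total = 0;
        // Sum the metric over every node that exposes queue q1.
        for (ObjectName name : server.queryNames(new ObjectName("demo:queue=q1,*"), null)) {
            total += (Long) server.getAttribute(name, "MessageCount");
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Aggregated MessageCount: " + aggregate());
    }
}
```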
>
> On Sat, Aug 12, 2023 at 11:11 AM Jan Šmucr
> wrote:
>
>> Ok, I'll fix it then. My Jira at work will be happy for another Done
>> task. 😁
>>
>> Jan
>>
>> Sent from Outlook for Android<https://aka.ms/AAb9ysg>
of open / close it all the
> > time this won't happen. But I should have a fix by Monday.
> >
> > On Fri, Aug 11, 2023 at 12:24 PM Clebert Suconic
> > wrote:
> > >
> > > I highly recommend using check-leak... you would have found what's
>
> I would even write a unit-test for memory-leaks.
>
> On Fri, Aug 11, 2023 at 10:06 AM Jan Šmucr wrote:
> >
> > So I’m getting a bit closer. The leak is in PostOfficeImpl and QueueInfo.
> > QueueInfo contains the filterStrings List which appears to contain a lis
CONSUMER_CREATED message contains _AMQ_FilterString = "" whereas the
CONSUMER_CLOSED message contains _AMQ_FilterString = null. So the filterStrings
list keeps filling up with empty strings because these don't get removed based
on a null value.
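The mismatch can be reduced to a tiny illustration. This is a hypothetical stand-in for the filterStrings list, not the actual PostOfficeImpl code: add with the empty string seen on CONSUMER_CREATED, then try to remove with the null seen on CONSUMER_CLOSED.

```java
import java.util.ArrayList;
import java.util.List;

public class FilterStringLeakSketch {
    // Returns the list size after one create/close cycle with mismatched values.
    static int createThenClose() {
        List<String> filterStrings = new ArrayList<>();
        filterStrings.add("");        // CONSUMER_CREATED carries _AMQ_FilterString = ""
        filterStrings.remove(null);   // CONSUMER_CLOSED carries null -> "" is NOT removed
        return filterStrings.size();  // the empty string is left behind
    }

    public static void main(String[] args) {
        System.out.println("Leftover entries after one cycle: " + createThenClose());
    }
}
```

Each create/close cycle leaves one orphaned empty string behind, which is exactly the slow growth described above.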
Jan
From: Jan Šmucr<mailto:jan.sm...@aim
so they are longer-lived - preferably only
being removed once the application needs to stop consuming.
If there is a need to throttle and/or control threading and parallel
processing of messages, perhaps Camel would be a good fit.
Hope this helps.
Art
On Wed, Aug 9, 2023 at 10:44 PM J
a consumer is closed so at first glance it appears you're
leaking consumers.
Justin
On Wed, Aug 9, 2023 at 7:07 AM Jan Šmucr wrote:
> Hello.
> I’m using a simple master-slave Artemis 2.26.0 cluster, and I’m noticing
> heap usage growing more and more each day no matter the throughput.
Hello.
I’m using a simple master-slave Artemis 2.26.0 cluster, and I’m noticing heap
usage growing more and more each day regardless of throughput. There are about
670 sessions open at the same time for producers and consumers. Consumers are
polling queues on a regular basis, some once a second (m
My producers used to leak sessions, and I could easily get as far as about 16k
sessions per producer without much effect on the broker.
What I do now is that I have a pool of worker threads, and each of the threads
has its own session opened. Whenever there's a need, I ask the pool to do a
Calla
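The pool-of-worker-threads approach, with one session per thread, can be sketched as follows. `FakeSession` is a made-up stand-in for the real core/JMS session; in the real setup each worker would create its session once from a shared session factory.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SessionPerThreadSketch {
    // Stand-in for a core/JMS session (sessions are not thread-safe,
    // hence one per worker thread).
    static class FakeSession { }

    // One session per worker thread, created lazily on first use.
    static final ThreadLocal<FakeSession> SESSION =
            ThreadLocal.withInitial(FakeSession::new);

    static boolean sessionIsReused() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        try {
            // Two tasks on the same worker thread see the same session instance.
            Future<FakeSession> first = pool.submit(SESSION::get);
            Future<FakeSession> second = pool.submit(SESSION::get);
            return first.get() == second.get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Session reused across tasks: " + sessionIsReused());
    }
}
```

The point of the design is that sessions stay bound to a single thread and live as long as the worker does, instead of being opened and closed per task.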
I'd add my opinion from the maintenance perspective.
We use both setups in our cloud service and when it comes to solving issues
it's a lot better to have separate queues, as you can pause these individually.
Selecting what not to consume on the consumer side can get very complicated.
Jan
On
Hello.
I’m collecting some guidelines on how to recover from a split-brain situation
in the master-slave configuration, and I’ve been wondering: given that both
servers in this situation are live, is it possible to use the scale-down
functionality to move messages from slave to master before
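For reference, scale-down is normally configured under the HA policy in broker.xml, roughly like this (the connector name is a placeholder):

```xml
<ha-policy>
   <live-only>
      <scale-down>
         <connectors>
            <connector-ref>master-connector</connector-ref>
         </connectors>
      </scale-down>
   </live-only>
</ha-policy>
```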
Hello.
I’ve been working with Artemis for a while, and back in the day, when I was
developing our core clients, I had some issues with the message property key
format. We use Java-properties-style metadata, such as:
control.payload.origin=/home/AIM/inbound/file.txt
control.source.environment=test
I can't see one very important point being mentioned, and that is non-heap
memory consumption. Does your metric tool cover the entire instance's memory
consumption, or is it monitoring just the Artemis heap? A rule of thumb is to
set Xmx to around half of the total memory available for the
time as you mentioned. We do re-attach the EBS or EFS data volume so no
messages are lost when the broker eventually starts back up.
Hopefully that helps!
- Lucas
On 2022-07-20, 9:54 PM, "Jan Šmucr" wrote:
>
> On Sat, Jul 23, 2022 at 2:41 PM Clebert Suconic
> wrote:
>
>> Change your message in a way that is compatible?
>>
>> On Fri, Jul 22, 2022 at 3:36 PM Jan Šmucr
>> wrote:
>>
>>> The reason for this is that there's a whole infrastructure built
ally mean 'using the same Core
client as the existing senders/receivers'?
On Mon, 25 Jul 2022 at 11:10, Jan Šmucr wrote:
>
> > For the broker, it would ordinarily treat the large message as a
> > chunked stream if doing e.g core->core, or amqp->amqp...but in thi
ge structure the broker's core->AMQP
converter is geared around, meaning it falls back to one that tries to
convert it as a string, which presumably fails due to starting to read
your bare payload as a size indicator)
On Mon, 25 Jul 2022 at 06:40, Jan Šmucr wrote:
>
> Thank you for the fee
o change
your producer to have the exact format the converter would have.
On Sat, Jul 23, 2022 at 2:41 PM Clebert Suconic
wrote:
> Change your message in a way that is compatible?
>
> On Fri, Jul 22, 2022 at 3:36 PM Jan Šmucr
> wrote:
>
>> The reason for this is that there
The reason for this is that there's a whole infrastructure built using the core
protocol, and now we need to connect a Python-based Lambda capable of receiving
large messages. Is there any other, core-compatible method?
Jan
Dne 22. 7. 2022 21:21 napsal uživatel Clebert Suconic
:
If you expect
Hello.
So I’ve done some testing, and it appears that NFS is around half the speed
for our workload type. Certainly not very good news.
Jan
From: Jan Šmucr<mailto:jan.sm...@aimtecglobal.com>
Sent: Thursday, July 21, 2022 6:53
To: users@activemq.apache.org<mai
'd recommend mirroring [1]. Normal HA solutions (e.g.
shared storage or replication) are really designed to be used on local
networks with very low latency.
Justin
[1]
https://activemq.apache.org/components/artemis/documentation/latest/amqp-broker-connections.html#mirroring
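For reference, a broker connection mirror is configured roughly like this in broker.xml (the URI and name are placeholders):

```xml
<broker-connections>
   <amqp-connection uri="tcp://other-host:61616" name="dr-mirror">
      <mirror/>
   </amqp-connection>
</broker-connections>
```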
On Wed, Jul 20,
Hello.
We too are trying to switch from replication to a simpler model, especially
when it comes to the single master-slave pair cluster, which suffers from
split-brain issues. AWS EFS and the shared-storage model make sense.
The idea is that before we expand our cluster, there would be on