Hi Ephemeris-

Seeing a RemoveInfo and a ShutdownInfo sounds like something (e.g. Spring JMS
or Camel) called consumer.close() explicitly.

The Spring JMS Template (used by Camel) has a lot of settings that impact how
consumers get closed, especially cacheLevel, and the behavior varies with the
acknowledgement mode.
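
For reference, in Camel this surfaces as the cacheLevelName endpoint option.
A sketch (queue name borrowed from your logs):

    activemq:queue:xxxx.internal.queue?cacheLevelName=CACHE_CONSUMER

If I recall correctly, Spring defaults to CACHE_NONE when an external
transaction manager is configured, and at that level the consumer is closed
and re-created around each receive, which is exactly what a
RemoveInfo/ShutdownInfo pair on the wire looks like.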

I’d review those settings, and also test with a simple Java-based (no Spring,
no Camel) connection to attempt to reproduce the issue or confirm that it is
Spring JMS + Camel.
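
Something like this minimal, Camel-free consumer is what I have in mind (the
class name is mine; URL, credentials, and queue name are copied from your
config and logs below, and it assumes the activemq-client jar on the
classpath):

    import javax.jms.Connection;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PlainJmsTest {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://alice-echanges-activemq:61616?keepAlive=true)");
            factory.setUserName("alice");
            factory.setPassword("alice");
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(
                    session.createQueue("xxxx.internal.queue"));
                // Keep one consumer open well past the 30-minute mark; if
                // tcpdump still shows RemoveInfo/ShutdownInfo here, the broker
                // or the network is closing it, not Spring JMS/Camel
                for (int i = 0; i < 60; i++) {
                    Message message = consumer.receive(60_000); // wait up to 60s
                    System.out.println(message == null
                        ? "no message" : "got " + message.getJMSMessageID());
                }
            } finally {
                connection.close();
            }
        }
    }

If that stays connected past 30 minutes, the suspect is the Spring JMS
listener container configuration rather than the broker or the k8s service.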

Thanks,
Matt Pavlovich

> On Jul 12, 2023, at 5:53 AM, Ephemeris Lappis <ephemeris.lap...@gmail.com> 
> wrote:
> 
> Hello Matt !
> 
> I've executed a tcpdump in the karaf pod, and I observe very strange
> things (see attached png file). After the initial connection, the
> client and server periodically exchange KeepAliveInfo messages. That seems
> to be a normal OpenWire dialog. But later, the client (10.153.50.183)
> changes behavior and sends a RemoveInfo that seems to be accepted by the
> server. Then a ShutdownInfo is sent, and the tcp closing sequence of
> fin/ack terminates the socket. Then a new socket is opened and a new
> OpenWire dialog begins.
> 
> Why does the client decide to break the keep-alive dialog after 30 minutes?
> Is there a default option applied in this configuration that we've never
> observed before with exactly the same configuration files?
> 
> Thanks again.
> 
> Regards.
> 
> 
> On Mon, Jul 10, 2023 at 18:27, Matt Pavlovich <mattr...@gmail.com> wrote:
>> 
>> Hi Ephemeris-
>> 
>> 1. Yes, you should use failover:() even with one URL (especially if using a
>> load balancer)
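>> 
>> For example, with the reconnect options I'd typically start from (both are
>> standard failover transport options; tune the values to taste):
>> 
>>    failover:(tcp://alice-echanges-activemq:61616)?maxReconnectAttempts=-1&initialReconnectDelay=100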
>> 
>> 2. These Camel errors are tricky to troubleshoot without seeing all the
>> settings. Camel JMS uses Spring JMS under the covers, and understanding how
>> that works in combination with connection pooling is tricky. Specifically,
>> setting the max pooled objects is _not_ the same as max connections. Pooled
>> objects may include connections + sessions + consumers, depending on your
>> configuration. Setting that number to 25 may be too low if your total of
>> in-use connections + sessions + consumers reaches 25 or higher. I think the
>> default is 500, so you are significantly lowering it.
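>> 
>> To make the connections-vs-pooled-objects distinction concrete, here is a
>> rough sketch with pooled-jms (setter names from
>> org.messaginghub.pooled.jms.JmsPoolConnectionFactory; the numbers are only
>> placeholders):
>> 
>>    import org.apache.activemq.ActiveMQConnectionFactory;
>>    import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;
>> 
>>    JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
>>    pool.setConnectionFactory(new ActiveMQConnectionFactory(
>>        "failover:(tcp://alice-echanges-activemq:61616)"));
>>    // connections and sessions are capped separately; maxConnections alone
>>    // does not bound the total number of pooled objects
>>    pool.setMaxConnections(8);              // distinct physical connections
>>    pool.setMaxSessionsPerConnection(500);  // sessions (and consumers) per connection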
>> 
>> Adding more debug and additional logging would be needed to find the root
>> cause.
>> 
>> Also note, this level of debugging is beyond what it is reasonable to expect
>> to resolve through the ActiveMQ user mailing list and/or Stack Overflow-style
>> developer support.
>> 
>> Thanks,
>> Matt Pavlovich
>> 
>>> On Jul 10, 2023, at 4:41 AM, Ephemeris Lappis <ephemeris.lap...@gmail.com> 
>>> wrote:
>>> 
>>> Hello again.
>>> 
>>> Perhaps you've already seen my previous mail... I've tried some
>>> options and none of them seems to fix my connection issues.
>>> 
>>> I've tried first adding "failover" around my tcp URL, then
>>> "keepAlive=true" on the tcp transport, and then the two changes
>>> together, and I have exactly the same errors.
>>> 
>>> My connection factory configuration is now:
>>> 
>>> # Connection configuration
>>> type=activemq
>>> connectionFactoryType=ConnectionFactory
>>> 
>>> # Names
>>> name=alice-jms
>>> osgi.jndi.service.name=jms/alice
>>> 
>>> # Connection factory properties
>>> jms.url=failover:(tcp://alice-echanges-activemq:61616?keepAlive=true)
>>> jms.user=alice
>>> jms.password=alice
>>> jms.clientIDPrefix=CATERPILLAR
>>> 
>>> # Set XA transaction
>>> xa=false
>>> 
>>> # Connection pooling
>>> pool=pooledjms
>>> # Maximum number of connections for each user+password (default 1)
>>> pool.maxConnections=256
>>> # Maximum idle time in seconds (default 30 seconds)
>>> pool.connectionIdleTimeout=30
>>> # Interval for connection idle-time checking in milliseconds (default 0 for none)
>>> pool.connectionCheckInterval=15000
>>> 
>>> Except for the URL that I changed as explained before, this is the same
>>> configuration that we have been using for a long time, whether between
>>> VMs, between docker containers in compose, or between containers and
>>> VMs... We've never observed this kind of error before, and our K8s
>>> experts cannot explain what may be different in the cluster environment
>>> that could be the origin of the issue...
>>> 
>>> Any more ideas from similar experiences?
>>> 
>>> Thanks again.
>>> 
>>> Regards.
>>> 
>>> 
>>> 
>>> 
>>> On Fri, Jul 7, 2023 at 18:21, Matt Pavlovich <mattr...@gmail.com> wrote:
>>>> 
>>>> Hi Ephemeris-
>>>> 
>>>> The recommendation when running in the cloud is to be sure to enable a lot
>>>> of self-healing features. Networking (especially in Kubernetes) can be
>>>> inconsistent compared to a local developer desktop.
>>>> 
>>>> 1. Use failover:( .. ) transport in client urls
>>>> 2. Use PooledConnectionFactory (esp w/ Camel)
>>>> 3. Configure PooledConnectionFactory to expire connections (see the sketch
>>>> after this list)
>>>> 4. If possible, add a scheduled task to your Camel routes to periodically 
>>>> restart these routes (this releases connections and allows new connections 
>>>> to spin up)
>>>> 5. Look into using camel-sjms vs the Spring JMS-based component to see if
>>>> it's a better fit
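>>>> 
>>>> For #3, a rough sketch of what I mean (pooled-jms setter names; both
>>>> values are in milliseconds in that API, if I read it right, and are only
>>>> a starting point):
>>>> 
>>>>    JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
>>>>    pool.setConnectionIdleTimeout(30_000);   // retire connections idle longer than 30s
>>>>    pool.setConnectionCheckInterval(15_000); // sweep for idle connections every 15s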
>>>> 
>>>> Thanks,
>>>> -Matt
>>>> 
>>>>> On Jul 7, 2023, at 10:47 AM, Ephemeris Lappis 
>>>>> <ephemeris.lap...@gmail.com> wrote:
>>>>> 
>>>>> Hello.
>>>>> 
>>>>> We observe strange messages in our logs about ActiveMQ consumers.
>>>>> 
>>>>> 17:33:29.684 WARN [Camel (XXXX_context) thread #452 -
>>>>> JmsConsumer[xxxx.internal.queue]] Setup of JMS message listener
>>>>> invoker failed for destination 'xxxx.internal.queue' - trying to
>>>>> recover. Cause: The Consumer is closed
>>>>> 
>>>>> This message comes from Camel routes executed in Karaf. Similar
>>>>> messages are produced by consumers in SpringBoot applications.
>>>>> 
>>>>> These WARN messages do not always seem to be related to real issues, but
>>>>> sometimes application activities have failed after these logs appeared,
>>>>> and a restart of the clients has been required to restore correct
>>>>> behavior.
>>>>> 
>>>>> We have only seen this on a full Kubernetes configuration where both
>>>>> the ActiveMQ server and its clients (Camel/Karaf or SpringBoot
>>>>> applications) run in the same namespace as distinct pods, using a k8s
>>>>> service to reach the openwire endpoint.
>>>>> 
>>>>> Similar configurations with ActiveMQ or Karaf executed in docker
>>>>> containers or VM have never led to messages of this kind.
>>>>> 
>>>>> So, what is the real meaning of this message, whose triggering conditions
>>>>> I can't identify?
>>>>> 
>>>>> Has anyone already experienced this behavior, in Kubernetes or not?
>>>>> 
>>>>> Thanks in advance.
>>>>> 
>>>>> Regards.
>>>> 
>> 
