Looking at this more, I believe you can simply disable the HTTP-specific idle checking using httpClientIdleScanPeriod=-1 and rely on the normal ping mechanism [1] to keep the connection alive.
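As a concrete (untested) sketch of that suggestion, the client-side URL might look like the following. The hostname is a placeholder; `httpClientIdleScanPeriod=-1` is the disable value mentioned above, while `clientFailureCheckPeriod` and `connectionTTL` are the ping-related settings from the linked connection-TTL docs, shown with their documented default values only to make the tuning explicit:

```properties
# Hypothetical client configuration: disable the HTTP-specific idle scanner
# and rely on the regular ping/TTL mechanism to keep the connection alive.
spring.artemis.broker-url=tcp://broker.example.com:61616?httpEnabled=true&httpClientIdleScanPeriod=-1

# Optional, if you want the ping behaviour spelled out explicitly
# (values shown are the documented defaults, in milliseconds):
# clientFailureCheckPeriod - how often the client checks/pings the connection
# connectionTTL            - how long the broker tolerates a silent connection
#spring.artemis.broker-url=tcp://broker.example.com:61616?httpEnabled=true&httpClientIdleScanPeriod=-1&clientFailureCheckPeriod=30000&connectionTTL=60000
```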
I'll probably actually remove the HTTP-specific idle mechanism completely and update the docs, as it seems to duplicate the existing ping functionality.

Justin

[1] https://github.com/apache/artemis/blob/main/docs/user-manual/connection-ttl.adoc#detecting-failure-from-the-client

On Tue, Dec 16, 2025 at 9:01 PM Justin Bertram <[email protected]> wrote:

> > Is the observed behaviour a bug in the Artemis codebase?
>
> Yes, I believe it is. I've opened a Jira to track this [1].
>
> > Or are we configuring something wrong?
>
> No.
>
> > What are "appropriate" values for httpClientIdleTime and why?
>
> Assuming everything works like it is supposed to (which it currently
> isn't), "appropriate" values really depend on your use-case. With settings
> like this you should aim to strike a balance between sending enough
> requests to keep an idle connection alive (assuming you want to) and not
> wasting bandwidth with unnecessary requests.
>
> > Shouldn't the default values work out-of-the-box?
>
> Generally speaking, defaults should work out-of-the-box. However, they
> won't always, which is why they are configurable in the first place. You
> will need to determine what values work best for your use-case.
>
> > Would it be possible to add more detail to the documentation...?
>
> Yes.
>
> Justin
>
> [1] https://issues.apache.org/jira/browse/ARTEMIS-5819
>
> On Tue, Dec 16, 2025 at 7:53 AM Stepien, Grzegorz via users <[email protected]> wrote:
>
>> Hi,
>> we are trying to configure our Artemis client for HTTP-based
>> communication and are running into repeated connection losses when using
>> the default httpClientIdleTime setting. We are unsure whether this is a
>> configuration error on our end or an Artemis bug. Maybe some of you can
>> help?
>>
>> Background:
>> - We have an ActiveMQ Artemis 2.44.0 broker instance running in our
>> company cloud
>> - We have a Spring-Boot-based client application which listens on various
>> queues and topics on that broker instance
>> - Several instances of that client application are deployed at various
>> customers on-premise
>> -- Our client has a 'spring-boot-starter-artemis' dependency (3.5.8) and
>> an artemis-bom dependency (2.44.0)
>> --- We could also reproduce the problem with Artemis versions 2.41.0 and
>> 2.42.0 (both client and broker) and also with older Spring Boot versions,
>> so it does not seem to be a new phenomenon.
>> -- The client registers several queue and topic listeners using a
>> JmsListenerEndpointRegistrar
>>
>> Now several of our customers only allow outgoing HTTP communication,
>> which is why we have been experimenting with configuring our client to
>> use HTTP transport as explained in
>> "https://artemis.apache.org/components/artemis/documentation/latest/configuring-transports.html#configuring-netty-http".
>> This results in the errors described below:
>>
>> Setup:
>> I have reproduced the problem locally: I have an Artemis 2.44.0 running
>> on my local machine and try to connect to that Artemis with our Spring
>> Boot client application (also running locally) using
>> "spring.artemis.broker-url=tcp://localhost:61616?httpEnabled=true".
>> This, unfortunately, results in the connection to the broker being
>> periodically closed and reestablished, resulting in some nasty error
>> logs on both the client and the broker side
>> ("spring.artemis.broker-url=tcp://localhost:61616" works fine):
>>
>> Client side:
>> "Caused by:
>> org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException:
>> [errorType=NOT_CONNECTED message=AMQ219006: Channel disconnected]
>> at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.connectionDestroyed(ClientSessionFactoryImpl.java:417)
>> at org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector$Listener.lambda$connectionDestroyed$0(NettyConnector.java:1254)
>> ... 6 more
>> ...
>> 2025-12-16T14:09:38,480+01 o.s.j.c.CachingConnectionFactory: Encountered
>> a JMSException - resetting the underlying JMS Connection
>> jakarta.jms.IllegalStateException: Session is closed
>> at org.apache.activemq.artemis.jms.client.ActiveMQSession.checkClosed(ActiveMQSession.java:1416)
>> at org.apache.activemq.artemis.jms.client.ActiveMQSession.getTransacted(ActiveMQSession.java:302)
>> at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
>> at java.base/java.lang.reflect.Method.invoke(Method.java:580)
>> at org.springframework.jms.connection.CachingConnectionFactory$CachedSessionInvocationHandler.invoke(CachingConnectionFactory.java:422)
>> at jdk.proxy2/jdk.proxy2.$Proxy138.getTransacted(Unknown Source)
>> at org.springframework.jms.listener.AbstractMessageListenerContainer.commitIfNecessary(AbstractMessageListenerContainer.java:858)
>> at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:373)
>> ..."
>> Broker side:
>> "2025-12-16 14:09:39,526 WARN [org.apache.activemq.artemis.core.server]
>> AMQ222061: Client connection failed, clearing up resources for session
>> 79f58d98-da80-11f0-aced-0a0027000018
>> 2025-12-16 14:09:39,527 WARN [org.apache.activemq.artemis.core.server]
>> AMQ222107: Cleared up resources for session
>> 79f58d98-da80-11f0-aced-0a0027000018
>> 2025-12-16 14:09:39,527 WARN [org.apache.activemq.artemis.core.server]
>> AMQ222061: Client connection failed, clearing up resources for session
>> 79f5dbb9-da80-11f0-aced-0a0027000018
>> 2025-12-16 14:09:39,528 WARN [org.apache.activemq.artemis.core.server]
>> AMQ222107: Cleared up resources for session
>> 79f5dbb9-da80-11f0-aced-0a0027000018
>> 2025-12-16 14:09:39,528 WARN [org.apache.activemq.artemis.core.server]
>> AMQ222061: Client connection failed, clearing up resources for session
>> 79f650ea-da80-11f0-aced-0a0027000018
>> ..."
>>
>> Now I have done some digging in the Artemis codebase, and it seems that
>> the default value for httpClientIdleTime is 500 ms. I have experimented
>> with that value, and the above problems disappear if httpClientIdleTime
>> is set to a large enough value - 1 s seems to be the threshold at which
>> the errors disappear on my local machine:
>>
>> "spring.artemis.broker-url=tcp://localhost:61616?httpEnabled=true&httpClientIdleTime=1000"
>>
>> I am no expert on the Artemis code, but I have noticed that the
>> org.apache.activemq.artemis.core.remoting.impl.netty.HttpAcceptorHandler.channelRead(...)
>> method (artemis-server module) only schedules a new response for POST
>> requests, while the
>> org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.HttpIdleTimer.run()
>> method (artemis-core-client module) sends an empty "keep-alive" GET
>> request. Is it possible that the latter never gets answered, resulting
>> in the observed errors?
>>
>> So my question pretty much boils down to:
>> - Is the observed behaviour a bug in the Artemis codebase?
>> - Or are we configuring something wrong? What are "appropriate" values
>> for httpClientIdleTime and why? Shouldn't the default values work
>> out-of-the-box?
>>
>> Thank you and best regards!
>> Grzegorz
>>
>> P.S.: Would it be possible to add more detail to the documentation at
>> https://artemis.apache.org/components/artemis/documentation/latest/configuring-transports.html#configuring-netty-http?
>> It is quite vague about what those parameters do in detail, which ones
>> are to be set on the broker side and which on the client side, what
>> their unit is (seconds or milliseconds?) and what their default values
>> are. I had to look that up in the codebase.
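For readers hitting the same symptom before any fix or doc change lands, the workaround found above can be summarized as a one-line change. Note that the 1000 ms value is the empirical threshold from a single local test in this thread, so treat it as machine-dependent rather than a recommended constant:

```properties
# Workaround from this thread: raise httpClientIdleTime above its 500 ms
# default so the client's unanswered keep-alive GETs stop tearing down
# the connection. 1000 ms worked locally; your environment may differ.
spring.artemis.broker-url=tcp://localhost:61616?httpEnabled=true&httpClientIdleTime=1000
```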
