Can you give more details about the timeline and observed behavior? Did the broker declare the store to be full while the NFS server was offline or after it came back? If after, how long after? How much data was in the persistent store before the NFS server dropped? What's the approximate rate of persistent messages into the broker and the rate at which they're typically consumed?
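For reference, here's a minimal sketch of how you could read those store-usage numbers over JMX while reproducing the problem, so you can see whether usage actually climbs during the outage or only afterward. This is just a sketch: it assumes JMX is enabled with a remote connector on the default port 1099 and that the broker uses the default name "localhost"; adjust both for your setup.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class StoreUsageCheck {
        public static void main(String[] args) throws Exception {
            // Default ActiveMQ remote JMX endpoint; adjust host/port as needed.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                // "localhost" is the default brokerName; change it if yours differs.
                ObjectName broker = new ObjectName(
                        "org.apache.activemq:type=Broker,brokerName=localhost");
                // StorePercentUsage is the same figure your WARN line reports as
                // percentUsage; StoreLimit matches its "limit=" value.
                int storePct = (Integer) mbs.getAttribute(broker, "StorePercentUsage");
                long storeLimit = (Long) mbs.getAttribute(broker, "StoreLimit");
                System.out.println("Store usage: " + storePct + "% of " + storeLimit + " bytes");
            }
        }
    }

If that percentage stays pinned at 100% even after the NFS mount recovers and consumers drain the queue, that would tell us the store isn't releasing space, which would be useful to know.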
What do the broker's logs say during and immediately after the NFS outage? I'm not sure what a short NFS outage would look like from the broker's perspective (would the OS just hang on disk writes, or would exceptions be thrown, for example), so I'm not sure how the broker would react, but the logs might tell us how the OS manifested the outage to the broker. On your question about automatic recovery, see the flow-control sketch after your quoted message below.

Tim

On Thu, Mar 24, 2022, 7:55 AM ragul rangarajan <[email protected]> wrote:

> Hi Team,
>
> In my setup, we have an ActiveMQ service that picks up files located on a
> remote server via NFS and posts notifications for them to a queue.
>
> Due to an issue, the NFS server became unreachable from our server, which
> in turn affected ActiveMQ. The NFS server recovered within a few seconds:
>
> > kernel: nfs: server 10.**.**.** not responding, still trying
> > kernel: nfs: server 10.**.**.** OK
>
> When this NFS fluctuation occurred, the queue filled up and consumed the
> persistent store, which reached 100% usage. The issue was resolved only
> after the application was restarted:
>
> > WARN | Usage(default:store) percentUsage=100%, usage=1073764014,
> > limit=1073741824, percentUsageMinDelta=1%: Persistent store is Full, 100%
> > of 1073741824. Stopping producer (ID:SERVER-1641938964016-1:1:3:1) to
> > prevent flooding queue://filenotify. See
> > http://activemq.apache.org/producer-flow-control.html for more info
> > (blocking for: 72667s) | org.apache.activemq.broker.region.Queue |
> > ActiveMQ Transport: tcp:///10.**.**.**:59904@61616
>
> We would like to know whether ActiveMQ can recover from this automatically
> in any way, or whether we need a restart to recover from this issue.
>
> Thanks and Regards,
> Ragul R
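On the blocking itself: producer flow control (the page linked in your WARN line) can be configured to fail the send instead of blocking indefinitely. Below is a minimal sketch using an embedded BrokerService; if you run the broker standalone, the same setting goes in activemq.xml as sendFailIfNoSpace="true" on the systemUsage element. To be clear, this is a mitigation so producers see an exception rather than hang for hours; it doesn't explain why the store stayed at 100% after the NFS server came back.

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.usage.SystemUsage;

    public class FailFastBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            SystemUsage usage = broker.getSystemUsage();
            // 1 GB store limit, matching the limit=1073741824 in your WARN line.
            usage.getStoreUsage().setLimit(1024L * 1024 * 1024);
            // When the store is full, throw an exception back to the producer
            // instead of blocking the send (your log shows one blocked for 72667s).
            usage.setSendFailIfNoSpace(true);
            broker.start();
            broker.waitUntilStopped();
        }
    }

With that in place, producers need to handle the resulting JMSException (retry, back off, or alert), but the broker no longer wedges sends until a restart.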
