[ https://issues.apache.org/jira/browse/ARTEMIS-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885627#comment-17885627 ]

Juan edited comment on ARTEMIS-5047 at 9/28/24 10:24 PM:
---------------------------------------------------------

I discovered that the Windows SMB client is indirectly changing the Last 
Modified timestamp due to its internal caching behaviour. The client maintains 
a local copy of the file’s metadata when caching is enabled. When the broker 
(Artemis) accesses the lock file for validation, the client compares its local 
cache with the server’s state and tries to synchronize them, which triggers a 
metadata update (e.g., SMB2 SET_INFO) and modifies the Last Modified timestamp, 
even if the file itself wasn’t altered.
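This is easy to reproduce outside the broker. The following standalone probe (my own sketch, not Artemis code; the class name is a placeholder) reads a file on the share the way a validator would and reports whether the observed Last Modified time moves. Point it at any small file on the share (a copy of server.lock, say, so it does not collide with the broker's byte-range lock):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.nio.file.attribute.FileTime;

// Standalone probe, NOT Artemis code: checks whether a read-only access
// over the SMB share is enough to perturb the file's Last Modified time.
public class SmbTimestampProbe {
    public static void main(String[] args) throws IOException, InterruptedException {
        // e.g. a copy of server.lock on the FSxN share (placeholder path)
        Path file = Paths.get(args[0]);

        FileTime before = Files.getLastModifiedTime(file);

        // Read-only access, comparable to the broker validating the lock file.
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ch.read(ByteBuffer.allocate(1), 0);
        }

        Thread.sleep(2_000); // allow the SMB client to write back cached metadata
        FileTime after = Files.getLastModifiedTime(file);

        System.out.println("before=" + before + ", after=" + after
                + (before.equals(after) ? " -> stable" : " -> timestamp moved"));
    }
}
{code}

With the cache enabled the second read should show a moved timestamp; with it disabled the two values should match.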

Disabling caching resolves this by avoiding the local metadata updates and 
ensuring consistent timestamps. However, since disabling caching may not always 
be practical, is there a way to handle this within Artemis to prevent the issue?
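For reference, the Windows SMB client's metadata caches can be turned off via the documented LanmanWorkstation parameters (standard Windows registry settings, nothing Artemis-specific; setting a lifetime to 0 disables that cache):

{noformat}
HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
  FileInfoCacheLifetime     = 0   (DWORD, disables the file information cache)
  FileNotFoundCacheLifetime = 0   (DWORD, disables the file-not-found cache)
  DirectoryCacheLifetime    = 0   (DWORD, disables the directory cache)
{noformat}

A restart of the Workstation service (or a reboot) may be required for the change to take effect.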

After disabling the cache, the lock monitor checks pass:


{noformat}
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:20|28 Sep 2024 22:02:20,145  INFO [main] (ActiveMQServerLogger_impl.java:205) - AMQ221020: Started NIO Acceptor at localhost,ip-10-110-0-140:61616 for protocols [CORE,STOMP]
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:20|28 Sep 2024 22:02:20,161 DEBUG [main] (NettyAcceptor.java:463) - Acceptor using nio
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:20|28 Sep 2024 22:02:20,161 DEBUG [main] (ManagementServiceImpl.java:834) - ignoring message Notification[uid=null, type=ACCEPTOR_STARTED, properties=TypedProperties[factory=org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptorFactory, host=localhost,ip-10-110-0-140, port=61613]] as the server is not initialized
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:20|28 Sep 2024 22:02:20,161  INFO [main] (ActiveMQServerLogger_impl.java:205) - AMQ221020: Started NIO Acceptor at localhost,ip-10-110-0-140:61613 for protocols [STOMP]
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:20|28 Sep 2024 22:02:20,161 DEBUG [main] (FileLockNodeManager.java:330) - writing status: L
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:20|28 Sep 2024 22:02:20,161 DEBUG [main] (FileLockNodeManager.java:403) - trying to lock position: 0
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:20|28 Sep 2024 22:02:20,161 DEBUG [main] (FileLockNodeManager.java:407) - locked position: 0
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:20|28 Sep 2024 22:02:20,161 DEBUG [main] (FileLockNodeManager.java:428) - lock: sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:20|28 Sep 2024 22:02:20,161 DEBUG [main] (FileLockNodeManager.java:349) - Modified server.lock at 1727560940158
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:20|28 Sep 2024 22:02:20,161 DEBUG [main] (FileLockNodeManager.java:476) - Starting the lock monitor
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:20|28 Sep 2024 22:02:20,161  INFO [main] (ActiveMQServerLogger_impl.java:101) - AMQ221007: Server is now active
{noformat}
{noformat}
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:38|28 Sep 2024 22:02:38,380 DEBUG [Thread-0 (ActiveMQ-scheduled-threads)] (FileLockNodeManager.java:542) - Lock appears to be valid; double check by reading status
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:38|28 Sep 2024 22:02:38,380 DEBUG [Thread-0 (ActiveMQ-scheduled-threads)] (FileLockNodeManager.java:358) - getting state...
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:38|28 Sep 2024 22:02:38,380 DEBUG [Thread-0 (ActiveMQ-scheduled-threads)] (FileLockNodeManager.java:403) - trying to lock position: 0
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:38|28 Sep 2024 22:02:38,380 DEBUG [Thread-0 (ActiveMQ-scheduled-threads)] (FileLockNodeManager.java:407) - locked position: 0
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:38|28 Sep 2024 22:02:38,380 DEBUG [Thread-0 (ActiveMQ-scheduled-threads)] (FileLockNodeManager.java:428) - lock: sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:38|28 Sep 2024 22:02:38,380 DEBUG [Thread-0 (ActiveMQ-scheduled-threads)] (FileLockNodeManager.java:376) - state: L
FINEST|3896/0|Service com.docshifter.mq.DocShifterMQ|24-09-28 22:02:38|28 Sep 2024 22:02:38,380 DEBUG [Thread-0 (ActiveMQ-scheduled-threads)] (FileLockNodeManager.java:545) - Lock appears to be valid; triple check by comparing timestamp
{noformat}
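For context, this is the check the monitor appears to perform, sketched from the debug output above (my reading of the behaviour, not the actual FileLockNodeManager source): the broker remembers the millisecond timestamp at which it last wrote server.lock (1727560940158 above) and later compares it against the file's current Last Modified time.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the lock monitor's "triple check", inferred from the debug
// output above; hypothetical code, not the actual Artemis source.
public class LockTimestampCheck {
    private final Path lockFile;              // e.g. <data-dir>/server.lock
    private volatile long lastKnownModified;  // recorded when the broker wrote the file

    LockTimestampCheck(Path lockFile) throws IOException {
        this.lockFile = lockFile;
        this.lastKnownModified = Files.getLastModifiedTime(lockFile).toMillis();
    }

    // Returns false once anything else touches the file's metadata,
    // including (in this case) the SMB client's cache write-back (SMB2 SET_INFO).
    boolean timestampStillOurs() throws IOException {
        return Files.getLastModifiedTime(lockFile).toMillis() == lastKnownModified;
    }
}
{code}

If the SMB cache bumps the timestamp between the write and this check, the comparison fails even though no other broker touched the lock, and the monitor reports the lock as lost.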



> Lost the lock according to the monitor, notifying listeners
> -----------------------------------------------------------
>
>                 Key: ARTEMIS-5047
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-5047
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>          Components: Broker
>    Affects Versions: 2.33.0, 2.37.0
>         Environment: * *Storage Type:* SMB share hosted on AWS FSxN
>  * *Current Version:* NetApp Release 9.14.1P5
>  * *Affected Version:* 2.33.0 / 2.37.0
>  * *Previous Version:* 2.23.1 (issue not observed)
>            Reporter: Juan
>            Priority: Major
>
> After upgrading from version 2.23.1 to 2.33.0, the system has started to fail 
> due to a lost lock. This issue was not present in the previous version and 
> has only surfaced after the upgrade.
> I have seen a similar issue:
> {noformat}
> (AuditLogger_impl.java:2843) - AMQ601767: STOMP connection c3e5b678 for user unknown@10.118.189.108:64628 created
> FINEST|6324/0|Service com.docshifter.mq.DocShifterMQ|24-08-22 01:03:51|22 Aug 2024 01:03:51,813 WARN [Thread-0 (ActiveMQ-scheduled-threads)] (FileLockNodeManager.java:557) - Lost the lock according to the monitor, notifying listeners
> FINEST|6324/0|Service com.docshifter.mq.DocShifterMQ|24-08-22 01:03:51|22 Aug 2024 01:03:51,813 ERROR [Thread-0 (ActiveMQ-scheduled-threads)] (ActiveMQServerLogger_impl.java:805) - AMQ222010: Critical IO Error, shutting down the server. file=Lost NodeManager lock, message=NULL
> FINEST|6324/0|Service com.docshifter.mq.DocShifterMQ|24-08-22 01:03:51|java.io.IOException: lost lock
> {noformat}
>  



