Copilot commented on code in PR #2374:
URL: https://github.com/apache/groovy/pull/2374#discussion_r2752094751
##########
src/main/java/org/codehaus/groovy/vmplugin/v8/CacheableCallSite.java:
##########
@@ -124,4 +124,22 @@ public MethodHandle getFallbackTarget() {
public void setFallbackTarget(MethodHandle fallbackTarget) {
this.fallbackTarget = fallbackTarget;
}
+
+ /**
+ * Clear the cache entirely. Called when metaclass changes to ensure
+ * stale method handles are discarded.
+ * <p>
+ * This method synchronizes on the lruCache to ensure atomicity with
+ * concurrent {@link #getAndPut} operations.
+ */
+ public void clearCache() {
+ // Clear the latest hit reference and the LRU cache atomically
+ synchronized (lruCache) {
+ latestHitMethodHandleWrapperSoftReference = null;
+ lruCache.clear();
+ }
+
Review Comment:
The volatile field `latestHitMethodHandleWrapperSoftReference` is written
inside the synchronized block here, but the `getAndPut()` method reads and
writes it without holding any lock (lines 75-83). Mixing the two
synchronization mechanisms like this is inconsistent.
For consistency, either:
1. Always access the volatile field outside locks (move the assignment on
line 138 out of the synchronized block), or
2. Always access the volatile field under the lock (move the accesses on
lines 75-83 into a synchronized block).
Option 1 is preferable for performance: the volatile keyword already
provides the necessary visibility guarantees, so performing the write under
the lock adds no safety and only obscures the intended synchronization
strategy. A sketch of the resulting pattern follows the suggestion below.
```suggestion
        // Clear the LRU cache under lock to synchronize with
        // concurrent getAndPut operations
        synchronized (lruCache) {
            lruCache.clear();
        }
        // Clear the latest hit reference using volatile semantics,
        // outside the lock
        latestHitMethodHandleWrapperSoftReference = null;
```
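
For reference, here is a minimal, self-contained sketch of the pattern that
option 1 produces: the latest-hit reference is a volatile field accessed
lock-free on the fast path, while the LRU map is only touched under its own
monitor. This is an illustrative simplification, not the actual
`CacheableCallSite` code; the `CallSiteCacheSketch` class, the `String`-keyed
LRU map, its capacity, and the bodies of `getAndPut`/`clearCache` shown here
are all assumptions made for the example.
```java
import java.lang.invoke.MethodHandle;
import java.lang.ref.SoftReference;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only, not the real CacheableCallSite: it shows the split
// between a volatile fast-path reference (accessed outside locks) and an LRU
// map guarded by its own monitor.
class CallSiteCacheSketch {

    // Fast-path cache: read and written without holding the lock; volatile
    // guarantees that the latest write is visible to other threads.
    private volatile SoftReference<MethodHandle> latestHit;

    // Simple access-ordered LRU map, a stand-in for the real lruCache.
    private final Map<String, MethodHandle> lruCache =
            new LinkedHashMap<String, MethodHandle>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, MethodHandle> eldest) {
                    return size() > 32; // arbitrary capacity for the sketch
                }
            };

    MethodHandle getAndPut(String key, MethodHandle candidate) {
        // Lock-free fast path via the volatile field (a real implementation
        // would also verify that the hit actually matches this call).
        SoftReference<MethodHandle> ref = latestHit;
        MethodHandle cached = (ref == null) ? null : ref.get();
        if (cached != null) {
            return cached;
        }
        // Slow path: consult and update the LRU map under its monitor.
        MethodHandle result;
        synchronized (lruCache) {
            result = lruCache.computeIfAbsent(key, k -> candidate);
        }
        // Publish the latest hit with a plain volatile write, outside the lock.
        latestHit = new SoftReference<>(result);
        return result;
    }

    void clearCache() {
        // Option 1 from the review: clear the map under its lock...
        synchronized (lruCache) {
            lruCache.clear();
        }
        // ...and reset the volatile fast-path reference outside the lock.
        latestHit = null;
    }
}
```
The point the suggestion makes is that the volatile write in `clearCache()`
gains nothing from the monitor, because `getAndPut()` never reads the field
while holding it; keeping the write outside the lock keeps the fast path
lock-free and makes the intended strategy explicit.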