On Nov 12, 2007, at 19:10, Andreas L Delmelle wrote:

Hi Chris

On Nov 12, 2007, at 18:29, Chris Bowditch wrote:

<snip />
Thanks for the prompt reply! I've now committed the check for null in the rehash method.

Good! Just checked the code more closely, and I think your suggestion is actually the only way to deal with it.

Just looked closer, and I may have found a slightly better way of dealing with it.

Since the cause of the problem is CacheCleaners interfering with rehash(), maybe the patch below is a better approach. Once rehash() has been entered and has acquired the lock on the first segment, a flag is set. A CacheCleaner will then only be launched if no rehash is in progress.

Can you try and see if this resolves the issue in your tests? If so, I'll commit it later on (or you can do so, if you like).
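To make the idea concrete, here is a minimal, self-contained sketch of the guard pattern described above (not the actual FOP PropertyCache; class and method names are illustrative only). In the real patch the flag is a plain boolean written while holding the segment lock; the sketch uses volatile so it is safe standing alone:

```java
/** Illustrative sketch of the "no cleanup while rehash runs" guard. */
public class FlagGuardDemo {

    // Plain boolean in the actual patch (written under the segment lock);
    // volatile here so this standalone sketch is thread-safe by itself.
    private volatile boolean rehashInProgress = false;
    private int cleanupsLaunched = 0;

    /** Stands in for the put() branch that would spawn a CacheCleaner. */
    void maybeLaunchCleanup() {
        if (!rehashInProgress) {
            // in FOP this would be: launch cleanup in a separate thread
            cleanupsLaunched++;
        }
    }

    /** Stands in for rehash(): sets the flag once the first lock is held. */
    void rehash() {
        rehashInProgress = true;
        try {
            maybeLaunchCleanup(); // suppressed while the rehash runs
        } finally {
            rehashInProgress = false;
        }
    }

    public static void main(String[] args) {
        FlagGuardDemo demo = new FlagGuardDemo();
        demo.maybeLaunchCleanup();  // launches: no rehash running
        demo.rehash();              // the cleanup inside is suppressed
        demo.maybeLaunchCleanup();  // launches again afterwards
        System.out.println(demo.cleanupsLaunched);
    }
}
```

Running the sketch prints 2: the cleanup attempt made during the rehash is skipped, which is exactly what prevents a cleanup thread from grabbing a segment lock mid-rehash.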


Cheers

Andreas


Index: src/java/org/apache/fop/fo/properties/PropertyCache.java
===================================================================
--- src/java/org/apache/fop/fo/properties/PropertyCache.java (revision 594306)
+++ src/java/org/apache/fop/fo/properties/PropertyCache.java (working copy)
@@ -40,6 +40,8 @@
     private CacheSegment[] segments = new CacheSegment[SEGMENT_MASK + 1];
     /** the table of hash-buckets */
     private CacheEntry[] table = new CacheEntry[8];
+    /** flag indicating whether a rehash() has been issued */
+    private boolean rehashInProgress = false;

     /* same hash function as used by java.util.HashMap */
     private static int hash(Object x) {
@@ -173,7 +175,8 @@
                 }
             }

-            if (segment.count > (2 * table.length)) {
+            if (segment.count > (2 * table.length)
+                    && !rehashInProgress) {
                 /* launch cleanup in a separate thread,
                  * so it acquires its own lock, and put()
                  * can return immediately */
@@ -239,6 +242,11 @@

         CacheSegment seg = segments[index];
         synchronized (seg) {
+            /* set the flag, so no cleanup-threads can interfere with the rehash
+             * this is necessary, because a cleanup-thread could still acquire the
+             * lock on another segment, and thus cause NPEs further below
+             */
+            rehashInProgress = true;
             if (index > 0) {
                /* need to recursively acquire locks on all segments */
                 rehash(index - 1);
@@ -273,17 +281,16 @@
                     for (int i = table.length; --i >= 0;) {
                         for (CacheEntry c = table[i]; c != null; c = c.next) {
                             ref = c.ref;
-                            if (ref != null) {
-                                if ((o = ref.get()) != null) {
-                                    hash = hash(o);
-                                    idx = hash & newLength;
-                                    newTable[idx] = new CacheEntry (c, newTable[idx]);
-                                    segments[hash & SEGMENT_MASK].count++;
-                                }
+                            if ((o = ref.get()) != null) {
+                                hash = hash(o);
+                                idx = hash & newLength;
+                                newTable[idx] = new CacheEntry(c, newTable[idx]);
+                                segments[hash & SEGMENT_MASK].count++;
                             }
                         }
                     }
                     table = newTable;
+                    rehashInProgress = false;
                 }
             }
         }


