On 20 May 2013, at 12:58, Dan Berindei <dan.berin...@gmail.com> wrote:

> 
> 
> 
> On Mon, May 20, 2013 at 1:57 PM, Manik Surtani <msurt...@redhat.com> wrote:
>> 
>> On 16 May 2013, at 15:04, Dan Berindei <dan.berin...@gmail.com> wrote:
>> 
>>> Hi guys
>>> 
>>> I'm working on an intermittent failure in NodeMoveAPIPessimisticTest and I
>>> think I've come across some underspecified behaviour in
>>> AtomicHashMap.
>>> 
>>> Say we have two transactions, tx1 and tx2, and they both work with the same 
>>> atomic map in a pessimistic cache:
>>> 
>>> 1. tx1: am1 = AtomicMapLookup.get(cache, key)
>>> 2. tx2: am2 = AtomicMapLookup.get(cache, key)
>>> 3. tx1: am1.put(subkey1, value1) // locks the map
>>> 4. tx2: am2.get(subkey1) // returns null
>>> 5. tx1: commit // the map is now {subkey1=value1}
>>> 6. tx2: am2.put(subkey2, value2) // locks the map
>>> 7. tx2: commit // the map is now {subkey2=value2}
>>> 
>>> It's not clear to me from the AtomicMap/AtomicHashMap javadoc if this is ok 
>>> or if it's a bug...
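
For reference, here is the interleaving above as a rough sketch against Infinispan's Java
API. It is illustration only: it assumes a transactional cache configured with pessimistic
locking, reuses the key/subkey/value names from the steps above, and interleaves the two
transactions on one thread via suspend/resume.

    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;
    import org.infinispan.Cache;
    import org.infinispan.atomic.AtomicMap;
    import org.infinispan.atomic.AtomicMapLookup;

    public class AtomicMapInterleavingSketch {
        // Sketch only: runs tx1 and tx2 on a single thread, suspending/resuming
        // to reproduce the interleaving described in steps 1-7.
        static void run(Cache<String, Object> cache) throws Exception {
            TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

            tm.begin();
            AtomicMap<String, String> am1 = AtomicMapLookup.getAtomicMap(cache, "key"); // 1
            Transaction tx1 = tm.suspend();

            tm.begin();
            AtomicMap<String, String> am2 = AtomicMapLookup.getAtomicMap(cache, "key"); // 2
            Transaction tx2 = tm.suspend();

            tm.resume(tx1);
            am1.put("subkey1", "value1");   // 3. tx1 locks the map
            tx1 = tm.suspend();

            tm.resume(tx2);
            am2.get("subkey1");             // 4. returns null (tx2 doesn't see tx1's write)
            tx2 = tm.suspend();

            tm.resume(tx1);
            tm.commit();                    // 5. the map is now {subkey1=value1}

            tm.resume(tx2);
            am2.put("subkey2", "value2");   // 6. tx2 locks the map
            tm.commit();                    // 7. the map ends up as {subkey2=value2}
        }
    }
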
>> 
>> If optimistic, step 7 should fail with a write skew check.  If pessimistic,
>> step 2 would *usually* block assuming that another thread is updating the
>> map, but since neither tx1 nor tx2 has started updating the map yet, neither
>> has a write lock on the map.  So that succeeds.  I'm not sure if this is any
>> different from not using an atomic map:
>> 
>> 1.  tx1: v = cache.get(k); // reads into tx context
>> 2.  tx2: v = cache.get(k);
>> 3.  tx1: cache.put(k, v + 1);
>> 4.  tx1: commit
>> 5.  tx2: cache.put(k, v + 1);
>> 6.  tx2: commit
>> 
>> Here as well, if using optimistic locking, step 6 will fail with a WSC, but if
>> pessimistic this will work (since tx2 only requested a write lock after tx1
>> committed/released its write lock).
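
And the plain-cache version of the same interleaving, in the same sketch style and with the
same setup (cache/tm) as the sketch above; the key "k" is hypothetical, and the write-skew
failure on step 6 additionally assumes optimistic locking with write skew check and
versioning enabled.

    // Assumes "k" was previously populated with an Integer value.
    tm.begin();
    Integer v1 = (Integer) cache.get("k");   // 1. reads into tx1's context
    Transaction tx1 = tm.suspend();

    tm.begin();
    Integer v2 = (Integer) cache.get("k");   // 2. reads into tx2's context
    Transaction tx2 = tm.suspend();

    tm.resume(tx1);
    cache.put("k", v1 + 1);                  // 3. tx1 writes
    tm.commit();                             // 4. tx1 commits, releases its write lock

    tm.resume(tx2);
    cache.put("k", v2 + 1);                  // 5. write lock acquired only now (pessimistic)
    tm.commit();                             // 6. succeeds with pessimistic locking;
                                             //    fails the write skew check with optimistic
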
> 
> The difference is that in your scenario, you see in the code that tx2 writes 
> to key k, so it's not surprising to find that tx2 overwrote the value written 
> by tx1. But it would be surprising if tx2 also overwrote an unrelated key k2.
> 
> With an atomic map, you only see in the code "map.put(subkey2, value)". Tx2 
> doesn't touch subkey1, so it's not that obvious that it should remove it. It 
> is clear to me why it behaves the way it does now, after reading the 
> implementation, but I don't think it's what most users would expect. (The 
> proof, I guess, is in the current implementation of TreeCache.move()).
+1 and nice explanation :-)
> 
> With a FineGrainedAtomicMap in optimistic mode, it's not obvious why tx1
> writing to subkey1 should cause tx2's write to subkey2 to fail, either (see
> https://issues.jboss.org/browse/ISPN-3123).
+1
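
For completeness, the fine-grained variant of tx2's write would look roughly like this
(same hypothetical keys and cache as above; FineGrainedAtomicMap and AtomicMapLookup are
from org.infinispan.atomic):

    FineGrainedAtomicMap<String, String> fgam =
        AtomicMapLookup.getFineGrainedAtomicMap(cache, "key");
    fgam.put("subkey2", "value2");   // tx2 touches only subkey2, yet in optimistic mode
                                     // it can still conflict with another tx's write to
                                     // subkey1 (the ISPN-3123 issue referenced above)
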
>  
>>> Note that today the map is overwritten by tx2 even without step 4 ("tx2:
>>> am2.get(subkey1)"). I'm pretty sure that's a bug, and I fixed it locally by
>>> using the FORCE_WRITE_LOCK flag in AtomicHashMapProxy.getDeltaMapForWrite.
>>> 
>>> However, when the Tree API moves a node it first checks for the existence 
>>> of the destination node, which means NodeMoveAPIPessimisticTest is still 
>>> failing. I'm not sure if I should fix that by forcing a write lock for all 
>>> AtomicHashMap reads, for all TreeCache reads, or only in TreeCache.move().
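
For anyone following along, the effect of taking the write lock on a read can also be seen
from user code with org.infinispan.context.Flag.FORCE_WRITE_LOCK; this is just an
illustration of the flag on the same hypothetical cache as above, not the actual change to
AtomicHashMapProxy.getDeltaMapForWrite:

    // In a pessimistic transaction, a read done with FORCE_WRITE_LOCK acquires the
    // write lock on the key, so a concurrent writer can no longer sneak in between
    // this read and a later write in the same transaction.
    AdvancedCache<String, Object> locking =
        cache.getAdvancedCache().withFlags(Flag.FORCE_WRITE_LOCK);
    Object current = locking.get("key");   // locks "key" for the rest of the tx
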
>> 
>> I think only in TreeCache.move()
> 
> I tend to disagree. I think it's way too easy to introduce a read of a node's 
> structure in a transaction and start losing data without knowing it.
> 
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
