Hi,

The code path of a cache operation, e.g. cache.put(key, value), involves
locking the entry (or entries) at the Java level:

https://github.com/apache/ignite/blob/788adc0d00e0455e06cc9acabfe9ad425fdcd65b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicCache.java#L2845

Also, the lock is released only after the cache entry (or entries) has been
serialized and finally written into the off-heap data page.
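To make sure I'm reading that flow correctly, here is a minimal sketch of my
understanding of the sequence. It is not Ignite code; PutPathSketch,
entryLocks, serialize() and writeToDataPage() are made-up placeholders.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Simplified, hypothetical sketch of the put path as I understand it (not Ignite code).
class PutPathSketch {
    private final ConcurrentHashMap<Object, ReentrantLock> entryLocks = new ConcurrentHashMap<>();
    private final ByteBuffer dataPage = ByteBuffer.allocateDirect(4096); // stand-in for one off-heap data page

    void put(Object key, Object value) {
        ReentrantLock lock = entryLocks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();                          // java-level lock on the entry
        try {
            byte[] row = serialize(key, value);
            writeToDataPage(row);             // entry is written into the off-heap page...
        } finally {
            lock.unlock();                    // ...and only then is the lock released
        }
    }

    private byte[] serialize(Object key, Object value) {
        return (key + "=" + value).getBytes(StandardCharsets.UTF_8);
    }

    // As written here, this method is NOT thread-safe: ByteBuffer.put(byte[]) advances a
    // shared position. Whether (and how) the real code coordinates this per page is
    // exactly what I'm asking about below.
    private void writeToDataPage(byte[] row) {
        dataPage.put(row);
    }
}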

My question: if the locking is at the Java level, how do we avoid conflicts
between multiple threads writing to the same data page offset? Or is the
offset within the data page where the entry will be written already known
before the lock is acquired?
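To illustrate the scenario I have in mind, reusing the hypothetical
PutPathSketch from above: each thread holds the lock for its own entry only,
so the entry-level lock alone would not serialize their writes into one page.

// Hypothetical illustration of the concern; each thread locks a different entry.
class RaceIllustration {
    public static void main(String[] args) throws InterruptedException {
        PutPathSketch cache = new PutPathSketch();
        Thread t1 = new Thread(() -> cache.put("k1", "v1")); // holds the lock for k1
        Thread t2 = new Thread(() -> cache.put("k2", "v2")); // holds the lock for k2, not blocked by t1
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Unless the page (or its free-space offset) is protected by something else,
        // or the target offset is reserved up front, both writes could land on
        // overlapping positions within the same data page.
    }
}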

Thanks,
