I just now saw the BlockLocks code; it is documented to be thread-safe.
Apologies for the trouble...
Btw, a small nit: the method below never returns true. Is that intentional?
boolean releaseIfValid(long address) {
  if (address >= _address && address < _maxAddress) {
    long offset = address - _address;
    int index = (int) (offset / _chunkSize);
    _locks.clear(index);
  }
  return false;
}
Also, I was thinking a background thread could attempt merging sparsely
populated slabs into a single slab and releasing the freed memory (in 128MB
chunks) back to the OS...
Do you think that would be beneficial, or would it make things needlessly
complex?
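
Roughly, something like the sketch below is what I had in mind. The names
(Slab, isSparse, compactInto, releaseToOs, and the 5-minute interval) are
made up for illustration and are not existing classes/methods in the
codebase:

  import java.util.ArrayList;
  import java.util.List;
  import java.util.concurrent.TimeUnit;

  // Hypothetical background compactor -- illustrative only.
  class SlabCompactor implements Runnable {

    private final List<Slab> _slabs;

    SlabCompactor(List<Slab> slabs) {
      _slabs = slabs;
    }

    @Override
    public void run() {
      while (!Thread.currentThread().isInterrupted()) {
        // Collect slabs whose live-chunk count is below some threshold.
        List<Slab> sparse = new ArrayList<Slab>();
        for (Slab slab : _slabs) {
          if (slab.isSparse()) {
            sparse.add(slab);
          }
        }
        if (sparse.size() > 1) {
          // Move live chunks into one target slab, then hand the
          // emptied 128MB slabs back to the OS.
          Slab target = sparse.get(0);
          for (Slab source : sparse.subList(1, sparse.size())) {
            source.compactInto(target);
            source.releaseToOs();
          }
        }
        try {
          TimeUnit.MINUTES.sleep(5);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }
    }
  }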
On Wed, Jul 20, 2016 at 11:38 PM, Aaron McCurry <[email protected]> wrote:
> I don't think there is a race condition because the allocation occurs
> atomically in the BlockLocks class. Do you see a problem? Let me know.
>
> Aaron
>
> On Wed, Jul 20, 2016 at 9:19 AM, Ravikumar Govindarajan <
> [email protected]> wrote:
>
> > I came across the following in SlabAllocationCacheValueBufferPool.java. Is
> > the below method thread-safe?
> >
> > @Override
> > public CacheValue getCacheValue(int cacheBlockSize) {
> >   validCacheBlockSize(cacheBlockSize);
> >   int numberOfChunks = getNumberOfChunks(cacheBlockSize);
> >   ...
> > }
> >
> > It does allocation in a tight loop using BlockLocks, Slab & Chunks. Is
> > there a race condition where two threads can pick the same slab & chunk?
> >
>