[ 
https://issues.apache.org/jira/browse/IMPALA-7534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16740931#comment-16740931
 ] 

Paul Rogers commented on IMPALA-7534:
-------------------------------------

There are many issues here. Let's take them one by one. The first is to 
understand the purpose of the LoadingCache. The blog talks about one use 
case: read from the cache; if the value is not present, write it. Our code does 
exactly that for partitions:

{code:java}
  private Map<PartitionRef, PartitionMetadata> loadPartitionsFromCache(
      TableMetaRefImpl table, ListMap<TNetworkAddress> hostIndex,
      List<PartitionRef> partitionRefs) {

    ...
      PartitionMetadataImpl val =
          (PartitionMetadataImpl) cache_.getIfPresent(cacheKey);
      ...
      ret.put(ref, val.cloneRelativeToHostIndex(cacheHostIndex_, hostIndex));
{code}

As the blog sort-of mentions, such an operation is inherently unsafe in a 
multi-threaded environment: there is nothing to synchronize readers and 
writers. The LoadingCache is designed to solve this use case with its atomic 
"get or load" operation.
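The atomic get-or-load semantics can be sketched with the JDK's ConcurrentHashMap.computeIfAbsent as a stand-in for Guava's LoadingCache.get (this is an illustration of the concept, not the actual Impala code; the key and value names are hypothetical):

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicLoadSketch {
    static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger loads = new AtomicInteger();

    static String getOrLoad(String key) {
        // computeIfAbsent performs the whole check-and-load atomically: the
        // loader runs at most once per absent key, even under concurrent
        // access. This is analogous to LoadingCache.get(key).
        return cache.computeIfAbsent(key, k -> {
            loads.incrementAndGet();
            return "metadata-for-" + k;  // simulated fetch from catalogd
        });
    }

    public static void main(String[] args) throws Exception {
        Thread t1 = new Thread(() -> getOrLoad("partition-p1"));
        Thread t2 = new Thread(() -> getOrLoad("partition-p1"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Exactly one load occurred; the second caller waited and reused it.
        System.out.println("loads=" + loads.get());
    }
}
{code}

With getIfPresent followed by put, by contrast, both threads could run the loader and race on the write.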

To verify this, I created two unit tests. The first loaded a cache using the 
pattern shown above: unsynchronized getIfPresent and put operations. As 
expected, this led to a complete mish-mash: the same value was loaded multiple 
times, and some values were immediately overwritten by others.
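A deterministic sketch (not the actual unit test) of how the unsynchronized check-then-put pattern can go wrong. Here the interleaving is forced with latches so that a stale value is written back after an invalidation, which is exactly the race described in this JIRA; the key "obj" and values "v1"/"v2" are made up for illustration:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class MissedInvalidationSketch {
    public static void main(String[] args) throws Exception {
        ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
        CountDownLatch fetched = new CountDownLatch(1);
        CountDownLatch invalidated = new CountDownLatch(1);

        Thread reader = new Thread(() -> {
            // Thread 1: cache miss, fetch v1 from catalogd, then stall.
            String fetchedValue = "v1";
            fetched.countDown();
            try { invalidated.await(); } catch (InterruptedException e) { return; }
            // ...and put the now-stale value back after the invalidation.
            cache.put("obj", fetchedValue);
        });

        Thread invalidator = new Thread(() -> {
            try { fetched.await(); } catch (InterruptedException e) { return; }
            // Thread 2: object changed to v2; invalidate, but nothing is
            // cached yet, so the remove is a no-op.
            cache.remove("obj");
            invalidated.countDown();
        });

        reader.start(); invalidator.start();
        reader.join(); invalidator.join();
        // The cache now holds stale v1: the invalidation was "missed".
        System.out.println(cache.get("obj"));
    }
}
{code}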

Then, the second test case used the Guava "get" (atomic get or load) operation. 
This one was rock solid with no missed or extra reads.

This shows that the loading cache does work when it is used for the case for 
which it was designed.

This then leads to a more complex question: do we care? Does the metadata code 
work even though we use an unsynchronized pattern? That is, we are using the 
loading cache merely as a concurrent cache, setting aside its "loading" 
feature.

Suppose we load the same partition in two threads simultaneously. The loading 
cache would ensure that one thread does the load while the other waits for the 
result. Our code, by contrast, lets the cache gets and puts interleave with no 
synchronization: both threads might look up the same key, find it missing, and 
write new values.

Suppose we did this with two versions of the partition data, say 7 and 8, and 
suppose we used a simple key for each partition: just its name, (partition). 
Clearly, we could end up with the cache holding some version 7 data, with other 
bits holding version 8. That would, in general, not be helpful.

But, this problem does *not* actually occur because of the clever encoding 
described at the top of the file. The key is actually <partition, version>. So, 
if two threads load version 7 at the same time, and overwrite each other, no 
harm is done. If we load version 7 and 8 simultaneously, we harmlessly end up 
with both versions in the cache. So, the potential concurrency issue is not a 
real issue in practice. It is messy, yes, but not a bug.
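The effect of the <partition, version> key can be sketched as follows. The string encoding "name@version" here is purely illustrative (the real encoding is the one described at the top of the file):

{code:java}
import java.util.concurrent.ConcurrentHashMap;

public class VersionedKeySketch {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

        // Two threads racing to load version 7 write the same entry:
        // the "lost" write is replaced by identical data, so no harm done.
        cache.put("p1@7", "version 7 data");  // thread A
        cache.put("p1@7", "version 7 data");  // thread B overwrites harmlessly

        // A concurrent load of version 8 uses a different key, so the two
        // versions coexist rather than corrupting each other.
        cache.put("p1@8", "version 8 data");

        System.out.println(cache.get("p1@7"));
        System.out.println(cache.get("p1@8"));
    }
}
{code}

A stale version simply ages out of the cache; it is never returned for a request keyed to the newer version.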

So, it is not clear that there is any reason to change anything, other than to 
make the code a bit easier to reason about.

> Handle invalidation races in CatalogdMetaProvider cache
> -------------------------------------------------------
>
>                 Key: IMPALA-7534
>                 URL: https://issues.apache.org/jira/browse/IMPALA-7534
>             Project: IMPALA
>          Issue Type: Sub-task
>            Reporter: Todd Lipcon
>            Assignee: Paul Rogers
>            Priority: Major
>
> There is a well-known race in Guava's LoadingCache that we are using for 
> CatalogdMetaProvider which we are not currently handling:
> - thread 1 gets a cache miss and makes a request to fetch some data from the 
> catalogd. It fetches the catalog object with version 1 and then gets context 
> switched out or otherwise slow
> - thread 2 receives an invalidation for the same object, because it has 
> changed to v2. It calls 'invalidate' on the cache, but nothing is yet cached.
> - thread 1 puts back v1 of the object into the cache
> In essence we've "missed" an invalidation. This is also described in this 
> nice post: https://softwaremill.com/race-condition-cache-guava-caffeine/
> The race is quite unlikely but could cause some unexpected results that are 
> hard to reason about, so we should look into a fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
