anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282167175
 
 

 ##########
 File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##########
 @@ -71,6 +96,27 @@ public boolean isEmpty() throws IOException {
 
   @Override
   public VALUE get(KEY key) throws IOException {
+    // The metadata lock guarantees that the cache is not updated for the
+    // same key while this get is in progress.
+    if (cache != null) {
+      CacheValue<VALUE> cacheValue = cache.get(new CacheKey<>(key));
+      if (cacheValue == null) {
+        return getFromTable(key);
+      } else {
+        // If the last cached operation for this key is a delete, the key
+        // will eventually be removed from the DB, so we should return null.
+        if (cacheValue.getLastOperation() != CacheValue.OperationType.DELETED) {
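
 As context for the comment below, here is a minimal, self-contained sketch of the read-path tombstone pattern the added lines implement. The class and method names are illustrative stand-ins, not the real org.apache.hadoop.utils.db types.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a read-through table cache that records delete
// tombstones. All names here are illustrative, not the real Ozone classes.
public final class TombstoneCacheSketch<K, V> {

  enum OperationType { CREATED, DELETED }

  // A cache entry carries the value plus the last operation applied to it.
  static final class CachedEntry<T> {
    final T value;
    final OperationType lastOperation;

    CachedEntry(T value, OperationType lastOperation) {
      this.value = value;
      this.lastOperation = lastOperation;
    }
  }

  private final Map<K, CachedEntry<V>> cache = new ConcurrentHashMap<>();

  // Stand-in for the RocksDB-backed table lookup.
  private V getFromTable(K key) {
    return null;
  }

  public V get(K key) {
    CachedEntry<V> entry = cache.get(key);
    if (entry == null) {
      // Cache miss: fall through to the persistent table.
      return getFromTable(key);
    }
    if (entry.lastOperation == OperationType.DELETED) {
      // Tombstone: the key is deleted in the cache but may still be present
      // in the DB until the buffered batch is flushed, so hide it here.
      return null;
    }
    return entry.value;
  }

  public void put(K key, V value) {
    cache.put(key, new CachedEntry<>(value, OperationType.CREATED));
  }

  public void delete(K key) {
    // Record a tombstone instead of removing the entry, so reads observe
    // the delete before it reaches the DB.
    cache.put(key, new CachedEntry<>(null, OperationType.DELETED));
  }
}
```

 The point of the DELETED state is that a read must not resurrect a key whose delete is still sitting in the un-flushed write buffer.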
 
 Review comment:
   Why do we even cache the deleted operations? Delete is not in the 
performance-critical path at all. If you can instruct the system to make a 
full commit or flush the buffer when there is a delete op, you don't need to 
keep this extra state in the cache. Yes, repeated deletes will invoke the 
state machine callback. When do we actually flush / clear this entry?
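
   For comparison, a rough sketch of the flush-on-delete alternative suggested here; flushBuffer() and deleteFromTable() are hypothetical stand-ins for the buffer flush and RocksDB delete, not existing Ozone APIs.

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the suggested alternative: force the buffered
// writes to flush when a delete arrives, so the cache never needs to keep
// a DELETED state. None of these names come from the Ozone code base.
public final class FlushOnDeleteSketch<K, V> {

  private final Map<K, V> cache = new ConcurrentHashMap<>();

  // Stand-in for committing the pending buffered batch to RocksDB.
  private void flushBuffer() throws IOException {
    // real code would block until the batch is durable
  }

  // Stand-in for deleting the key from the persistent table.
  private void deleteFromTable(K key) throws IOException {
    // real code would issue the RocksDB delete here
  }

  public void delete(K key) throws IOException {
    // Make pending writes durable first, so a later cache miss cannot read
    // a stale value for this key from the DB.
    flushBuffer();
    deleteFromTable(key);
    // With the DB consistent, the cache entry can simply be dropped; the
    // read path needs no tombstone check.
    cache.remove(key);
  }
}
```

   The trade-off is a synchronous flush on every delete, which is acceptable only if deletes really are off the performance-critical path, as assumed above.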

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
