[ 
https://issues.apache.org/jira/browse/HDDS-1984?focusedWorklogId=324816&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324816
 ]

ASF GitHub Bot logged work on HDDS-1984:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 08/Oct/19 02:30
            Start Date: 08/Oct/19 02:30
    Worklog Time Spent: 10m 
      Work Description: anuengineer commented on pull request #1555: HDDS-1984. 
Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332313824
 
 

 ##########
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##########
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String bucket)
     }
     int currentCount = 0;
 
-    try (TableIterator<String, ? extends KeyValue<String, OmBucketInfo>>
-        bucketIter = bucketTable.iterator()) {
-      KeyValue<String, OmBucketInfo> kv = bucketIter.seek(startKey);
-      while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-        kv = bucketIter.next();
-        // Skip the Start Bucket if needed.
-        if (kv != null && skipStartKey &&
-            kv.getKey().equals(startKey)) {
+
+    // For Bucket it is full cache, so we can just iterate in-memory table
+    // cache.
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   > But when we want to list all buckets in /vol2, we will iterate the entries from the start, reach /vol2 in the cache, and once the maximum count is reached we return from there.
   
   The architecture of the SkipList prevents us from iterating all the keys. That is good enough; I was worried that we would walk all the entries. I had missed that we were using a skipList-based map.
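   
   A minimal sketch of that point, using a plain ConcurrentSkipListMap with String values instead of the real CacheKey/CacheValue types (names here are illustrative, not the code under review): because the cache is a sorted skip-list map, a listing can seek to the start key and stop at the volume boundary or the requested count, rather than touching every entry.
   
       import java.util.ArrayList;
       import java.util.List;
       import java.util.Map;
       import java.util.concurrent.ConcurrentSkipListMap;
   
       class BucketCacheSketch {
         // Keys are assumed to look like "/volume/bucket" and to sort
         // lexicographically, mirroring the bucket table key format.
         private final ConcurrentSkipListMap<String, String> cache =
             new ConcurrentSkipListMap<>();
   
         List<String> listBuckets(String volumePrefix, String startKey,
             boolean skipStartKey, int maxNumOfBuckets) {
           List<String> result = new ArrayList<>();
           // tailMap seeks to startKey in O(log n); entries sorting before
           // it are never visited.
           for (Map.Entry<String, String> entry
               : cache.tailMap(startKey, !skipStartKey).entrySet()) {
             if (!entry.getKey().startsWith(volumePrefix)
                 || result.size() >= maxNumOfBuckets) {
               break;  // left the requested volume, or hit the page size
             }
             result.add(entry.getValue());
           }
           return result;
         }
       }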
   
   
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 324816)
    Time Spent: 2h 20m  (was: 2h 10m)

> Fix listBucket API
> ------------------
>
>                 Key: HDDS-1984
>                 URL: https://issues.apache.org/jira/browse/HDDS-1984
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listBucket API in the HA code path.
> In HA we have an in-memory cache: the result is put into the in-memory cache and the response is returned; later the double-buffer thread picks it up and flushes it to disk. So now, when listBuckets is called, it should use both the in-memory cache and the RocksDB bucket table to list the buckets in a volume.
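
As a rough illustration of that read path (hypothetical names, not the actual OzoneManager classes), a lookup has to prefer the in-memory cache, because it may hold buckets that the double-buffer thread has not yet flushed to the RocksDB table:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;
    import java.util.concurrent.ConcurrentSkipListMap;

    class BucketReadSketch {
      // Unflushed results waiting on the double-buffer flush.
      private final ConcurrentSkipListMap<String, String> cache =
          new ConcurrentSkipListMap<>();
      // Stand-in for the RocksDB-backed bucket table.
      private final Map<String, String> bucketTable = new HashMap<>();

      Optional<String> getBucket(String key) {
        // The cache wins: it holds buckets committed but not yet flushed
        // to disk by the double-buffer thread.
        String cached = cache.get(key);
        if (cached != null) {
          return Optional.of(cached);
        }
        return Optional.ofNullable(bucketTable.get(key));
      }
    }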



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
