[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326642&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326642
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 23:49
Start Date: 10/Oct/19 23:49
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-540843683
 
 
   Thank you all for the review.
   I have committed this to trunk.
 



Issue Time Tracking
---

Worklog Id: (was: 326642)
Time Spent: 7h 10m  (was: 7h)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and
> return the response, and later the double buffer thread picks it up and
> flushes it to disk. So listKeys should now use both the in-memory cache and
> the RocksDB key table to list the keys in a bucket.
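
The description boils down to merging two sources at list time: keys that so far exist only in the OM table cache (created or deleted, but not yet flushed by the double buffer) and keys already persisted in the RocksDB key table, with cache tombstones hiding rows that are deleted but not yet flushed. A minimal standalone sketch of that merge, using plain `TreeMap`/`HashMap` stand-ins instead of the real Ozone `Table`, `CacheKey`, and `CacheValue` types (all names here are illustrative, not the actual OM API):

```java
import java.util.*;

public class ListKeysSketch {

  // dbTable: persisted keys; cache: unflushed writes, null value = tombstone.
  static List<String> listKeys(SortedMap<String, String> dbTable,
      Map<String, String> cache, String prefix, int maxKeys) {
    // Start from the persisted rows at or after the prefix.
    TreeMap<String, String> merged = new TreeMap<>(dbTable.tailMap(prefix));
    for (Map.Entry<String, String> e : cache.entrySet()) {
      if (e.getValue() == null) {
        merged.remove(e.getKey());            // unflushed delete wins
      } else if (e.getKey().startsWith(prefix)) {
        merged.put(e.getKey(), e.getValue()); // unflushed create/update wins
      }
    }
    // TreeMap iteration is sorted, so the first maxKeys matches are correct.
    List<String> result = new ArrayList<>();
    for (String key : merged.keySet()) {
      if (!key.startsWith(prefix) || result.size() == maxKeys) {
        break;
      }
      result.add(key);
    }
    return result;
  }

  public static void main(String[] args) {
    TreeMap<String, String> db = new TreeMap<>();
    db.put("/vol/buck/key1", "v1");
    db.put("/vol/buck/key2", "v2");
    Map<String, String> cache = new HashMap<>();
    cache.put("/vol/buck/key3", "v3"); // created, not yet flushed
    cache.put("/vol/buck/key1", null); // deleted, not yet flushed
    // Prints [/vol/buck/key2, /vol/buck/key3]: the unflushed create shows up
    // and the unflushed delete hides the persisted row.
    System.out.println(listKeys(db, cache, "/vol/buck/", 100));
  }
}
```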






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326641&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326641
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 23:49
Start Date: 10/Oct/19 23:49
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 326641)
Time Spent: 7h  (was: 6h 50m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and
> return the response, and later the double buffer thread picks it up and
> flushes it to disk. So listKeys should now use both the in-memory cache and
> the RocksDB key table to list the keys in a bucket.






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326570&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326570
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:40
Start Date: 10/Oct/19 20:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538597832
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 32 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 50 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 843 | branch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 943 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | -1 | findbugs | 32 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 31 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 31 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 33 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 2387 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux eb9f4e7930dc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f209722 |
   | Default Java | 1.8.0_222 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-compile-hadoop-hdds.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-compile-hadoop-ozone.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-javadoc-hadoop-hdds.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-javadoc-hadoop-ozone.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-findbugs-hadoop-hdds.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-findbugs-hadoop-ozone.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/patch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/patch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibra

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326572&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326572
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:40
Start Date: 10/Oct/19 20:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-540719122
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 1384 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 864 | branch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 24 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 21 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 965 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 37 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 713 | patch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 3711 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 12ab8a1f7489 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4850b3a |
   | Default Java | 1.8.0_222 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-compile-hadoop-hdds.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-compile-hadoop-ozone.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-javadoc-hadoop-hdds.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-javadoc-hadoop-ozone.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-findbugs-hadoop-hdds.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-findbugs-hadoop-ozone.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/patch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/patch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/patch-compile-hadoop-hdds.txt |

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326567&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326567
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:40
Start Date: 10/Oct/19 20:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538168696
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 38 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 12 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 845 | branch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 944 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 40 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-hdds in the patch failed. |
   | -1 | compile | 17 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-hdds in the patch failed. |
   | -1 | javac | 17 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 31 | hadoop-ozone: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 709 | patch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2342 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux dbf2530a1ece 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76605f1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-compile-hadoop-hdds.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-compile-hadoop-ozone.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-javadoc-hadoop-hdds.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-javadoc-hadoop-ozone.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-findbugs-hadoop-hdds.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-findbugs-hadoop-ozone.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/patch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/patch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibr

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326569&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326569
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:40
Start Date: 10/Oct/19 20:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538597298
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 29 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 48 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 930 | branch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1024 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 36 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-hdds in the patch failed. |
   | -1 | compile | 17 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-hdds in the patch failed. |
   | -1 | javac | 17 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 28 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 787 | patch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 25 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 26 | hadoop-hdds in the patch failed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 2495 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 80553b9dfed3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a3cf54c |
   | Default Java | 1.8.0_222 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-compile-hadoop-hdds.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-compile-hadoop-ozone.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-javadoc-hadoop-hdds.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-javadoc-hadoop-ozone.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-findbugs-hadoop-hdds.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-findbugs-hadoop-ozone.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/patch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/patch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibr

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326571&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326571
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:40
Start Date: 10/Oct/19 20:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-540345632
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 40 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 858 | branch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 959 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 36 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 30 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 720 | patch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 22 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2387 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 340394fd7c3d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eeb58a0 |
   | Default Java | 1.8.0_222 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-compile-hadoop-hdds.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-compile-hadoop-ozone.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-javadoc-hadoop-hdds.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-javadoc-hadoop-ozone.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-findbugs-hadoop-hdds.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-findbugs-hadoop-ozone.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/patch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/patch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibr

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326568&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326568
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:40
Start Date: 10/Oct/19 20:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538169169
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 54 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 823 | branch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 22 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 920 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 32 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 695 | patch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 22 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2305 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 09b9504ba0c3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76605f1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-compile-hadoop-hdds.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-compile-hadoop-ozone.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-javadoc-hadoop-hdds.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-javadoc-hadoop-ozone.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-findbugs-hadoop-hdds.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-findbugs-hadoop-ozone.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/patch-mvninstall-hadoop-hdds.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/patch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibr

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326556&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326556
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:11
Start Date: 10/Oct/19 20:11
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333713631
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String bucket)
       seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
     }
     int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    // TODO: We can avoid this iteration if the table cache stores entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Making sure that the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   Yup, thanks for the explanation.
 



Issue Time Tracking
---

Worklog Id: (was: 326556)
Time Spent: 5h 50m  (was: 5h 40m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and
> return the response, and later the double buffer thread picks it up and
> flushes it to disk. So listKeys should now use both the in-memory cache and
> the RocksDB key table to list the keys in a bucket.






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326555&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326555
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:10
Start Date: 10/Oct/19 20:10
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333711502
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String bucket)
       seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
     }
     int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    // TODO: We can avoid this iteration if the table cache stores entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Making sure that the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   ```
   if (StringUtil.isNotBlank(startKey)) {
     // Seek to the specified key.
     seekKey = getOzoneKey(volumeName, bucketName, startKey);
     skipStartKey = true;
   } else {
     // This allows us to seek directly to the first key with the right prefix.
     seekKey = getOzoneKey(volumeName, bucketName, keyPrefix);
   }
   ```
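
   In other words: with a start key the iteration seeks to that exact key and the
   skipStartKey flag later drops it from the results, while with only a key prefix
   the iteration seeks straight to the first key carrying that prefix, so nothing
   before the prefix range is scanned.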
 



Issue Time Tracking
---

Worklog Id: (was: 326555)
Time Spent: 5h 40m  (was: 5.5h)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and
> return the response, and later the double buffer thread picks it up and
> flushes it to disk. So listKeys should now use both the in-memory cache and
> the RocksDB key table to list the keys in a bucket.






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326550&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326550
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:07
Start Date: 10/Oct/19 20:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333711905
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String bucket)
       seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
     }
     int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    // TODO: We can avoid this iteration if the table cache stores entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Making sure that the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   In the final merge:
   for (Map.Entry<String, OmKeyInfo> cacheKey : cacheKeyMap.entrySet()) {
     if (cacheKey.getKey().equals(seekKey) && skipStartKey) {
       continue;
     }

     result.add(cacheKey.getValue());
     currentCount++;

     if (currentCount == maxKeys) {
       break;
     }
   }
   we have taken care of this, right?
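
   Since cacheKeyMap is a TreeMap, this final loop walks the merged keys in
   sorted order, so stopping at maxKeys returns the first maxKeys matching keys,
   and the equals(seekKey) check drops the start key itself when skipStartKey
   is set.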
 



Issue Time Tracking
---

Worklog Id: (was: 326550)
Time Spent: 5h 20m  (was: 5h 10m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and
> return the response, and later the double buffer thread picks it up and
> flushes it to disk. So listKeys should now use both the in-memory cache and
> the RocksDB key table to list the keys in a bucket.






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326552&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326552
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:07
Start Date: 10/Oct/19 20:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333711905
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String bucket)
       seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
     }
     int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    // TODO: We can avoid this iteration if the table cache stores entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Making sure that the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   In the final merge:
   for (Map.Entry<String, OmKeyInfo> cacheKey : cacheKeyMap.entrySet()) {
     if (cacheKey.getKey().equals(seekKey) && skipStartKey) {
       continue;
     }

     result.add(cacheKey.getValue());
     currentCount++;

     if (currentCount == maxKeys) {
       break;
     }
   }
   we have taken care of this, right?
 



Issue Time Tracking
---

Worklog Id: (was: 326552)
Time Spent: 5.5h  (was: 5h 20m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and
> return the response, and later the double buffer thread picks it up and
> flushes it to disk. So listKeys should now use both the in-memory cache and
> the RocksDB key table to list the keys in a bucket.






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326547&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326547
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:06
Start Date: 10/Oct/19 20:06
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333711360
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String bucket)
       seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
     }
     int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    // TODO: We can avoid this iteration if the table cache stores entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Making sure that the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   ```
   if (StringUtil.isNotBlank(startKey)) {
     // Seek to the specified key.
     seekKey = getOzoneKey(volumeName, bucketName, startKey);
     skipStartKey = true;
   } else {
     // This allows us to seek directly to the first key with the right prefix.
     seekKey = getOzoneKey(volumeName, bucketName, keyPrefix);
   }
   ```
 



Issue Time Tracking
---

Worklog Id: (was: 326547)
Time Spent: 4h 50m  (was: 4h 40m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and
> return the response, and later the double buffer thread picks it up and
> flushes it to disk. So listKeys should now use both the in-memory cache and
> the RocksDB key table to list the keys in a bucket.






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326546&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326546
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:06
Start Date: 10/Oct/19 20:06
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333711360
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String bucket)
       seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
     }
     int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    // TODO: We can avoid this iteration if the table cache stores entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Making sure that the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   ```
   if (StringUtil.isNotBlank(startKey)) {
     // Seek to the specified key.
     seekKey = getOzoneKey(volumeName, bucketName, startKey);
     skipStartKey = true;
   } else {
     // This allows us to seek directly to the first key with the right prefix.
     seekKey = getOzoneKey(volumeName, bucketName, keyPrefix);
   }
   ```
 



Issue Time Tracking
---

Worklog Id: (was: 326546)
Time Spent: 4h 40m  (was: 4.5h)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and
> return the response, and later the double buffer thread picks it up and
> flushes it to disk. So listKeys should now use both the in-memory cache and
> the RocksDB key table to list the keys in a bucket.






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326549&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326549
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:06
Start Date: 10/Oct/19 20:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333711537
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String bucket)
       seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
     }
     int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    // TODO: We can avoid this iteration if the table cache stores entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Making sure that the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   Let's take an example: we want to list all keys in /vol/buck with 
startKey=null, keyPrefix="key2"
   
   /vol/buck/key1
   /vol/buck/key2
   /vol/buck/key3
   
   keyPrefix="key2"
   startKey=null
   
   so now seekKey="/vol/buck/" and seekPrefix="/vol/buck/key2".
   So when we iterate, we add /vol/buck/key2 to the map, because it starts 
with seekPrefix and key.compareTo(seekKey) returns a value >= 0. 
/vol/buck/key3 does not start with seekPrefix, so it is skipped.
   
   Then when we finally return, we return only /vol/buck/key2, which is what 
the caller asked for.
   
   Not sure if I am missing something here.
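   
   This walkthrough can be reproduced with a short, self-contained sketch 
(plain JDK strings stand in for the table cache keys; an illustration only, 
not the OM code):
   
   ```
   import java.util.Arrays;
   import java.util.List;
   import java.util.TreeSet;

   public class ListFilterDemo {
     public static void main(String[] args) {
       List<String> cacheKeys =
           Arrays.asList("/vol/buck/key1", "/vol/buck/key2", "/vol/buck/key3");
       String seekKey = "/vol/buck/";         // startKey == null
       String seekPrefix = "/vol/buck/key2";  // keyPrefix == "key2"

       // Same condition as in the patch: prefix match plus ordering check.
       TreeSet<String> matched = new TreeSet<>();
       for (String key : cacheKeys) {
         if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
           matched.add(key);
         }
       }
       System.out.println(matched); // prints [/vol/buck/key2]
     }
   }
   ```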
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326549)
Time Spent: 5h 10m  (was: 5h)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326548&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326548
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:06
Start Date: 10/Oct/19 20:06
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333711502
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    //TODO: We can avoid this iteration if the table cache stored entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas for TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Make sure the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   ```
   if (StringUtil.isNotBlank(startKey)) {
     // Seek to the specified key.
     seekKey = getOzoneKey(volumeName, bucketName, startKey);
     skipStartKey = true;
   } else {
     // This allows us to seek directly to the first key with the right prefix.
     seekKey = getOzoneKey(volumeName, bucketName, keyPrefix);
   }
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326548)
Time Spent: 5h  (was: 4h 50m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326544&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326544
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:05
Start Date: 10/Oct/19 20:05
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333711226
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    //TODO: We can avoid this iteration if the table cache stored entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas for TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Make sure the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   ```
   if (StringUtil.isNotBlank(startKey)) {
     // Seek to the specified key.
     seekKey = getOzoneKey(volumeName, bucketName, startKey);
     skipStartKey = true;
   } else {
     // This allows us to seek directly to the first key with the right prefix.
     seekKey = getOzoneKey(volumeName, bucketName, keyPrefix);
   }
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326544)
Time Spent: 4h 20m  (was: 4h 10m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326545&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326545
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 20:05
Start Date: 10/Oct/19 20:05
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333711226
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    //TODO: We can avoid this iteration if the table cache stored entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas for TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Make sure the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   ```
   if (StringUtil.isNotBlank(startKey)) {
     // Seek to the specified key.
     seekKey = getOzoneKey(volumeName, bucketName, startKey);
     skipStartKey = true;
   } else {
     // This allows us to seek directly to the first key with the right prefix.
     seekKey = getOzoneKey(volumeName, bucketName, keyPrefix);
   }
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326545)
Time Spent: 4.5h  (was: 4h 20m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326526&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326526
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 19:38
Start Date: 10/Oct/19 19:38
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333700681
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    //TODO: We can avoid this iteration if the table cache stored entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas for TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Make sure the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   if startKey is null, we consider seekKey as /vol/bucket/
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326526)
Time Spent: 4h 10m  (was: 4h)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326525&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326525
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 19:38
Start Date: 10/Oct/19 19:38
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333700681
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    //TODO: We can avoid this iteration if the table cache stored entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas for TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Make sure the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   if keyPrefix is null, we consider keyPrefix=/vol/bucket/
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326525)
Time Spent: 4h  (was: 3h 50m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326519&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326519
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 19:18
Start Date: 10/Oct/19 19:18
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r333692284
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    //TODO: We can avoid this iteration if the table cache stored entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas for TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Make sure the entry in the cache is not for a delete key request.
+
+      if (omKeyInfo != null) {
+        if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
 
 Review comment:
   If there is a key that exactly matches the `keyPrefix` passed to the list 
call and the `startKey` value is null/empty, we have to include that key in 
the result.
   If that particular key happens to be in the cache, we are ignoring it:
   `key.compareTo(seekKey) >= 0` --> will evaluate to false
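   
   This edge case can be checked in isolation with plain strings (a sketch 
only; it mimics the condition above and tries both candidate seekKey values 
for a blank startKey, since the outcome depends entirely on that choice):
   
   ```
   import java.util.Arrays;

   public class ExactPrefixMatchCheck {
     public static void main(String[] args) {
       String key = "/vol/buck/key2";        // key equal to the full prefix
       String seekPrefix = "/vol/buck/key2";

       // Candidate seekKey values: bucket root vs. full prefix.
       for (String seekKey : Arrays.asList("/vol/buck/", "/vol/buck/key2")) {
         boolean included =
             key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0;
         System.out.println("seekKey=" + seekKey + " included=" + included);
       }
       // compareTo returns 0 against the exact match and a positive value
       // against the bucket root, so the check passes in both cases here.
     }
   }
   ```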
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326519)
Time Spent: 3h 50m  (was: 3h 40m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326499&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326499
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 18:42
Start Date: 10/Oct/19 18:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-540719122
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1384 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 864 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 24 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 21 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 965 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 37 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 713 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 3711 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 12ab8a1f7489 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4850b3a |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/6/artifact/out/patch-compile-hadoop-hdds.txt
 |

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326444&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326444
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 16:46
Start Date: 10/Oct/19 16:46
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-540673876
 
 
   Thank You @arp7 and @anuengineer for the review.
   I will try to run the benchmark and update it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326444)
Time Spent: 3.5h  (was: 3h 20m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326427&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326427
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 16:28
Start Date: 10/Oct/19 16:28
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-540666797
 
 
   Let us get this in; I expect that for some of these things we will learn 
the right choices only when we really benchmark and test. The sad thing is 
that some of these changes can make our system unstable. 
   FYI: @elek , I know that you might not be happy with this approach. But it 
is hard to judge the impact of these changes till we have this code in and 
start testing. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326427)
Time Spent: 3h 20m  (was: 3h 10m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326112&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326112
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 04:19
Start Date: 10/Oct/19 04:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-540345632
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 40 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 858 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 959 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 36 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 30 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 720 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 22 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2387 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 340394fd7c3d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eeb58a0 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/5/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibr

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326098&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326098
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 03:39
Start Date: 10/Oct/19 03:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-540330495
 
 
   > +1
   > 
   > I am okay to get this committed with a minor comment below, assuming there 
are no unaddressed comments from @anuengineer.
   > 
   > We should benchmark list operations later in case any further optimization 
is needed.
   
   I will run the benchmarks once the list operations are fixed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326098)
Time Spent: 3h  (was: 2h 50m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326097&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326097
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 03:38
Start Date: 10/Oct/19 03:38
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-540330162
 
 
   Addressed the review comment.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326097)
Time Spent: 2h 50m  (was: 2h 40m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=326082&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326082
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 10/Oct/19 02:49
Start Date: 10/Oct/19 02:49
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r10356
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -645,7 +648,12 @@ public boolean isBucketEmpty(String volume, String bucket)
   @Override
   public List<OmKeyInfo> listKeys(String volumeName, String bucketName,
       String startKey, String keyPrefix, int maxKeys) throws IOException {
+
     List<OmKeyInfo> result = new ArrayList<>();
+    if (maxKeys == 0) {
 
 Review comment:
   Yeah that would be a nice bit of defensive programming. Let's make the check 
`<= 0`.
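   
   A minimal sketch of the guard being discussed (hypothetical standalone 
method; the real listKeys also takes volume, bucket, startKey, and prefix 
arguments):
   
   ```
   import java.util.ArrayList;
   import java.util.List;
   import java.util.NavigableMap;
   import java.util.TreeMap;

   public class MaxKeysGuardDemo {
     // Hypothetical stand-in for listKeys, showing only the defensive check.
     static List<String> listKeys(NavigableMap<String, String> keyTable,
         int maxKeys) {
       List<String> result = new ArrayList<>();
       if (maxKeys <= 0) {
         return result; // nothing to list for zero or negative maxKeys
       }
       for (String key : keyTable.navigableKeySet()) {
         if (result.size() >= maxKeys) {
           break;
         }
         result.add(key);
       }
       return result;
     }

     public static void main(String[] args) {
       NavigableMap<String, String> table = new TreeMap<>();
       table.put("/vol/buck/key1", "info");
       System.out.println(listKeys(table, 0));  // []
       System.out.println(listKeys(table, -5)); // []
       System.out.println(listKeys(table, 10)); // [/vol/buck/key1]
     }
   }
   ```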
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 326082)
Time Spent: 2h 40m  (was: 2.5h)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=325147&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-325147
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 15:52
Start Date: 08/Oct/19 15:52
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332593486
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    //TODO: We can avoid this iteration if the table cache stored entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas for TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
 
 Review comment:
   With the current new code, when a list happens we should consider entries 
from both the buffer and the DB. (We return the response to the end user right 
after adding entries to the cache.) So, if the user does a list as the next 
operation (right after creating a bucket), the bucket might or might not be in 
the DB until the double buffer flushes; until then, the entries exist only in 
the cache. (This is not a problem for non-HA, as there we return the response 
only after the flush.)
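   
   The cache-plus-DB merge described here can be sketched with plain JDK maps 
(an illustration under assumptions: strings stand in for OmKeyInfo values, and 
a null cache value stands for a delete marker):
   
   ```
   import java.util.HashMap;
   import java.util.Map;
   import java.util.Set;
   import java.util.TreeMap;
   import java.util.TreeSet;

   public class CacheDbMergeDemo {
     public static void main(String[] args) {
       // Flushed (DB) view of the key table.
       TreeMap<String, String> db = new TreeMap<>();
       db.put("/vol/buck/key1", "flushed");
       db.put("/vol/buck/key2", "flushed");

       // Unflushed cache: key3 created, key2 deleted (null = delete marker).
       Map<String, String> cache = new HashMap<>();
       cache.put("/vol/buck/key3", "pending");
       cache.put("/vol/buck/key2", null);

       // Merge: cache entries override the DB view; deletes hide DB entries.
       TreeMap<String, String> cacheKeyMap = new TreeMap<>(db);
       Set<String> deletedKeySet = new TreeSet<>();
       for (Map.Entry<String, String> e : cache.entrySet()) {
         if (e.getValue() != null) {
           cacheKeyMap.put(e.getKey(), e.getValue());
         } else {
           deletedKeySet.add(e.getKey());
         }
       }
       cacheKeyMap.keySet().removeAll(deletedKeySet);

       System.out.println(cacheKeyMap.keySet());
       // [/vol/buck/key1, /vol/buck/key3] -- the new key is visible before
       // the double-buffer flush, and the delete hides the flushed key2.
     }
   }
   ```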
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 325147)
Time Spent: 2.5h  (was: 2h 20m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=325142&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-325142
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 15:48
Start Date: 08/Oct/19 15:48
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332591385
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-    try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-        getKeyTable()
-        .iterator()) {
-      KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-      while (currentCount < maxKeys && keyIter.hasNext()) {
-        kv = keyIter.next();
-        // Skip the Start key if needed.
-        if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-          continue;
+
+
+    TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+    Set<String> deletedKeySet = new TreeSet<>();
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        keyTable.cacheIterator();
+
+    //TODO: We can avoid this iteration if the table cache stored entries in
+    // a TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+    // constant-time operation, whereas for TreeMap get is O(log n). So if we
+    // move to a TreeMap, the get operation will be affected, and get is a
+    // frequent operation on the table. So, for now, in list we iterate the
+    // cache map and construct a TreeMap of entries that match keyPrefix and
+    // are greater than or equal to startKey. We can revisit this later if
+    // the list operation becomes slow.
+    while (iterator.hasNext()) {
 
 Review comment:
   The key cache is not a full cache, so if the double buffer flush is keeping 
up in the background, it should hold only around a couple of hundred entries. 
When I started freon with 10 threads, I saw a maximum iteration count of 200, 
so the cache held at most about 200 entries. (Not yet tried on busy, heavily 
loaded clusters or with slow disks, though.)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 325142)
Time Spent: 2h 20m  (was: 2h 10m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listkeys, it should use both in-memory 
> cache and rocksdb key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=325137&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-325137
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 15:46
Start Date: 08/Oct/19 15:46
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332590228
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java
 ##
 @@ -0,0 +1,298 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.request.TestOMRequestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.util.List;
+import java.util.TreeSet;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_DB_DIRS;
+
+/**
+ * Tests OzoneManager MetadataManager.
+ */
+public class TestOmMetadataManager {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneConfiguration ozoneConfiguration;
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OZONE_OM_DB_DIRS,
+folder.getRoot().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+  }
+  @Test
+  public void testListKeys() throws Exception {
+
+String volumeNameA = "volumeA";
+String volumeNameB = "volumeB";
+String ozoneBucket = "ozoneBucket";
+String hadoopBucket = "hadoopBucket";
+
+
+// Create volumes and buckets.
+TestOMRequestUtils.addVolumeToDB(volumeNameA, omMetadataManager);
+TestOMRequestUtils.addVolumeToDB(volumeNameB, omMetadataManager);
+addBucketsToCache(volumeNameA, ozoneBucket);
+addBucketsToCache(volumeNameB, hadoopBucket);
+
+
+String prefixKeyA = "key-a";
+String prefixKeyB = "key-b";
+TreeSet<String> keysASet = new TreeSet<>();
+TreeSet<String> keysBSet = new TreeSet<>();
+for (int i=1; i<= 100; i++) {
+  if (i % 2 == 0) {
+keysASet.add(
+prefixKeyA + i);
+addKeysToOM(volumeNameA, ozoneBucket, prefixKeyA + i, i);
+  } else {
+keysBSet.add(
+prefixKeyB + i);
+addKeysToOM(volumeNameA, hadoopBucket, prefixKeyB + i, i);
+  }
+}
+
+
+TreeSet<String> keysAVolumeBSet = new TreeSet<>();
+TreeSet<String> keysBVolumeBSet = new TreeSet<>();
+for (int i=1; i<= 100; i++) {
+  if (i % 2 == 0) {
+keysAVolumeBSet.add(
+prefixKeyA + i);
+addKeysToOM(volumeNameB, ozoneBucket, prefixKeyA + i, i);
+  } else {
+keysBVolumeBSet.add(
+prefixKeyB + i);
+addKeysToOM(volumeNameB, hadoopBucket, prefixKeyB + i, i);
+  }
+}
+
+
+// List all keys which have prefix "key-a"
+List<OmKeyInfo> omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+null, prefixKeyA, 100);
+
+Assert.assertEquals(omKeyInfoList.size(),  50);
+
+for (OmKeyInfo omKeyInfo : omKeyInfoList) {
+  Assert.assertTrue(omKeyInfo.getKeyName().startsWith(
+  prefixKeyA));
+}
+
+
+String startKey = prefixKeyA + 10;
+omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+startKey, prefixKeyA, 100);
+
+Assert.assertEquals(keysASet.tailSet(
+startKey).size() - 1, omKeyInfoList.size());
+
+startKey = prefixKeyA + 38;

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324821&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324821
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:35
Start Date: 08/Oct/19 02:35
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1588: HDDS-1986. 
Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332314620
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We could avoid this iteration if the table cache stored its entries
+// in a TreeMap. Currently the cache uses a HashMap, whose get is a
+// constant-time operation, whereas a TreeMap get is O(log n). Since get is
+// a frequent operation on the table, moving to a TreeMap would slow it
+// down. So, for now, list iterates the cache map and builds a TreeMap of
+// the entries that match the keyPrefix and are greater than or equal to
+// startKey. We can revisit this if the list operation becomes slow.
+while (iterator.hasNext()) {
 
 Review comment:
   I am ok with putting this change in if we can prove that we can do large 
list keys. You might want to borrow the DB from @nandakumar131 and see if you 
can list keys with this patch, just a thought.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324821)
Time Spent: 2h  (was: 1h 50m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the in-memory cache 
> and return the response; later it is picked up by the double buffer thread and 
> flushed to disk. So now, when we do listKeys, it should use both the in-memory 
> cache and the RocksDB key table to list keys in a bucket.
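
To make the flow in the description concrete, here is a toy model of that
write path; all names are illustrative, and this is not the Ozone
implementation:

import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Toy model of the HA write path: writes land in an in-memory cache and
// are acknowledged immediately; a background "double buffer" thread
// later flushes them to the backing store.
final class DoubleBufferSketch {
  private final Map<String, String> cache = new ConcurrentHashMap<>();
  private final Map<String, String> db = new ConcurrentHashMap<>(); // stands in for RocksDB
  private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

  // Write: put into the cache; the response goes back to the client here,
  // while the flush to the store happens asynchronously.
  void put(String key, String value) {
    cache.put(key, value);
    pending.add(key);
  }

  // Background flusher thread. (A real implementation batches writes and
  // evicts flushed cache entries using epochs; this sketch keeps
  // everything in cache for simplicity.)
  void startFlusher() {
    Thread flusher = new Thread(() -> {
      try {
        while (true) {
          String key = pending.take();
          db.put(key, cache.get(key));
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    flusher.setDaemon(true);
    flusher.start();
  }

  // Read: consult the cache first, then the store.
  String get(String key) {
    String v = cache.get(key);
    return v != null ? v : db.get(key);
  }
}

The get method shows why listKeys has to merge both sources: a key written
moments ago may exist only in the cache and not yet in RocksDB.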



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324818&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324818
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:34
Start Date: 08/Oct/19 02:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1588: HDDS-1986. 
Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332256331
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We could avoid this iteration if the table cache stored its entries
+// in a TreeMap. Currently the cache uses a HashMap, whose get is a
+// constant-time operation, whereas a TreeMap get is O(log n). Since get is
+// a frequent operation on the table, moving to a TreeMap would slow it
+// down. So, for now, list iterates the cache map and builds a TreeMap of
+// the entries that match the keyPrefix and are greater than or equal to
+// startKey. We can revisit this if the list operation becomes slow.
+while (iterator.hasNext()) {
 
 Review comment:
   I feel that we are better off leaving the old code in place... where we can 
read from the DB. Worst case, we might have to make sure that the cache is 
flushed to the DB before doing the list operation. But practically it may not 
matter.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324818)
Time Spent: 1.5h  (was: 1h 20m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the in-memory cache 
> and return the response; later it is picked up by the double buffer thread and 
> flushed to disk. So now, when we do listKeys, it should use both the in-memory 
> cache and the RocksDB key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324820&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324820
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:34
Start Date: 08/Oct/19 02:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1588: HDDS-1986. 
Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332255873
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We could avoid this iteration if the table cache stored its entries
+// in a TreeMap. Currently the cache uses a HashMap, whose get is a
+// constant-time operation, whereas a TreeMap get is O(log n). Since get is
+// a frequent operation on the table, moving to a TreeMap would slow it
+// down. So, for now, list iterates the cache map and builds a TreeMap of
+// the entries that match the keyPrefix and are greater than or equal to
+// startKey. We can revisit this if the list operation becomes slow.
+while (iterator.hasNext()) {
 
 Review comment:
   How many keys are expected in this cache? And how many in the tree?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324820)
Time Spent: 1h 50m  (was: 1h 40m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the in-memory cache 
> and return the response; later it is picked up by the double buffer thread and 
> flushed to disk. So now, when we do listKeys, it should use both the in-memory 
> cache and the RocksDB key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324819&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324819
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:34
Start Date: 08/Oct/19 02:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1588: HDDS-1986. 
Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332255281
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -645,7 +648,12 @@ public boolean isBucketEmpty(String volume, String bucket)
   @Override
  public List<OmKeyInfo> listKeys(String volumeName, String bucketName,
   String startKey, String keyPrefix, int maxKeys) throws IOException {
+
 List<OmKeyInfo> result = new ArrayList<>();
+if (maxKeys == 0) {
 
 Review comment:
   or <= 0 ?
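
   For illustration, the stricter guard being suggested would look roughly
like this (a sketch of the entry check in listKeys, not the committed code):

    // Treat non-positive page sizes as "nothing requested" before touching
    // the cache or the DB iterator.
    if (maxKeys <= 0) {
      return Collections.emptyList();   // java.util.Collections
    }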
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324819)
Time Spent: 1h 40m  (was: 1.5h)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the in-memory cache 
> and return the response; later it is picked up by the double buffer thread and 
> flushed to disk. So now, when we do listKeys, it should use both the in-memory 
> cache and the RocksDB key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324798&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324798
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 01:35
Start Date: 08/Oct/19 01:35
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332304634
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java
 ##
 @@ -0,0 +1,298 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.request.TestOMRequestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.util.List;
+import java.util.TreeSet;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_DB_DIRS;
+
+/**
+ * Tests OzoneManager MetadataManager.
+ */
+public class TestOmMetadataManager {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneConfiguration ozoneConfiguration;
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OZONE_OM_DB_DIRS,
+folder.getRoot().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+  }
+  @Test
+  public void testListKeys() throws Exception {
+
+String volumeNameA = "volumeA";
+String volumeNameB = "volumeB";
+String ozoneBucket = "ozoneBucket";
+String hadoopBucket = "hadoopBucket";
+
+
+// Create volumes and buckets.
+TestOMRequestUtils.addVolumeToDB(volumeNameA, omMetadataManager);
+TestOMRequestUtils.addVolumeToDB(volumeNameB, omMetadataManager);
+addBucketsToCache(volumeNameA, ozoneBucket);
+addBucketsToCache(volumeNameB, hadoopBucket);
+
+
+String prefixKeyA = "key-a";
+String prefixKeyB = "key-b";
+TreeSet<String> keysASet = new TreeSet<>();
+TreeSet<String> keysBSet = new TreeSet<>();
+for (int i=1; i<= 100; i++) {
+  if (i % 2 == 0) {
+keysASet.add(
+prefixKeyA + i);
+addKeysToOM(volumeNameA, ozoneBucket, prefixKeyA + i, i);
+  } else {
+keysBSet.add(
+prefixKeyB + i);
+addKeysToOM(volumeNameA, hadoopBucket, prefixKeyB + i, i);
+  }
+}
+
+
+TreeSet<String> keysAVolumeBSet = new TreeSet<>();
+TreeSet<String> keysBVolumeBSet = new TreeSet<>();
+for (int i=1; i<= 100; i++) {
+  if (i % 2 == 0) {
+keysAVolumeBSet.add(
+prefixKeyA + i);
+addKeysToOM(volumeNameB, ozoneBucket, prefixKeyA + i, i);
+  } else {
+keysBVolumeBSet.add(
+prefixKeyB + i);
+addKeysToOM(volumeNameB, hadoopBucket, prefixKeyB + i, i);
+  }
+}
+
+
+// List all keys which have prefix "key-a"
+List<OmKeyInfo> omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+null, prefixKeyA, 100);
+
+Assert.assertEquals(50, omKeyInfoList.size());
+
+for (OmKeyInfo omKeyInfo : omKeyInfoList) {
+  Assert.assertTrue(omKeyInfo.getKeyName().startsWith(
+  prefixKeyA));
+}
+
+
+String startKey = prefixKeyA + 10;
+omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+startKey, prefixKeyA, 100);
+
+Assert.assertEquals(keysASet.tailSet(
+startKey).size() - 1, omKeyInfoList.size());
+
+startKey = prefixKeyA + 38;
+omKe

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324797&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324797
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 01:30
Start Date: 08/Oct/19 01:30
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332303770
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We could avoid this iteration if the table cache stored its entries
+// in a TreeMap. Currently the cache uses a HashMap, whose get is a
+// constant-time operation, whereas a TreeMap get is O(log n). Since get is
+// a frequent operation on the table, moving to a TreeMap would slow it
+// down. So, for now, list iterates the cache map and builds a TreeMap of
+// the entries that match the keyPrefix and are greater than or equal to
+// startKey. We can revisit this if the list operation becomes slow.
+while (iterator.hasNext()) {
+  Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+  iterator.next();
+
+  String key = entry.getKey().getCacheKey();
+  OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+  // Make sure the cache entry is not for a delete key request.
+
+  if (omKeyInfo != null) {
+if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
+  cacheKeyMap.put(key, omKeyInfo);
 }
+  } else {
+deletedKeySet.add(key);
+  }
+}
+
+// Get maxKeys from DB if it has.
+
+try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>>
+ keyIter = getKeyTable().iterator()) {
+  KeyValue<String, OmKeyInfo> kv;
+  keyIter.seek(seekKey);
+  // We need to iterate up to maxKeys + 1 entries here because, if
+  // skipStartKey is true, we skip that entry and still return maxKeys.
+  while (currentCount < maxKeys + 1 && keyIter.hasNext()) {
+kv = keyIter.next();
 if (kv != null && kv.getKey().startsWith(seekPrefix)) {
-  result.add(kv.getValue());
-  currentCount++;
+
+  // Consider only entries that are not marked for delete.
+  if(!deletedKeySet.contains(kv.getKey())) {
+cacheKeyMap.put(kv.getKey(), kv.getValue());
+currentCount++;
+  }
 } else {
   // The SeekPrefix does not match any more, we can break out of the
   // loop.
   break;
 }
   }
 }
+
+// Finally, the DB entries and cache entries are merged; return at most
+// maxKeys entries from the sorted map.
+currentCount = 0;
+
+for (Map.Entry<String, OmKeyInfo> cacheKey : cacheKeyMap.entrySet()) {
 
 Review comment:
   The second iteration is unfortunate. We should see if there is a way to 
avoid it.
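
   One way to avoid the second pass, sketched with plain sorted maps standing
in for the cache TreeMap and the RocksDB iterator (assumed simplifications:
the startKey carries the seek prefix, cache tombstones are modeled as a Set
of deleted keys, and values are plain Strings; this is not the committed
code):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.Set;

// Single-pass merge over two sorted key streams.
final class SortedMergeSketch {

  static List<String> listKeys(NavigableMap<String, String> cache,
      NavigableMap<String, String> db, Set<String> deleted,
      String prefix, String startKey, int maxKeys) {
    // tailMap(..., false) skips the start key itself, matching the
    // skipStartKey behaviour discussed in the patch.
    Iterator<Map.Entry<String, String>> c =
        cache.tailMap(startKey, false).entrySet().iterator();
    Iterator<Map.Entry<String, String>> d =
        db.tailMap(startKey, false).entrySet().iterator();
    List<String> out = new ArrayList<>();
    Map.Entry<String, String> ce = next(c, prefix);
    Map.Entry<String, String> de = next(d, prefix);
    while (out.size() < maxKeys && (ce != null || de != null)) {
      if (de == null
          || (ce != null && ce.getKey().compareTo(de.getKey()) <= 0)) {
        if (de != null && ce.getKey().equals(de.getKey())) {
          de = next(d, prefix);          // cache entry shadows the DB copy
        }
        out.add(ce.getValue());
        ce = next(c, prefix);
      } else if (deleted.contains(de.getKey())) {
        de = next(d, prefix);            // key deleted in cache: skip DB copy
      } else {
        out.add(de.getValue());
        de = next(d, prefix);
      }
    }
    return out;
  }

  // Advance to the next entry; since the maps are sorted, the first key
  // that no longer matches the prefix ends that stream.
  private static Map.Entry<String, String> next(
      Iterator<Map.Entry<String, String>> it, String prefix) {
    if (!it.hasNext()) {
      return null;
    }
    Map.Entry<String, String> e = it.next();
    return e.getKey().startsWith(prefix) ? e : null;
  }
}

Because both inputs are already sorted, results come out in key order and the
loop stops after maxKeys hits, so no intermediate map of every match has to
be built and then iterated a second time.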
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324797)
Time Spent: 1h 10m  (was: 1h)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the in-memory cache 
> and return the response; later it is picked up by the double buffer thread and 
> flushed to disk. So now, when we do listKeys, it should use both the in-memory 
> cache and the RocksDB key table to list keys i

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323818&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323818
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 05/Oct/19 00:17
Start Date: 05/Oct/19 00:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538597832
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 32 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 50 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 843 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 943 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 32 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 31 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 31 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 33 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 2387 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux eb9f4e7930dc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f209722 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibra

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323805&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323805
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 05/Oct/19 00:13
Start Date: 05/Oct/19 00:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538597298
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 29 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 48 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 930 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1024 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 36 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-hdds in the patch failed. |
   | -1 | compile | 17 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-hdds in the patch failed. |
   | -1 | javac | 17 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 28 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 787 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 25 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 26 | hadoop-hdds in the patch failed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 2495 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 80553b9dfed3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a3cf54c |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibr

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323056&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323056
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:37
Start Date: 03/Oct/19 23:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538169169
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 54 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 823 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 22 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 920 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 32 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 695 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 22 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2305 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 09b9504ba0c3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76605f1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibr

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323054&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323054
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:34
Start Date: 03/Oct/19 23:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538168696
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 38 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 12 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 845 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 944 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 40 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-hdds in the patch failed. |
   | -1 | compile | 17 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-hdds in the patch failed. |
   | -1 | javac | 17 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 31 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 709 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2342 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux dbf2530a1ece 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76605f1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibr

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323018&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323018
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 03/Oct/19 22:55
Start Date: 03/Oct/19 22:55
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538160179
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323018)
Time Spent: 20m  (was: 10m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the in-memory cache 
> and return the response; later it is picked up by the double buffer thread and 
> flushed to disk. So now, when we do listKeys, it should use both the in-memory 
> cache and the RocksDB key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323015&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323015
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 03/Oct/19 22:54
Start Date: 03/Oct/19 22:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588
 
 
   Implement listKeys API.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323015)
Remaining Estimate: 0h
Time Spent: 10m

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the in-memory cache 
> and return the response; later it is picked up by the double buffer thread and 
> flushed to disk. So now, when we do listKeys, it should use both the in-memory 
> cache and the RocksDB key table to list keys in a bucket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org