[jira] [Commented] (HDFS-13123) RBF: Add a balancer tool to move data across subcluster

2019-06-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875688#comment-16875688
 ] 

Hadoop QA commented on HDFS-13123:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 22s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 90 new + 0 unchanged - 0 fixed = 90 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13123 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973273/HDFS-13123.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 2b4d03934a0d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d203045 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27117/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| compile | 

[jira] [Updated] (HDFS-14483) Backport HDFS-3246,HDFS-14111 ByteBuffer pread interface to branch-2.9

2019-06-29 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14483:
---
Summary: Backport HDFS-3246,HDFS-14111 ByteBuffer pread interface to 
branch-2.9  (was: Backport HDFS-3246 ByteBuffer pread interface to branch-2.9)

> Backport HDFS-3246,HDFS-14111 ByteBuffer pread interface to branch-2.9
> --
>
> Key: HDFS-14483
> URL: https://issues.apache.org/jira/browse/HDFS-14483
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Zheng Hu
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14483.branch-2.8.v1.patch
>
>
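For context, the ByteBuffer pread interface from HDFS-3246 referenced in the
summary lets a caller read from an arbitrary file offset into a ByteBuffer
without moving the stream's own position. A hedged usage sketch, assuming the
backported stream implements ByteBufferPositionedReadable as on trunk (the
path below is illustrative):

{code:java}
import java.nio.ByteBuffer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ByteBufferPreadExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataInputStream in = fs.open(new Path("/tmp/example"))) {
      ByteBuffer buf = ByteBuffer.allocate(4096);
      // Positioned read: fills up to buf.remaining() bytes from offset 1024
      // without changing the stream's current position.
      int n = in.read(1024L, buf);
      buf.flip();
      System.out.println("read " + n + " bytes");
    }
  }
}
{code}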







[jira] [Commented] (HDFS-13123) RBF: Add a balancer tool to move data across subcluster

2019-06-29 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875653#comment-16875653
 ] 

hemanthboyina commented on HDFS-13123:
--

Thanks [~elgoiri], I will be happy to take on the prototype.

I have submitted the initial patch; please review it, and I will develop 
further changes according to the comments.

> RBF: Add a balancer tool to move data across subcluster 
> 
>
> Key: HDFS-13123
> URL: https://issues.apache.org/jira/browse/HDFS-13123
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS Router-Based Federation Rebalancer.pdf, 
> HDFS-13123.patch
>
>
> Follow the discussion in HDFS-12615. This Jira tracks the effort to build a 
> rebalancer tool, used by router-based federation to move data among 
> subclusters.
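For readers skimming the thread, a rough outline of what such a move typically
involves (an assumption drawn from the description above, not necessarily the
approach in the attached design PDF): copy the data to the destination
subcluster, verify it, repoint the mount entry, and then remove the source copy.

{code:java}
// Hypothetical names throughout; this is a sketch of the overall flow only.
public class RebalanceSketch {
  public void moveAcrossSubclusters(String mountPoint, String srcNs, String dstNs) {
    copyData(srcNs, mountPoint, dstNs);    // e.g. a DistCp-style copy
    verifyCopy(srcNs, dstNs, mountPoint);  // compare lengths/checksums
    updateMountTable(mountPoint, dstNs);   // routers now resolve to dstNs
    deleteSource(srcNs, mountPoint);       // reclaim space on the source
  }

  private void copyData(String srcNs, String path, String dstNs) { /* ... */ }
  private void verifyCopy(String srcNs, String dstNs, String path) { /* ... */ }
  private void updateMountTable(String path, String dstNs) { /* ... */ }
  private void deleteSource(String srcNs, String path) { /* ... */ }
}
{code}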






[jira] [Updated] (HDFS-13123) RBF: Add a balancer tool to move data across subcluster

2019-06-29 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-13123:
-
Attachment: HDFS-13123.patch
Status: Patch Available  (was: Open)

> RBF: Add a balancer tool to move data across subcluster 
> 
>
> Key: HDFS-13123
> URL: https://issues.apache.org/jira/browse/HDFS-13123
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS Router-Based Federation Rebalancer.pdf, 
> HDFS-13123.patch
>
>
> Follow the discussion in HDFS-12615. This Jira tracks the effort to build a 
> rebalancer tool, used by router-based federation to move data among 
> subclusters.






[jira] [Commented] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-06-29 Thread Jinglun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875634#comment-16875634
 ] 

Jinglun commented on HDFS-14547:


Hi [~xkrogen], thanks for your nice suggestions, and I wish you a happy 
vacation :). Besides following the suggestions, I also made ConstEnumException 
final because Jenkins suggested it.

Uploaded patch-007; waiting for Jenkins.

> DirectoryWithQuotaFeature.quota costs additional memory even the storage type 
> quota is not set.
> ---
>
> Key: HDFS-14547
> URL: https://issues.apache.org/jira/browse/HDFS-14547
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14547-design, HDFS-14547-patch003-Test Report.pdf, 
> HDFS-14547.001.patch, HDFS-14547.002.patch, HDFS-14547.003.patch, 
> HDFS-14547.004.patch, HDFS-14547.005.patch, HDFS-14547.006.patch, 
> HDFS-14547.007.patch
>
>
> Our XiaoMi HDFS deployment is considering an upgrade from 2.6 to 3.1. We 
> noticed that the storage type quota 'tsCounts' is instantiated as 
> new EnumCounters<StorageType>(StorageType.class), so it costs a long[5] even 
> if no storage type quota is set on the inode (only a space quota or name 
> quota).
> In our cluster we have many directories with quotas, and the NameNode's 
> memory is under pressure, so the additional cost is a problem.
>  See DirectoryWithQuotaFeature.Builder().
>  
> {code:java}
> class DirectoryWithQuotaFeature$Builder {
>   public Builder() {
>     this.quota = new QuotaCounts.Builder().nameSpace(DEFAULT_NAMESPACE_QUOTA).
>         storageSpace(DEFAULT_STORAGE_SPACE_QUOTA).
>         typeSpaces(DEFAULT_STORAGE_SPACE_QUOTA).build(); // set default value -1.
>     this.usage = new QuotaCounts.Builder().nameSpace(1).build();
>   }
>   public Builder typeSpaces(long val) { // set default value.
>     this.tsCounts.reset(val);
>     return this;
>   }
> }
> class QuotaCounts$Builder {
>   public Builder() {
>     this.nsSsCounts = new EnumCounters<Quota>(Quota.class);
>     this.tsCounts = new EnumCounters<StorageType>(StorageType.class);
>   }
> }
> class EnumCounters<E extends Enum<E>> {
>   public EnumCounters(final Class<E> enumClass) {
>     final E[] enumConstants = enumClass.getEnumConstants();
>     Preconditions.checkNotNull(enumConstants);
>     this.enumClass = enumClass;
>     this.counters = new long[enumConstants.length]; // a long array is allocated here.
>   }
> }
> {code}
> Related to HDFS-14542.
>  
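For reference, a minimal sketch of the copy-on-write idea implied by the
ConstEnumException mentioned in the comment above (assumed shapes and names,
not the attached patch): all inodes without storage type quotas can share one
immutable counters instance, so the per-inode long[5] is only allocated once a
storage type quota is actually set.

{code:java}
class EnumCounters<E extends Enum<E>> {
  protected final long[] counters; // one slot per enum constant

  EnumCounters(Class<E> enumClass, long init) {
    counters = new long[enumClass.getEnumConstants().length];
    java.util.Arrays.fill(counters, init);
  }

  void set(E e, long v) { counters[e.ordinal()] = v; }

  long get(E e) { return counters[e.ordinal()]; }
}

final class ConstEnumException extends RuntimeException {
  ConstEnumException(String msg) { super(msg); }
}

// A single shared instance stands in for "all values are -1 (unset)";
// writers catch ConstEnumException (or check the type) and swap in a
// private, mutable EnumCounters before modifying.
final class ConstEnumCounters<E extends Enum<E>> extends EnumCounters<E> {
  ConstEnumCounters(Class<E> enumClass, long init) { super(enumClass, init); }

  @Override
  void set(E e, long v) {
    throw new ConstEnumException("attempt to modify immutable counters");
  }
}
{code}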






[jira] [Updated] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-06-29 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14547:
---
Attachment: HDFS-14547.007.patch

> DirectoryWithQuotaFeature.quota costs additional memory even the storage type 
> quota is not set.
> ---
>
> Key: HDFS-14547
> URL: https://issues.apache.org/jira/browse/HDFS-14547
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14547-design, HDFS-14547-patch003-Test Report.pdf, 
> HDFS-14547.001.patch, HDFS-14547.002.patch, HDFS-14547.003.patch, 
> HDFS-14547.004.patch, HDFS-14547.005.patch, HDFS-14547.006.patch, 
> HDFS-14547.007.patch
>
>
> Our XiaoMi HDFS deployment is considering an upgrade from 2.6 to 3.1. We 
> noticed that the storage type quota 'tsCounts' is instantiated as 
> new EnumCounters<StorageType>(StorageType.class), so it costs a long[5] even 
> if no storage type quota is set on the inode (only a space quota or name 
> quota).
> In our cluster we have many directories with quotas, and the NameNode's 
> memory is under pressure, so the additional cost is a problem.
>  See DirectoryWithQuotaFeature.Builder().
>  
> {code:java}
> class DirectoryWithQuotaFeature$Builder {
>   public Builder() {
>     this.quota = new QuotaCounts.Builder().nameSpace(DEFAULT_NAMESPACE_QUOTA).
>         storageSpace(DEFAULT_STORAGE_SPACE_QUOTA).
>         typeSpaces(DEFAULT_STORAGE_SPACE_QUOTA).build(); // set default value -1.
>     this.usage = new QuotaCounts.Builder().nameSpace(1).build();
>   }
>   public Builder typeSpaces(long val) { // set default value.
>     this.tsCounts.reset(val);
>     return this;
>   }
> }
> class QuotaCounts$Builder {
>   public Builder() {
>     this.nsSsCounts = new EnumCounters<Quota>(Quota.class);
>     this.tsCounts = new EnumCounters<StorageType>(StorageType.class);
>   }
> }
> class EnumCounters<E extends Enum<E>> {
>   public EnumCounters(final Class<E> enumClass) {
>     final E[] enumConstants = enumClass.getEnumConstants();
>     Preconditions.checkNotNull(enumConstants);
>     this.enumClass = enumClass;
>     this.counters = new long[enumConstants.length]; // a long array is allocated here.
>   }
> }
> {code}
> Related to HDFS-14542.
>  






[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-06-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=269799&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269799
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 29/Jun/19 23:57
Start Date: 29/Jun/19 23:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1038: HDDS-1736. 
Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038#issuecomment-506995469
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 481 | trunk passed |
   | +1 | compile | 260 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 851 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   | 0 | spotbugs | 315 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 502 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 441 | the patch passed |
   | +1 | compile | 272 | the patch passed |
   | +1 | cc | 272 | the patch passed |
   | +1 | javac | 272 | the patch passed |
   | -0 | checkstyle | 43 | hadoop-ozone: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | +1 | findbugs | 518 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 248 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1082 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 6113 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1038 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 7463d7fcdeb1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d203045 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/1/testReport/ |
   | Max. process+thread count | 4896 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 269799)
Time Spent: 20m  (was: 10m)

> Cleanup 2phase old HA code for Key requests.
> 
>
> 

[jira] [Updated] (HDDS-505) OzoneManager HA

2019-06-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-505:

Attachment: Handling Write Requests with OM HA.pdf

> OzoneManager HA
> ---
>
> Key: HDDS-505
> URL: https://issues.apache.org/jira/browse/HDDS-505
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: Handling Write Requests with OM HA.pdf, OzoneManager 
> HA.pdf
>
>
> OzoneManager can be a single point of failure in an Ozone cluster. We propose 
> an HA implementation for OM using Ratis (Raft protocol).
> Attached the design document for the proposed implementation.






[jira] [Updated] (HDDS-1499) OzoneManager Cache

2019-06-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1499:
-
Attachment: (was: Handling Write Requests with OM HA.pdf)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> For OM HA, we are planning a double-buffer implementation that flushes 
> transactions in batches, instead of using a RocksDB put() for every 
> operation. Once this is in place, OzoneManager HA needs a cache to handle 
> and serve requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table. That 
> way, users of this table do not need to check the cache and the DB 
> themselves; we can update the get API in the table to consult the cache.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs for cleanup and for adding entries to the cache.
> Usage that adds entries to the cache will be wired up in further Jiras.
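A minimal sketch of the cache-first get() described above (assumed names, not
the actual HDDS-1499 API): the table consults its in-memory cache before
falling back to the DB, so entries buffered by the double buffer are visible
before they are flushed to RocksDB.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface Store<K, V> {
  V dbGet(K key); // stands in for the underlying RocksDB-backed table
}

class CachedTable<K, V> {
  private final Map<K, V> cache = new ConcurrentHashMap<>();
  private final Store<K, V> store;

  CachedTable(Store<K, V> store) {
    this.store = store;
  }

  // Serve from the cache when present, otherwise read through to the DB.
  V get(K key) {
    V v = cache.get(key);
    return v != null ? v : store.dbGet(key);
  }

  // Called when a transaction is applied; cleanup runs once the batch
  // has been flushed to the DB.
  void addCacheEntry(K key, V value) { cache.put(key, value); }

  void cleanup(K key) { cache.remove(key); }
}
{code}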






[jira] [Updated] (HDDS-1499) OzoneManager Cache

2019-06-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1499:
-
Attachment: Handling Write Requests with OM HA.pdf

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> For OM HA, we are planning a double-buffer implementation that flushes 
> transactions in batches, instead of using a RocksDB put() for every 
> operation. Once this is in place, OzoneManager HA needs a cache to handle 
> and serve requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table. That 
> way, users of this table do not need to check the cache and the DB 
> themselves; we can update the get API in the table to consult the cache.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs for cleanup and for adding entries to the cache.
> Usage that adds entries to the cache will be wired up in further Jiras.






[jira] [Updated] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-06-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1736:
-
Target Version/s: 0.5.0

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, 
> createKey, etc.
> Clean up the old code changes that were added as part of HDDS-1250 and 
> HDDS-1262 for allocateBlock and openKey.






[jira] [Updated] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-06-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1736:
-
Status: Patch Available  (was: Open)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, 
> createKey, etc.
> Clean up the old code changes that were added as part of HDDS-1250 and 
> HDDS-1262 for allocateBlock and openKey.






[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-06-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=269796&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269796
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 29/Jun/19 22:14
Start Date: 29/Jun/19 22:14
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1038: 
HDDS-1736. Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 269796)
Time Spent: 10m
Remaining Estimate: 0h

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, 
> createKey, etc.
> Clean up the old code changes that were added as part of HDDS-1250 and 
> HDDS-1262 for allocateBlock and openKey.






[jira] [Updated] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-06-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1736:
-
Summary: Cleanup 2phase old HA code for Key requests.  (was: Cleanup 2phase 
old HA code for allocateBlock request)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, 
> createKey, etc.
> Clean up the old code changes that were added as part of HDDS-1250 and 
> HDDS-1262 for allocateBlock and openKey.






[jira] [Updated] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-06-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1736:
-
Labels: pull-request-available  (was: )

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, 
> createKey, etc.
> Clean up the old code changes that were added as part of HDDS-1250 and 
> HDDS-1262 for allocateBlock and openKey.






[jira] [Updated] (HDDS-1736) Cleanup 2phase old HA code for allocateBlock request

2019-06-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1736:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-505

> Cleanup 2phase old HA code for allocateBlock request
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, 
> createKey, etc.
> Clean up the old code changes that were added as part of HDDS-1250 and 
> HDDS-1262 for allocateBlock and openKey.






[jira] [Created] (HDDS-1736) Cleanup 2phase old HA code for allocateBlock request

2019-06-29 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1736:


 Summary: Cleanup 2phase old HA code for allocateBlock request
 Key: HDDS-1736
 URL: https://issues.apache.org/jira/browse/HDDS-1736
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


HDDS-1638 brought in HA code for key operations like allocateBlock, createKey, 
etc.

Clean up the old code changes that were added as part of HDDS-1250 and 
HDDS-1262 for allocateBlock and openKey.






[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-29 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875573#comment-16875573
 ] 

Eric Yang commented on HDDS-1735:
-

{quote}Do you use these shell scripts? Based on your comments you prefer to use 
plain maven commands. If you don't use the shell scripts, why is it a problem 
to improve them? The current maven build is not affected at all.{quote}

If the scripts are offered, then dependencies are introduced on the offered 
scripts, and it becomes more difficult to remove the scripts later.

{quote}BTW the maven lifecycle model is pretty limited compared to the 
graph-based approach of gradle. For example, if you have more than two types of 
tests, not just integration tests and unit tests, it doesn't fit very 
well.{quote}

That is an incorrect assumption: you can use maven profiles to trigger 
different types of tests, and more comprehensive test suites can be written as 
sub-modules. This avoids elementary mistakes like:

# adding test artifacts to the distribution binaries, e.g. including the smoke 
tests in the distribution tarball;
# running integration tests before the package is built.

The Maven build lifecycle is a production-tested process, and it would be 
foolish to ignore the wisdom embedded in that workflow. Gradle provides a 
simplified lifecycle, which can be powerful in the hands of a seasoned 
developer. However, it does not help an inexperienced developer sequence the 
build workflow. In the end, a gradle project may become just as messy as an Ant 
project, which maven addressed by making the build lifecycle more 
process-oriented.

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper 
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They define how the tests should be executed, with the following contract:
>  * the problems should be printed out to the console
>  * in case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and the integration tests should be 
> separated. Integration tests are more flaky, and it's better to have a way 
> to run only the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should 
> use the -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.






[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-29 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875571#comment-16875571
 ] 

Elek, Marton commented on HDDS-1735:


{quote}This patch will add code that uses pom.ozone.xml, which I would like to 
move to hadoop-ozone-project. It does impact my daily usage of maven, and means 
more effort to clean up.
{quote}
That's an independent question, and it's the whole point of using shell 
scripts: in case of a project restructure, it's enough to update the shell 
scripts. As of _now_ we have the pom.ozone.xml, so I think it's not a big 
problem to use it.

Do you use these shell scripts? Based on your comments you prefer to use plain 
maven commands. If you don't use the shell scripts, why is it a problem to 
improve them? The current maven build is not affected at all.

BTW the maven lifecycle model is pretty limited compared to the graph-based 
approach of gradle. For example, if you have more than two types of tests, not 
just integration tests and unit tests, it doesn't fit very well. Also, in your 
command both the unit tests AND the integration tests are executed. The only 
thing I did was create shortcuts to make it easier to remember which maven 
flags should be used. If you have better shortcuts with exactly the same 
behavior, feel free to post them in a patch.

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper 
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They define how the tests should be executed, with the following contract:
>  * the problems should be printed out to the console
>  * in case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and the integration tests should be 
> separated. Integration tests are more flaky, and it's better to have a way 
> to run only the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should 
> use the -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.






[jira] [Commented] (HDFS-14585) Backport HDFS-8901 Use ByteBuffer in DFSInputStream#read to branch2.9

2019-06-29 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875570#comment-16875570
 ] 

stack commented on HDFS-14585:
--

Yes, it passed on the second attempt. Flakey. The findbugs warning has a 
covering JIRA, HADOOP-16386, filed by the mighty [~jojochuang].

I'll commit this after Monday unless there is an objection. Thanks [~leosun08].

> Backport HDFS-8901 Use ByteBuffer in DFSInputStream#read to branch2.9
> -
>
> Key: HDFS-14585
> URL: https://issues.apache.org/jira/browse/HDFS-14585
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14585.branch-2.9.v1.patch, 
> HDFS-14585.branch-2.9.v2.patch, HDFS-14585.branch-2.9.v2.patch, 
> HDFS-14585.branch-2.v1.patch
>
>







[jira] [Commented] (HDFS-14620) RBF: when Disable namespace in kerberos with superuser's principal, ERROR appear 'not a super user'

2019-06-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875566#comment-16875566
 ] 

Hadoop QA commented on HDFS-14620:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
58s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 57s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14620 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973248/HDFS-14620-HDFS-13891-01.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 58b5396fe93a 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 02597b6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27115/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27115/testReport/ |
| Max. process+thread 

[jira] [Commented] (HDDS-1716) Smoketest results are generated with an internal user

2019-06-29 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875567#comment-16875567
 ] 

Eric Yang commented on HDDS-1716:
-

Rebot is the XML parser for Robot Framework, which is different from the rebot 
package on [npm|https://www.npmjs.com/package/rebot]. The comment about rebot 
can be discarded.

+1 on the second attempt of PR #1002.

> Smoketest results are generated with an internal user
> -
>
> Key: HDDS-1716
> URL: https://issues.apache.org/jira/browse/HDDS-1716
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [~eyang] reported in HDDS-1609 the problem that the smoketest results are 
> generated as a user (the user inside the docker container) which can be 
> different from the host user.
> There is a minimal risk that the test results can be deleted/corrupted by 
> other users if the current user is different from uid=1000.
> I opened this issue because [~eyang] told me during an offline discussion 
> that HDDS-1609 is a more complex issue and is not only about the ownership 
> of the test results.
> I suggest handling the two problems in different ways. With this patch, the 
> permissions of the test result files can be fixed easily.
> In HDDS-1609 we can discuss the general security problems and try to find a 
> generic solution for them.
> Steps to reproduce _this_ problem:
>  # Use a user which is different from uid=1000
>  # Create a new ozone build (mvn clean install -f pom.ozone.xml -DskipTests)
>  # Go to a compose directory (cd 
> hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/)
>  # Execute the tests (./test.sh)
>  # Check the ownership of the results (ls -lah ./results)
> Current result: the owner of the result files is the user uid=1000.
> Expected result: the owner of the files should always be the current user 
> (even if the current uid is different).






[jira] [Comment Edited] (HDFS-14620) RBF: when Disable namespace in kerberos with superuser's principal, ERROR appear 'not a super user'

2019-06-29 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875561#comment-16875561
 ] 

Ayush Saxena edited comment on HDFS-14620 at 6/29/19 5:05 PM:
--

Thanx [~Huachao] for the report.

Can you extend a UT for this?

[~crh], could you give it a check too?


was (Author: ayushtkn):
Thanx [~Huachao] for the report.

Can you extend a UT for this?

> RBF: when Disable namespace in kerberos with superuser's principal, ERROR 
> appear 'not a super user' 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, we cannot disable a 
> namespace; the error info is below. The code judges the principal as not 
> equal to hdfs, and hdfs also does not belong to the supergroup.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: 
> hdfs-test@EXAMPLE is not a super user at 
> org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}






[jira] [Updated] (HDFS-14620) RBF: when Disable namespace in kerberos with superuser's principal, ERROR appear 'not a super user'

2019-06-29 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14620:

Affects Version/s: (was: HDFS-13891)
   3.3.0
Fix Version/s: (was: HDFS-13891)

> RBF: when Disable namespace in kerberos with superuser's principal, ERROR 
> appear 'not a super user' 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, we cannot disable a 
> namespace; the error info is below. The code judges the principal as not 
> equal to hdfs, and hdfs also does not belong to the supergroup.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: 
> hdfs-test@EXAMPLE is not a super user at 
> org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}






[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-29 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875554#comment-16875554
 ] 

Eric Yang commented on HDDS-1735:
-

[~elek] 
{quote}Can you please clarify what is your suggestion? How should I improve the 
patch?{quote}

The current direction for Ozone continuous integration is not moving the way 
I would like to see. For example, a user can run:

{code}
mvn clean test
{code}

This runs the unit tests; a test case can be selectively launched with:

{code}
mvn clean test -Dtest=[test-case]
{code}

Integration tests can be performed by:

{code}
mvn clean integration-test
{code}

A selective integration test can be performed by:

{code}
mvn clean integration-test -Dtest=[test-case]
{code}

To run an environment-specific build, use a profile:

{code}
mvn clean integration-test -Pk8s
{code}

I am questioning whether the direction of this issue is good for Ozone 
continuous integration, because we are creating shell scripts that wrap around 
maven commands with many unique parameters to get the Ozone build going. This 
usually means that the pom.xml and its associated configurations are not 
properly written, and it will lead to problems when performing a maven release 
or maven deploy. This is the reason I am concerned about introducing more 
shell script wrappers.

I would really like to have the ability to use hadoop-ozone-project, instead 
of -f pom.ozone.xml, to build the project. This would bring Ozone in line with 
the other Hadoop sub-projects. The precommit build could minimize build time 
by building from hadoop-ozone-project. I opened HDDS-1661 for that work. Can 
we avoid references to pom.ozone.xml in the meantime?

{quote}Can you please confirm that this patch doesn't block you in any of your 
work? (This is a fix for the existing shell scripts which don't modify any part 
of the maven build just help to run certain build steps with maven){quote}

This patch will add code that uses pom.ozone.xml, which I would like to move 
to hadoop-ozone-project. It does impact my daily usage of maven, and means 
more effort to clean up.

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper 
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They define how the tests should be executed, with the following contract:
>  * the problems should be printed out to the console
>  * in case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and the integration tests should be 
> separated. Integration tests are more flaky, and it's better to have a way 
> to run only the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should 
> use the -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.






[jira] [Updated] (HDFS-14620) RBF: when Disable namespace in kerberos with superuser's principal, ERROR appear 'not a super user'

2019-06-29 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14620:
-
Status: Patch Available  (was: Open)

> RBF: when Disable namespace in kerberos with superuser's principal, ERROR 
> appear 'not a super user' 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14620-HDFS-13891-01.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, we cannot disable a 
> namespace; the error info is below. The code judges the principal as not 
> equal to hdfs, and hdfs also does not belong to the supergroup.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: 
> hdfs-test@EXAMPLE is not a super user at 
> org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}






[jira] [Updated] (HDFS-14620) RBF: when Disable namespace in kerberos with superuser's principal, ERROR appear 'not a super user'

2019-06-29 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14620:
-
Attachment: HDFS-14620-HDFS-13891-01.patch

> RBF: when Disable namespace in kerberos with superuser's principal, ERROR 
> appear 'not a super user' 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14620-HDFS-13891-01.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, we cannot disable a 
> namespace; the error info is below. The code judges the principal as not 
> equal to hdfs, and hdfs also does not belong to the supergroup.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: 
> hdfs-test@EXAMPLE is not a super user at 
> org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}






[jira] [Created] (HDFS-14620) RBF: when Disable namespace in kerberos with superuser's principal, ERROR appear 'not a super user'

2019-06-29 Thread luhuachao (JIRA)
luhuachao created HDFS-14620:


 Summary: RBF: when Disable namespace in kerberos with superuser's 
principal, ERROR appear 'not a super user' 
 Key: HDFS-14620
 URL: https://issues.apache.org/jira/browse/HDFS-14620
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: HDFS-13891
Reporter: luhuachao
 Fix For: HDFS-13891


Using the superuser hdfs's principal hdfs-test@EXAMPLE, we cannot disable a 
namespace; the error info is below. The code judges the principal as not equal 
to hdfs, and hdfs also does not belong to the supergroup.
{code:java}
[hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: 
hdfs-test@EXAMPLE is not a super user at 
org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
{code}
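A hedged sketch of the check in question (assumed names, not the actual
RouterPermissionChecker code and not the eventual patch): comparing the full
Kerberos principal string against the superuser name fails even for a genuine
superuser, whereas comparing the short name derived from the principal via the
auth_to_local rules is the usual pattern.

{code:java}
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

class SuperuserCheckSketch {
  static void checkSuperuserPrivilege(UserGroupInformation ugi,
      String superUser, String superGroup) throws AccessControlException {
    // getShortUserName() maps e.g. hdfs-test@EXAMPLE to its local short
    // name via the auth_to_local rules, instead of comparing the full
    // principal string.
    if (ugi.getShortUserName().equals(superUser)) {
      return;
    }
    for (String group : ugi.getGroupNames()) {
      if (group.equals(superGroup)) {
        return;
      }
    }
    throw new AccessControlException(
        ugi.getUserName() + " is not a super user");
  }
}
{code}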






[jira] [Comment Edited] (HDFS-14585) Backport HDFS-8901 Use ByteBuffer in DFSInputStream#read to branch2.9

2019-06-29 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875517#comment-16875517
 ] 

Lisheng Sun edited comment on HDFS-14585 at 6/29/19 2:27 PM:
-

 
{quote}I am offering praise on aspects of your work. No response required.
 Test failure looks unrelated but let me retry.
 Reviewing the patch, v2 looks good to me.
{quote}
Thank [~stack] for affirmation of my work.

I confirmed again UT is ok in org.apache.hadoop.ipc.TestRPC of my local.

{code:java}
[INFO] Running org.apache.hadoop.ipc.TestRPC
[INFO] Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.925 
s - in org.apache.hadoop.ipc.TestRPC
{code}
So the test failure should be unrelated to this patch. Thank you.
  


was (Author: leosun08):
 
{quote}I am offering praise on aspects of your work. No response required.
 Test failure looks unrelated but let me retry.
 Reviewing the patch, v2 looks good to me.
{quote}
Thanks [~stack] for the affirmation of my work. And I confirmed again that the 
test failure should be unrelated to this patch. Thank you.
  

> Backport HDFS-8901 Use ByteBuffer in DFSInputStream#read to branch2.9
> -
>
> Key: HDFS-14585
> URL: https://issues.apache.org/jira/browse/HDFS-14585
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14585.branch-2.9.v1.patch, 
> HDFS-14585.branch-2.9.v2.patch, HDFS-14585.branch-2.9.v2.patch, 
> HDFS-14585.branch-2.v1.patch
>
>







[jira] [Commented] (HDFS-14585) Backport HDFS-8901 Use ByteBuffer in DFSInputStream#read to branch2.9

2019-06-29 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875517#comment-16875517
 ] 

Lisheng Sun commented on HDFS-14585:


 
{quote}I am offering praise on aspects of your work. No response required.
 Test failure looks unrelated but let me retry.
 Reviewing the patch, v2 looks good to me.
{quote}
Thanks [~stack] for the affirmation of my work. And I confirmed again that the 
test failure should be unrelated to this patch. Thank you.
  

> Backport HDFS-8901 Use ByteBuffer in DFSInputStream#read to branch2.9
> -
>
> Key: HDFS-14585
> URL: https://issues.apache.org/jira/browse/HDFS-14585
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14585.branch-2.9.v1.patch, 
> HDFS-14585.branch-2.9.v2.patch, HDFS-14585.branch-2.9.v2.patch, 
> HDFS-14585.branch-2.v1.patch
>
>







[jira] [Commented] (HDFS-13270) RBF: Router audit logger

2019-06-29 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875515#comment-16875515
 ] 

hemanthboyina commented on HDFS-13270:
--

[~maobaolong] we can make DefaultAuditLogger abstract and make it common to the 
NameNode and the Router.

[~elgoiri] any suggestions for this?
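
A rough sketch of that refactoring (hypothetical class and method names; a 
sketch of the suggestion, not an actual patch):
{code:java}
// Hedged sketch: hoist the shared audit formatting into an abstract base
// class; the NameNode and the Router each supply their own server context.
abstract class AbstractAuditLogger {
  /** Subclasses report who they are, e.g. a NameNode host or a Router id. */
  protected abstract String getServerContext();

  void logAuditEvent(boolean allowed, String ugi, String cmd, String src) {
    // Common audit line used by both servers.
    System.out.printf("allowed=%b ugi=%s cmd=%s src=%s server=%s%n",
        allowed, ugi, cmd, src, getServerContext());
  }
}

class RouterAuditLogger extends AbstractAuditLogger {
  @Override
  protected String getServerContext() {
    return "router-1.example.com:8888"; // placeholder for the local Router id
  }
}
{code}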

> RBF: Router audit logger
> 
>
> Key: HDFS-13270
> URL: https://issues.apache.org/jira/browse/HDFS-13270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: hemanthboyina
>Priority: Major
>
> We can use a Router audit logger to log the client info and command, because 
> FSNamesystem's audit logger records every client as coming from the Router.






[jira] [Comment Edited] (HDFS-14483) Backport HDFS-3246 ByteBuffer pread interface to branch-2.9

2019-06-29 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875514#comment-16875514
 ] 

Lisheng Sun edited comment on HDFS-14483 at 6/29/19 1:50 PM:
-

Thanks [~stack] for your comments. This patch depends on getting HDFS-14585 
merged first, so after HDFS-14585 is merged to branch-2.9, I will update this 
patch for branch-2.9. Thank you.


was (Author: leosun08):
Thanks [~stack] for your comments. This patch is blocked by HDFS-14585, so 
after HDFS-14585 is merged to branch-2.9, I will update this patch for 
branch-2.9. Thank you.

> Backport HDFS-3246 ByteBuffer pread interface to branch-2.9
> ---
>
> Key: HDFS-14483
> URL: https://issues.apache.org/jira/browse/HDFS-14483
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Zheng Hu
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14483.branch-2.8.v1.patch
>
>







[jira] [Commented] (HDFS-14483) Backport HDFS-3246 ByteBuffer pread interface to branch-2.9

2019-06-29 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875514#comment-16875514
 ] 

Lisheng Sun commented on HDFS-14483:


Thanks [~stack] for your comments. This patch is blocked by HDFS-14585, so 
after HDFS-14585 is merged to branch-2.9, I will update this patch for 
branch-2.9. Thank you.

> Backport HDFS-3246 ByteBuffer pread interface to branch-2.9
> ---
>
> Key: HDFS-14483
> URL: https://issues.apache.org/jira/browse/HDFS-14483
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Zheng Hu
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14483.branch-2.8.v1.patch
>
>







[jira] [Updated] (HDFS-14483) Backport HDFS-3246 ByteBuffer pread interface to branch-2.9

2019-06-29 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14483:
---
Summary: Backport HDFS-3246 ByteBuffer pread interface to branch-2.9  (was: 
Backport HDFS-3246 ByteBuffer pread interface to branch-2.8.x)

> Backport HDFS-3246 ByteBuffer pread interface to branch-2.9
> ---
>
> Key: HDFS-14483
> URL: https://issues.apache.org/jira/browse/HDFS-14483
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Zheng Hu
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14483.branch-2.8.v1.patch
>
>







[jira] [Commented] (HDFS-14586) Trash missing delete the folder which near timeout checkpoint

2019-06-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875490#comment-16875490
 ] 

Hadoop QA commented on HDFS-14586:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 5 new + 44 unchanged - 0 fixed = 49 total (was 44) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m  1s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestTrash |
|   | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14586 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973239/HDFS-14586.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 11d396c5b50e 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d203045 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27114/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27114/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27114/testReport/ |
| Max. process+thread count | 1331 (vs. 

[jira] [Commented] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-06-29 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875483#comment-16875483
 ] 

hemanthboyina commented on HDFS-14284:
--

[~elgoiri] if we implement HDFS-13270 (Router Audit Logger), then this may 
solve your issue,

as we would then know from which Router (IP) the exception occurred.

> RBF: Log Router identifier when reporting exceptions
> 
>
> Key: HDFS-14284
> URL: https://issues.apache.org/jira/browse/HDFS-14284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> The typical setup is to use multiple Routers through 
> ConfiguredFailoverProxyProvider.
> In a regular HA Namenode setup, it is easy to know which NN was used.
> However, in RBF, any Router can be the one reporting the exception and it is 
> hard to know which was the one.
> We should have a way to identify which Router/Namenode was the one triggering 
> the exception.
> This would also apply with Observer Namenodes.
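
A hedged sketch of the idea (routerId and invokeNamenode are hypothetical 
names, not the actual RBF code):
{code:java}
// Hedged sketch: tag the exception with the Router's identity before
// rethrowing, so clients can tell which Router reported it.
try {
  return invokeNamenode(method);   // hypothetical downstream call
} catch (IOException ioe) {
  throw new IOException("Router " + routerId + ": " + ioe.getMessage(), ioe);
}
{code}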






[jira] [Commented] (HDFS-14257) RBF : NPE when given the Invalid path to create target dir

2019-06-29 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875463#comment-16875463
 ] 

hemanthboyina commented on HDFS-14257:
--

[~elgoiri] is the defect a valid one?

In the command, a space was given in the middle of the directory path 
(hdfs://hacluster2 /hacluster1), so the shell took it as two different 
arguments, and we then ask for the parent of "hdfs://hacluster2" in the code, 
which is null:
{code:java}
item.fs.exists(new Path(item.path.toString()).getParent()){code}
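
A hedged sketch of the kind of guard that would avoid the NPE (illustrative 
only, reusing the names from the snippet above; not a proposed patch):
{code:java}
// Hedged sketch: "hdfs://hacluster2" is a root URI, so getParent() returns
// null, and passing that null to FileSystem.exists() ends in the NPE below.
Path parent = new Path(item.path.toString()).getParent();
if (parent == null) {
  // Fail with a clear error instead of a NullPointerException.
  throw new PathNotFoundException(item.path.toString());
}
if (!item.fs.exists(parent)) {
  throw new PathNotFoundException(parent.toString());
}
{code}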
 

> RBF : NPE when given the Invalid path to create target dir
> --
>
> Key: HDFS-14257
> URL: https://issues.apache.org/jira/browse/HDFS-14257
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Harshakiran Reddy
>Assignee: venkata ramkumar
>Priority: Major
>  Labels: RBF
>
> bin> ./hdfs dfs -mkdir hdfs://{color:red}hacluster2 /hacluster1{color}dest2/
> {noformat}
> -mkdir: Fatal internal error
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileSystem.fixRelativePart(FileSystem.java:2714)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.fixRelativePart(DistributedFileSystem.java:3229)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1618)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742)
> at 
> org.apache.hadoop.fs.shell.Mkdir.processNonexistentPath(Mkdir.java:74)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:287)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:121)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> bin>
> {noformat}






[jira] [Assigned] (HDFS-13254) RBF: Cannot mv/cp file cross namespace

2019-06-29 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-13254:


Assignee: hemanthboyina

> RBF: Cannot mv/cp file cross namespace
> --
>
> Key: HDFS-13254
> URL: https://issues.apache.org/jira/browse/HDFS-13254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: hemanthboyina
>Priority: Major
>
> When I try to mv a file from one namespace to another, the client returns an 
> error.
>  
> Do we have any plan to support cp/mv of files across namespaces?






[jira] [Updated] (HDFS-14586) Trash missing delete the folder which near timeout checkpoint

2019-06-29 Thread hu yongfa (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hu yongfa updated HDFS-14586:
-
Attachment: HDFS-14586.002.patch

> Trash missing delete the folder which near timeout checkpoint
> -
>
> Key: HDFS-14586
> URL: https://issues.apache.org/jira/browse/HDFS-14586
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hu yongfa
>Assignee: hu yongfa
>Priority: Major
> Attachments: HDFS-14586.001.patch, HDFS-14586.002.patch
>
>
> When the trash checkpoint timeout arrives, trash deletes the old checkpoint 
> folder first and then creates a new one.
> As the delete may take a long time, such as 2 minutes, the new checkpoint 
> folder is created late.
> At the next checkpoint timeout, trash skips deleting that folder, because it 
> is less than a full checkpoint interval old.






[jira] [Commented] (HDFS-14586) Trash missing delete the folder which near timeout checkpoint

2019-06-29 Thread hu yongfa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875442#comment-16875442
 ] 

hu yongfa commented on HDFS-14586:
--

[~hexiaoqiao] good idea

Maybe it still needs to get the current time twice (see the sketch below):

1. before calculating the {{end}} time, which avoids sleeping longer than 
expected;

2. before the delete/create checkpoint loop, so the current time can be checked 
against the end time.

[^HDFS-14586.002.patch]

 

> Trash missing delete the folder which near timeout checkpoint
> -
>
> Key: HDFS-14586
> URL: https://issues.apache.org/jira/browse/HDFS-14586
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hu yongfa
>Assignee: hu yongfa
>Priority: Major
> Attachments: HDFS-14586.001.patch, HDFS-14586.002.patch
>
>
> When the trash checkpoint timeout arrives, trash deletes the old checkpoint 
> folder first and then creates a new one.
> As the delete may take a long time, such as 2 minutes, the new checkpoint 
> folder is created late.
> At the next checkpoint timeout, trash skips deleting that folder, because it 
> is less than a full checkpoint interval old.






[jira] [Work logged] (HDDS-1716) Smoketest results are generated with an internal user

2019-06-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1716?focusedWorklogId=269705=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269705
 ]

ASF GitHub Bot logged work on HDDS-1716:


Author: ASF GitHub Bot
Created on: 29/Jun/19 08:14
Start Date: 29/Jun/19 08:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1002: HDDS-1716. 
Smoketest results are generated with an internal user
URL: https://github.com/apache/hadoop/pull/1002#issuecomment-506938589
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 474 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 839 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 437 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 2 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 745 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 111 | hadoop-hdds in the patch passed. |
   | +1 | unit | 187 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 3112 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1002/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1002 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 3dbb9f3b1d3f 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d203045 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1002/2/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1002/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 269705)
Time Spent: 1h  (was: 50m)

> Smoketest results are generated with an internal user
> -
>
> Key: HDDS-1716
> URL: https://issues.apache.org/jira/browse/HDDS-1716
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [~eyang] reported the problem in HDDS-1609 that the smoketest results are 
> generated by a user (the user inside the docker container) which can be 
> different from the host user.
> There is a minimal risk that the test results can be deleted/corrupted by 
> other users if the current user's uid is different from 1000.
> I opened this issue because [~eyang] told me during an offline discussion 
> that HDDS-1609 is a more complex issue and not only about the ownership of 
> the test results.
> I suggest handling the two problems in different ways. With this patch, the 
> permissions of the test result files can be fixed easily.
> In HDDS-1609 we can discuss the general security problems and try to find a 
> generic solution for them.
> Steps to reproduce _this_ problem:
>  # Use a user which is different from uid=1000
>  # Create a new ozone build (mvn clean install -f pom.ozone.xml -DskipTests)
>  # Go to a compose directory (cd 

[jira] [Commented] (HDFS-12748) NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY

2019-06-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875409#comment-16875409
 ] 

Hadoop QA commented on HDFS-12748:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12748 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973229/HDFS-12748.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7f11c27406e2 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 

[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-29 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875405#comment-16875405
 ] 

Elek, Marton commented on HDDS-1735:


To be honest I am a little bit confused.
 # Can you please clarify what your suggestion is? How should I improve the 
patch?
 # Can you please confirm that this patch doesn't block you in any of your 
work? (This is a fix for the _existing_ shell scripts; it doesn't modify any 
part of the maven build, it just helps run certain build steps with maven.)

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper 
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They easily define how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and the integration tests can be 
> separated. Integration tests are more flaky and it's better to have a way to 
> run only the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead of 
> the magical "-am -pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers we should use 
> the -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.






[jira] [Commented] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-06-29 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875402#comment-16875402
 ] 

Mukul Kumar Singh commented on HDDS-1555:
-

+1, the latest patch looks good to me. Will take care of the checkstyle issue 
while committing.

> Disable install snapshot for ContainerStateMachine
> --
>
> Key: HDDS-1555
> URL: https://issues.apache.org/jira/browse/HDDS-1555
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> In case a follower lags behind the leader by a large number of log entries, 
> the leader tries to send a snapshot to the follower. For 
> ContainerStateMachine, the information in the snapshot is not the entire 
> state machine data, so InstallSnapshot for ContainerStateMachine should be 
> disabled.
> {code}
> 2019-05-19 10:58:22,198 WARN  server.GrpcLogAppender 
> (GrpcLogAppender.java:installSnapshot(423)) - 
> GrpcLogAppender(e3e19760-1340-4acd-b50d-f8a796a97254->28d9bd2f-3fe2-4a69-8120-757a00fa2f20):
>  failed to install snapshot 
> [/Users/msingh/code/apache/ozone/github/git_oz_bugs_fixes/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-c2a863ef-8be9-445c-886f-57cad3a7b12e/datanode-6/data/ratis/fb88b749-3e75-4381-8973-6e0cb4904c7e/sm/snapshot.2_190]:
>  {}
> java.lang.NullPointerException
> at 
> org.apache.ratis.server.impl.LogAppender.readFileChunk(LogAppender.java:369)
> at 
> org.apache.ratis.server.impl.LogAppender.access$1100(LogAppender.java:54)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:318)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:303)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.installSnapshot(GrpcLogAppender.java:412)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:101)
> at 
> org.apache.ratis.server.impl.LogAppender$AppenderDaemon.run(LogAppender.java:80)
> at java.lang.Thread.run(Thread.java:748)
> {code}
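
For reference, a hedged sketch of how install-snapshot could be switched off 
through the Ratis server properties (assuming Ratis exposes such a setter; the 
actual change may differ):
{code:java}
import org.apache.ratis.conf.RaftProperties;
import org.apache.ratis.server.RaftServerConfigKeys;

// Hedged sketch: tell the leader's log appender not to install snapshots
// on lagging followers (assumed Ratis configuration API).
RaftProperties properties = new RaftProperties();
RaftServerConfigKeys.Log.Appender.setInstallSnapshotEnabled(properties, false);
{code}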






[jira] [Work logged] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-06-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?focusedWorklogId=269689=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269689
 ]

ASF GitHub Bot logged work on HDDS-1555:


Author: ASF GitHub Bot
Created on: 29/Jun/19 06:30
Start Date: 29/Jun/19 06:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #846: HDDS-1555. 
Disable install snapshot for ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846#issuecomment-506932338
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 77 | Maven dependency ordering for branch |
   | +1 | mvninstall | 485 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 867 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | trunk passed |
   | 0 | spotbugs | 318 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 507 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for patch |
   | +1 | mvninstall | 451 | the patch passed |
   | +1 | compile | 263 | the patch passed |
   | +1 | javac | 263 | the patch passed |
   | -0 | checkstyle | 38 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 5 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 697 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 521 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 237 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1068 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6192 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/846 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 6c293bf07f18 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d203045 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/11/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/11/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/11/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/11/testReport/ |
   | Max. process+thread count | 4878 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/client hadoop-hdds/common 
hadoop-hdds/config hadoop-hdds/container-service hadoop-ozone 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/11/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 
