[jira] [Commented] (HDFS-14683) WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY response

2019-08-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898587#comment-16898587
 ] 

Hudson commented on HDFS-14683:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17022 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17022/])
HDFS-14683. WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY 
(weichiu: rev 99bf1dc9eb18f9b4d0338986d1b8fd2232f1232f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java


> WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY response
> 
>
> Key: HDFS-14683
> URL: https://issues.apache.org/jira/browse/HDFS-14683
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
>
> Quote [~jojochuang]'s 
> [comment|https://issues.apache.org/jira/browse/HDFS-14034?focusedCommentId=16880062&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16880062]:
> {quote}
> ContentSummary has a field erasureCodingPolicy which was added in HDFS-11647, 
> but webhdfs GETCONTENTSUMMARY doesn't include that.
> {quote}
> Examples:
> {code:json|title=Directory, Before}
> GET /webhdfs/v1/tmp/?op=GETCONTENTSUMMARY HTTP/1.1
> {
>   "ContentSummary": {
> "directoryCount": 15,
> "fileCount": 1,
> "length": 180838,
> "quota": -1,
> "spaceConsumed": 542514,
> "spaceQuota": -1,
> "typeQuota": {}
>   }
> }
> {code}
> {code:json|title=Directory, After, With EC policy RS-6-3-1024k set}
> GET /webhdfs/v1/tmp/?op=GETCONTENTSUMMARY HTTP/1.1
> {
>   "ContentSummary": {
> "directoryCount": 15,
> "ecPolicy": "RS-6-3-1024k",
> "fileCount": 1,
> "length": 180838,
> "quota": -1,
> "spaceConsumed": 542514,
> "spaceQuota": -1,
> "typeQuota": {}
>   }
> }
> {code}
> {code:json|title=Directory, After, No EC policy set}
> GET /webhdfs/v1/tmp/?op=GETCONTENTSUMMARY HTTP/1.1
> {
>   "ContentSummary": {
> "directoryCount": 15,
> "ecPolicy": "",
> "fileCount": 1,
> "length": 180838,
> "quota": -1,
> "spaceConsumed": 542514,
> "spaceQuota": -1,
> "typeQuota": {}
>   }
> }
> {code}
> {code:json|title=File, After, No EC policy set}
> GET /webhdfs/v1/tmp/file?op=GETCONTENTSUMMARY HTTP/1.1
> {
>   "ContentSummary": {
> "directoryCount": 0,
> "ecPolicy": "Replicated",
> "fileCount": 1,
> "length": 29,
> "quota": -1,
> "spaceConsumed": 29,
> "spaceQuota": -1,
> "typeQuota": {}
>   }
> }
> {code}
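> A minimal sketch of the server-side change in JsonUtil (method shape and map
> type are assumptions; the committed code may differ):
> {code:java}
> // org.apache.hadoop.hdfs.web.JsonUtil: when serializing a ContentSummary,
> // emit the EC policy next to the existing quota/usage fields.
> private static Map<String, Object> toJsonMap(final ContentSummary cs) {
>   final Map<String, Object> m = new TreeMap<>();
>   m.put("directoryCount", cs.getDirectoryCount());
>   m.put("ecPolicy", cs.getErasureCodingPolicy()); // field added by HDFS-11647
>   m.put("fileCount", cs.getFileCount());
>   m.put("length", cs.getLength());
>   m.put("quota", cs.getQuota());
>   m.put("spaceConsumed", cs.getSpaceConsumed());
>   m.put("spaceQuota", cs.getSpaceQuota());
>   return m; // typeQuota serialization omitted for brevity
> }
> {code}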



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13131) Modifying testcase testEnableAndDisableErasureCodingPolicy

2019-08-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898590#comment-16898590
 ] 

Hudson commented on HDFS-13131:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17022 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17022/])
HDFS-13131. Modifying testcase testEnableAndDisableErasureCodingPolicy. 
(weichiu: rev c2d00c84508ac9af2272843547046a27b23f3bb5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java


> Modifying testcase testEnableAndDisableErasureCodingPolicy
> --
>
> Key: HDFS-13131
> URL: https://issues.apache.org/jira/browse/HDFS-13131
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13131.patch
>
>
> In the test case testEnableAndDisableErasureCodingPolicy in 
> TestDistributedFileSystem.java, when enabling or disabling an 
> ErasureCodingPolicy, we should query enabledPoliciesByName rather than 
> policiesByName to check whether the policy has been enabled or disabled 
> successfully.
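> A minimal sketch of the intended assertions (accessor names per trunk's 
> ErasureCodingPolicyManager; test wiring omitted):
> {code:java}
> fs.enableErasureCodingPolicy(policyName);
> // Query the enabled set, not the full policy map, to verify the transition.
> assertNotNull(ErasureCodingPolicyManager.getInstance()
>     .getEnabledPolicyByName(policyName));
> fs.disableErasureCodingPolicy(policyName);
> assertNull(ErasureCodingPolicyManager.getInstance()
>     .getEnabledPolicyByName(policyName));
> {code}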



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13529) Fix default trash policy emptier trigger time correctly

2019-08-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898583#comment-16898583
 ] 

Hudson commented on HDFS-13529:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17022 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17022/])
HDFS-13529. Fix default trash policy emptier trigger time correctly. (weichiu: 
rev f86de6f76a3079c2655df9b242fc968edfb17b9d)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java


> Fix default trash policy emptier trigger time correctly
> ---
>
> Key: HDFS-13529
> URL: https://issues.apache.org/jira/browse/HDFS-13529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.0, 2.7.6, 3.2.0, 2.9.2, 2.8.5
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13529-trunk.001.patch
>
>
> The trash emptier is designed to trigger automatically at UTC 00:00; however, 
> in our production cluster it usually triggers a few minutes, or even half an 
> hour, after UTC 00:00.
> The main reason is that the default policy emptier thread sleeps longer than 
> expected, since it does not account for the time the delete operations 
> themselves take; on a large cluster, the trash cleanup may cost dozens of 
> minutes.
> The right fix is to get the current time {{now}} before calculating the 
> {{end}} time.
> {code:java}
>   long now = Time.now();
>   while (true) {
> end = ceiling(now, emptierInterval);
> try { // sleep for interval
>   Thread.sleep(end - now);
> } catch (InterruptedException e) {
>   break;  // exit on interrupt
> }
> try {
>   now = Time.now();
>   .. // delete trash checkpoint
> } catch (Exception e) {
>   LOG.warn("RuntimeException during Trash.Emptier.run(): ", e); 
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14686) HttpFS: HttpFSFileSystem#getErasureCodingPolicy always returns null

2019-08-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898588#comment-16898588
 ] 

Hudson commented on HDFS-14686:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17022 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17022/])
HDFS-14686. HttpFS: HttpFSFileSystem#getErasureCodingPolicy always (weichiu: 
rev 17e8cf501b384af93726e4f2e6f5e28c6e3a8f65)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java


> HttpFS: HttpFSFileSystem#getErasureCodingPolicy always returns null
> ---
>
> Key: HDFS-14686
> URL: https://issues.apache.org/jira/browse/HDFS-14686
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
>
> The root cause is that *FSOperations#contentSummaryToJSON* doesn't serialize 
> *ContentSummary.erasureCodingPolicy* into the JSON.
> The expected behavior is that *HttpFSFileSystem#getErasureCodingPolicy* 
> should return at least "" (empty string, for directories or symlinks), 
> "Replicated" (for non-EC files), or an EC policy name such as "RS-6-3-1024k".
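> A rough sketch of the HttpFS server-side fix (JSON key name assumed to match 
> the WebHDFS response):
> {code:java}
> // In FSOperations#contentSummaryToJSON: include the EC policy so the client
> // side (HttpFSFileSystem#getErasureCodingPolicy) no longer reads a null key.
> json.put("ecPolicy", contentSummary.getErasureCodingPolicy());
> {code}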



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.

2019-08-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898584#comment-16898584
 ] 

Hudson commented on HDFS-14631:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17022 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17022/])
HDFS-14631.The DirectoryScanner doesn't fix the wrongly placed replica. 
(weichiu: rev 32607dbd98a7ab70741a2efc98eff548c1e431c1)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/LocalReplica.java


> The DirectoryScanner doesn't fix the wrongly placed replica.
> 
>
> Key: HDFS-14631
> URL: https://issues.apache.org/jira/browse/HDFS-14631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14631.001.patch, HDFS-14631.002.patch, 
> HDFS-14631.003.patch, HDFS-14631.004.patch
>
>
> When the DirectoryScanner scans block files, if a block refers to a block 
> file that does not exist, the DirectoryScanner updates the block based on 
> the replica file found on the disk. See FsDatasetImpl#checkAndUpdate.
>  
> {code:java}
> /*
> * Block exists in volumeMap and the block file exists on the disk
> */
> // Compare block files
> if (memBlockInfo.blockDataExists()) {
>   ...
> } else {
>   // Block refers to a block file that does not exist.
>   // Update the block with the file found on the disk. Since the block
>   // file and metadata file are found as a pair on the disk, update
>   // the block based on the metadata file found on the disk
>   LOG.warn("Block file in replica "
>   + memBlockInfo.getBlockURI()
>   + " does not exist. Updating it to the file found during scan "
>   + diskFile.getAbsolutePath());
>   memBlockInfo.updateWithReplica(
>   StorageLocation.parse(diskFile.toString()));
>   LOG.warn("Updating generation stamp for block " + blockId
>   + " from " + memBlockInfo.getGenerationStamp() + " to " + diskGS);
>   memBlockInfo.setGenerationStamp(diskGS);
> }
> {code}
> But the DirectoryScanner doesn't really fix it, because 
> LocalReplica#parseBaseDir() ignores the 'subdir' path components.
>  
>  
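> A rough sketch of the kind of fix needed in LocalReplica (helper name 
> hypothetical): walk upward past any subdir components so the parsed base 
> directory matches where the replica file actually lives:
> {code:java}
> // Strip trailing subdirN components, so a replica found under
> // .../finalized/subdir0/subdir1/blk_xxx resolves to the same base
> // directory as one stored directly under .../finalized.
> private static File getBaseDir(File dir) {
>   File currentDir = dir;
>   while (currentDir.getName().startsWith(DataStorage.BLOCK_SUBDIR_PREFIX)) {
>     currentDir = currentDir.getParentFile();
>   }
>   return currentDir;
> }
> {code}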



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-08-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898586#comment-16898586
 ] 

Hudson commented on HDFS-14652:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17022 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17022/])
HDFS-14652. HealthMonitor connection retry times should be configurable. 
(weichiu: rev d086d058d87ecb94fc750ba6f3ccae522658ac80)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HealthMonitor.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/DummyHAService.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java


> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch
>
>
> On our production HDFS cluster, bursts of client requests filled up the TCP 
> kernel queue on the NameNode's host. Since "net.ipv4.tcp_syn_retries" is set 
> to 1 in our environment, after 3 seconds the ZooKeeper HealthMonitor got a 
> connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster; 
> we fixed the issue by raising the kernel parameter net.ipv4.tcp_syn_retries 
> to 6.
> While working on this issue, we found that the connection retry count 
> (ipc.client.connect.max.retries) of the HealthMonitor is hard-coded to 1. It 
> should be configurable: if we don't want the HealthMonitor to be so 
> sensitive, we can change its behavior through this configuration.
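> With the patch, the retry count becomes tunable. A hedged example of the new 
> setting (key name per the patch's CommonConfigurationKeys change; treat it 
> as an assumption and verify against the committed code):
> {code:xml}
> <property>
>   <name>ha.health-monitor.rpc.connect.max.retries</name>
>   <value>3</value>
>   <description>Number of IPC connect retries the ZKFC HealthMonitor uses
>   when probing NameNode health; previously hard-coded to 1.</description>
> </property>
> {code}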



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-08-01 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898575#comment-16898575
 ] 

Bharat Viswanadham edited comment on HDFS-14652 at 8/2/19 5:37 AM:
---

This newly added property was not added to core-site.xml. Do we want to fix 
this?

cc [~jojochuang]


was (Author: bharatviswa):
This newly added property was not added to core-site.xml.

cc [~jojochuang]

> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch
>
>
> On our production HDFS cluster, bursts of client requests filled up the TCP 
> kernel queue on the NameNode's host. Since "net.ipv4.tcp_syn_retries" is set 
> to 1 in our environment, after 3 seconds the ZooKeeper HealthMonitor got a 
> connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster; 
> we fixed the issue by raising the kernel parameter net.ipv4.tcp_syn_retries 
> to 6.
> While working on this issue, we found that the connection retry count 
> (ipc.client.connect.max.retries) of the HealthMonitor is hard-coded to 1. It 
> should be configurable: if we don't want the HealthMonitor to be so 
> sensitive, we can change its behavior through this configuration.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-08-01 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898575#comment-16898575
 ] 

Bharat Viswanadham commented on HDFS-14652:
---

This newly added property was not added to core-site.xml.

cc [~jojochuang]

> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch
>
>
> On our production HDFS cluster, bursts of client requests filled up the TCP 
> kernel queue on the NameNode's host. Since "net.ipv4.tcp_syn_retries" is set 
> to 1 in our environment, after 3 seconds the ZooKeeper HealthMonitor got a 
> connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster; 
> we fixed the issue by raising the kernel parameter net.ipv4.tcp_syn_retries 
> to 6.
> While working on this issue, we found that the connection retry count 
> (ipc.client.connect.max.retries) of the HealthMonitor is hard-coded to 1. It 
> should be configurable: if we don't want the HealthMonitor to be so 
> sensitive, we can change its behavior through this configuration.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1843) Undetectable corruption after restart of a datanode

2019-08-01 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1843:
--
Priority: Critical  (was: Major)

> Undetectable corruption after restart of a datanode
> ---
>
> Key: HDDS-1843
> URL: https://issues.apache.org/jira/browse/HDDS-1843
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Hrishikesh Gadre
>Priority: Critical
> Fix For: 0.5.0
>
>
> Right now, all chunk writes use buffered IO, i.e. the sync flag is disabled 
> by default. Also, RocksDB metadata updates are first applied to the RocksDB 
> cache on the Datanode. If both the buffered chunk data and the corresponding 
> metadata update are lost across a datanode restart, it may become impossible 
> to detect corruption of this nature (not even with the container scanner) in 
> a reasonable time frame, unless there is a client IO failure or the Recon 
> server detects it over time. To at least detect the problem, the Ratis 
> snapshot on the datanode should sync the RocksDB file; that way, the 
> ContainerScanner will be able to detect it. We can also add a metric around 
> the sync to measure how much throughput loss it incurs.
> Thanks [~msingh] for suggesting this.
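> A hedged sketch of the proposed sync point (RocksDB's Java API; the exact 
> call site in the Ratis snapshot path and the metric name are assumptions):
> {code:java}
> // On Ratis snapshot at the datanode, force the RocksDB WAL to disk so the
> // metadata survives an abrupt restart and the ContainerScanner can detect
> // any mismatch with the chunk data.
> long start = Time.monotonicNow();
> db.flushWal(true); // true => fsync the WAL
> metrics.addRocksDbSyncLatency(Time.monotonicNow() - start); // hypothetical
> {code}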



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1895:
-
Description: +HDDS-1540+ adds 4 new api for Ozone rpc client. OM HA 
implementation needs to handle them.  (was: -HDDS-15+40+- adds 4 new api for 
Ozone rpc client. OM HA implementation needs to handle them.)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> +HDDS-1540+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1895:
-
Description: +HDDS-1541+ adds 4 new api for Ozone rpc client. OM HA 
implementation needs to handle them.  (was: +HDDS-1540+ adds 4 new api for 
Ozone rpc client. OM HA implementation needs to handle them.)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1895:
-
Labels:   (was: pull-request-available)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> -HDDS-15+40+- adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-01 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1895:


 Summary: Support Key ACL operations for OM HA.
 Key: HDDS-1895
 URL: https://issues.apache.org/jira/browse/HDDS-1895
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


-HDDS-15+40+- adds 4 new APIs for the Ozone RPC client. The OM HA 
implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14694) Call recoverLease on DFSOutputStream close exception

2019-08-01 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang reassigned HDFS-14694:
-

Assignee: Chen Zhang

> Call recoverLease on DFSOutputStream close exception
> 
>
> Key: HDFS-14694
> URL: https://issues.apache.org/jira/browse/HDFS-14694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
>
> HDFS uses file leases to manage open files: when a file is not closed 
> normally, the NN recovers the lease automatically after the hard limit is 
> exceeded. But for a long-running service (e.g. HBase), the HDFS client never 
> dies, so the NN never gets a chance to recover the file.
> Usually the client program needs to handle exceptions itself to avoid this 
> condition (e.g. HBase automatically calls recoverLease for files that were 
> not closed normally), but in our experience most services (in our company) 
> don't handle this condition properly, which leaves lots of files in an 
> abnormal state or even causes data loss.
> This Jira proposes a feature that calls the recoverLease operation 
> automatically when DFSOutputStream close encounters an exception. It should 
> be disabled by default, but anyone building a long-running service on HDFS 
> can enable it.
> We've had this feature in our internal Hadoop distribution for more than 3 
> years; it has been quite useful in our experience.
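> A hedged sketch of the proposed client-side behavior (the config key is 
> hypothetical; DistributedFileSystem#recoverLease is the existing API):
> {code:java}
> try {
>   out.close();
> } catch (IOException e) {
>   // Hypothetical opt-in flag; disabled by default.
>   if (conf.getBoolean("dfs.client.recover-lease-on-close-exception", false)) {
>     // Ask the NN to begin lease recovery so the file doesn't stay open
>     // until the hard limit expires.
>     dfs.recoverLease(path);
>   }
>   throw e;
> }
> {code}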



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14694) Call recoverLease on DFSOutputStream close exception

2019-08-01 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14694:
--
Description: 
HDFS uses file leases to manage open files: when a file is not closed 
normally, the NN recovers the lease automatically after the hard limit is 
exceeded. But for a long-running service (e.g. HBase), the HDFS client never 
dies, so the NN never gets a chance to recover the file.

Usually the client program needs to handle exceptions itself to avoid this 
condition (e.g. HBase automatically calls recoverLease for files that were 
not closed normally), but in our experience most services (in our company) 
don't handle this condition properly, which leaves lots of files in an 
abnormal state or even causes data loss.

This Jira proposes a feature that calls the recoverLease operation 
automatically when DFSOutputStream close encounters an exception. It should 
be disabled by default, but anyone building a long-running service on HDFS 
can enable it.

We've had this feature in our internal Hadoop distribution for more than 3 
years; it has been quite useful in our experience.

  was:
HDFS uses file leases to manage open files: when a file is not closed 
normally, the NN recovers the lease automatically after the hard limit is 
exceeded. But for a long-running service (e.g. HBase), the HDFS client never 
dies, so the NN never gets a chance to recover the file.

Usually the client needs to handle exceptions to avoid this condition (e.g. 
HBase will automatically recover the lease for files that were not closed 
normally), but in our experience, most services (in our company) don't handle 
this condition properly, which leaves lots of files in an abnormal state and 
even causes data loss.

This Jira proposes a feature that calls the recoverLease operation 
automatically when DFSOutputStream close encounters an exception. It should 
be disabled by default, but anyone building a long-running service on HDFS 
can enable it.

We've had this feature in our internal distribution for more than 3 years; it 
has been quite useful in our experience.


> Call recoverLease on DFSOutputStream close exception
> 
>
> Key: HDFS-14694
> URL: https://issues.apache.org/jira/browse/HDFS-14694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Chen Zhang
>Priority: Major
>
> HDFS uses file leases to manage open files: when a file is not closed 
> normally, the NN recovers the lease automatically after the hard limit is 
> exceeded. But for a long-running service (e.g. HBase), the HDFS client never 
> dies, so the NN never gets a chance to recover the file.
> Usually the client program needs to handle exceptions itself to avoid this 
> condition (e.g. HBase automatically calls recoverLease for files that were 
> not closed normally), but in our experience most services (in our company) 
> don't handle this condition properly, which leaves lots of files in an 
> abnormal state or even causes data loss.
> This Jira proposes a feature that calls the recoverLease operation 
> automatically when DFSOutputStream close encounters an exception. It should 
> be disabled by default, but anyone building a long-running service on HDFS 
> can enable it.
> We've had this feature in our internal Hadoop distribution for more than 3 
> years; it has been quite useful in our experience.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14694) Call recoverLease on DFSOutputStream close exception

2019-08-01 Thread Chen Zhang (JIRA)
Chen Zhang created HDFS-14694:
-

 Summary: Call recoverLease on DFSOutputStream close exception
 Key: HDFS-14694
 URL: https://issues.apache.org/jira/browse/HDFS-14694
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Chen Zhang


HDFS uses file leases to manage open files: when a file is not closed 
normally, the NN recovers the lease automatically after the hard limit is 
exceeded. But for a long-running service (e.g. HBase), the HDFS client never 
dies, so the NN never gets a chance to recover the file.

Usually the client needs to handle exceptions to avoid this condition (e.g. 
HBase will automatically recover the lease for files that were not closed 
normally), but in our experience, most services (in our company) don't handle 
this condition properly, which leaves lots of files in an abnormal state and 
even causes data loss.

This Jira proposes a feature that calls the recoverLease operation 
automatically when DFSOutputStream close encounters an exception. It should 
be disabled by default, but anyone building a long-running service on HDFS 
can enable it.

We've had this feature in our internal distribution for more than 3 years; it 
has been quite useful in our experience.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1619) Support volume acl operations for OM HA.

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1619?focusedWorklogId=287477&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287477
 ]

ASF GitHub Bot logged work on HDDS-1619:


Author: ASF GitHub Bot
Created on: 02/Aug/19 04:48
Start Date: 02/Aug/19 04:48
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1147: 
HDDS-1619. Support volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#discussion_r309980204
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/volume/OMVolumeAclOpResponse.java
 ##
 @@ -0,0 +1,63 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.volume;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import java.io.IOException;
+
+/**
+ * Response for om volume acl operation request.
+ */
+public class OMVolumeAclOpResponse extends OMClientResponse {
+
+  private OmVolumeArgs omVolumeArgs;
+
+  public OMVolumeAclOpResponse(OmVolumeArgs omVolumeArgs,
+  OMResponse omResponse) {
+super(omResponse);
+this.omVolumeArgs = omVolumeArgs;
+  }
+
+  @Override
+  public void addToDBBatch(OMMetadataManager omMetadataManager,
+  BatchOperation batchOperation) throws IOException {
+
+// For OmResponse with failure, this should do nothing. This method is
+// not called in failure scenario in OM code.
+if (getOMResponse().getStatus() == OzoneManagerProtocolProtos.Status.OK) {
+  omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
 
 Review comment:
   Here we need to check the OMResponse success flag as well: for an 
already-existing acl, we should set the OMResponse's success to false, and in 
that case nothing needs to be added to the DB.
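A hedged sketch of the suggested guard (the success flag accessor is assumed 
from the OMResponse proto):

{code:java}
// Only persist the volume args when the acl op actually changed state;
// a failed or no-op acl request should leave the DB batch untouched.
if (getOMResponse().getStatus() == OzoneManagerProtocolProtos.Status.OK
    && getOMResponse().getSuccess()) {
  omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
      omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()), omVolumeArgs);
}
{code}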
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287477)
Time Spent: 7.5h  (was: 7h 20m)

> Support volume acl operations for OM HA.
> 
>
> Key: HDDS-1619
> URL: https://issues.apache.org/jira/browse/HDDS-1619
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> [HDDS-1539] adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898554#comment-16898554
 ] 

Hadoop QA commented on HDFS-14350:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
8s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-582/4/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/582 |
| JIRA Issue | HDFS-14350 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux d40cb95b3f21 4.15.0-54-generic 

[jira] [Work logged] (HDDS-1619) Support volume acl operations for OM HA.

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1619?focusedWorklogId=287472&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287472
 ]

ASF GitHub Bot logged work on HDDS-1619:


Author: ASF GitHub Bot
Created on: 02/Aug/19 04:44
Start Date: 02/Aug/19 04:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1147: 
HDDS-1619. Support volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#discussion_r309979423
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAclRequest.java
 ##
 @@ -0,0 +1,157 @@
+package org.apache.hadoop.ozone.om.request.volume.acl;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.scm.storage.CheckedBiFunction;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import java.io.IOException;
+import java.util.List;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.VOLUME_LOCK;
+
+/**
+ * Base class for OMVolumeAcl Request.
+ */
+public abstract class OMVolumeAclRequest extends OMClientRequest {
+
+  private CheckedBiFunction<List<OzoneAcl>, OmVolumeArgs, IOException>
+  omVolumeAclOp;
+
+  public OMVolumeAclRequest(OzoneManagerProtocolProtos.OMRequest omRequest,
+  CheckedBiFunction<List<OzoneAcl>, OmVolumeArgs, IOException> aclOp) {
+super(omRequest);
+omVolumeAclOp = aclOp;
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex,
+  OzoneManagerDoubleBufferHelper ozoneManagerDoubleBufferHelper) {
+// protobuf guarantees volume and acls are non-null.
+String volume = getVolumeName();
+List<OzoneAcl> ozoneAcls = getAcls();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeUpdates();
+OmVolumeArgs omVolumeArgs = null;
+
+OMResponse.Builder omResponse = onInit();
+OMClientResponse omClientResponse = null;
+IOException exception = null;
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+boolean lockAcquired = false;
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.WRITE_ACL,
+volume, null, null);
+  }
+  lockAcquired =
+  omMetadataManager.getLock().acquireLock(VOLUME_LOCK, volume);
+  String dbVolumeKey = omMetadataManager.getVolumeKey(volume);
+  omVolumeArgs = omMetadataManager.getVolumeTable().get(dbVolumeKey);
+  if (omVolumeArgs == null) {
+throw new OMException(OMException.ResultCodes.VOLUME_NOT_FOUND);
+  }
+
+  // result is false upon add existing acl or remove non-existing acl
+  boolean result = true;
+  try {
+omVolumeAclOp.apply(ozoneAcls, omVolumeArgs);
+  } catch (OMException ex) {
+result = false;
+  }
+
+  if (result) {
+// update cache.
+omMetadataManager.getVolumeTable().addCacheEntry(
+new CacheKey<>(dbVolumeKey),
+new CacheValue<>(Optional.of(omVolumeArgs), transactionLogIndex));
+  }
+
+  omClientResponse = onSuccess(omResponse, omVolumeArgs, result);
+} catch (IOException ex) {
+  exception = ex;
+  omMetrics.incNumVolumeUpdateFails();
+  omClientResponse = onFailure(omResponse, ex);
+} finally {
+  if (omClientResponse != null) {
+omClientResponse.setFlushFuture(
+ozoneManagerDoubleBufferHelper.add(omClientResponse,
+transactionLogIndex));
+  }
+  if (lockAcquired) {
+omMetadataManager.getLock().releaseLock(VOLUME_LOCK, volume);
+  }
+}
+
+onComplete(exception);
+
+return omClientResponse;
+  }
+
+  /**
+   * Get the Acls from the request.
+   * @return List of OzoneAcls, for add/remove it is a single element list
+   * for set it can be non-single element list.
+   */
+  abstract List<OzoneAcl> getAcls();

[jira] [Work logged] (HDDS-1619) Support volume acl operations for OM HA.

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1619?focusedWorklogId=287471&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287471
 ]

ASF GitHub Bot logged work on HDDS-1619:


Author: ASF GitHub Bot
Created on: 02/Aug/19 04:42
Start Date: 02/Aug/19 04:42
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1147: 
HDDS-1619. Support volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#discussion_r309979373
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAddAclRequest.java
 ##
 @@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.request.volume.acl;
+
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import org.apache.hadoop.hdds.scm.storage.CheckedBiFunction;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeAclOpResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Handles volume add acl request.
+ */
+public class OMVolumeAddAclRequest extends OMVolumeAclRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeAddAclRequest.class);
+
+  private static CheckedBiFunction<List<OzoneAcl>,
+  OmVolumeArgs, IOException> volumeAddAclOp;
+
+  static {
+volumeAddAclOp = (acls, volArgs) -> volArgs.addAcl(acls.get(0));
+  }
+
+  private List<OzoneAcl> ozoneAcls;
+  private String volumeName;
+
+  public OMVolumeAddAclRequest(OMRequest omRequest) {
+super(omRequest, volumeAddAclOp);
+OzoneManagerProtocolProtos.AddAclRequest addAclRequest =
+getOmRequest().getAddAclRequest();
+Preconditions.checkNotNull(addAclRequest);
+ozoneAcls = Lists.newArrayList(
+OzoneAcl.fromProtobuf(addAclRequest.getAcl()));
+volumeName = addAclRequest.getObj().getPath().substring(1);
+  }
+
+  @Override
+  public List<OzoneAcl> getAcls() {
+return ozoneAcls;
+  }
+
+  @Override
+  public String getVolumeName() {
+return volumeName;
+  }
+
+  private OzoneAcl getAcl() {
+return ozoneAcls.get(0);
+  }
+
+
+  @Override
+  OMResponse.Builder onInit() {
+return OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.AddAcl)
+.setStatus(OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+  }
+
+  @Override
+  OMClientResponse onSuccess(OMResponse.Builder omResponse,
+  OmVolumeArgs omVolumeArgs, boolean result){
+omResponse.setAddAclResponse(OzoneManagerProtocolProtos.AddAclResponse
+.newBuilder().setResponse(result).build());
+return new OMVolumeAclOpResponse(omVolumeArgs, omResponse.build());
+  }
+
+  @Override
+  OMClientResponse onFailure(OMResponse.Builder omResponse,
+  IOException ex) {
+return new OMVolumeAclOpResponse(null,
+createErrorOMResponse(omResponse, ex));
+  }
+
+  @Override
+  void onComplete(IOException ex) {
 
 Review comment:
   For onComplete we need to pass the result as well, since adding an 
already existing ACL is not a success.
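A hedged sketch of the suggested change (signature and log wording are 
assumptions):

{code:java}
@Override
void onComplete(boolean result, IOException ex) {
  // An acl that already exists makes the op a no-op: report it as a
  // failure rather than a success, even though no exception was thrown.
  if (result && ex == null) {
    LOG.debug("Add acl: {} to volume: {} success!", getAcl(), getVolumeName());
  } else {
    LOG.error("Add acl {} to volume {} failed", getAcl(), getVolumeName(), ex);
  }
}
{code}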
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287471)
Time Spent: 7h 10m  (was: 7h)

> Support volume acl operations for OM HA.
> 
>
> Key: HDDS-1619
>  

[jira] [Commented] (HDFS-14099) Unknown frame descriptor when decompressing multiple frames in ZStandardDecompressor

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898546#comment-16898546
 ] 

Hadoop QA commented on HDFS-14099:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
59s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
36s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-441/3/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/441 |
| JIRA Issue | HDFS-14099 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux ebfaa8d64fac 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c2d00c8 |
| Default Java | 1.8.0_212 |
| checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-441/3/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  

[jira] [Created] (HDDS-1894) Support listPipelines by filters in scmcli

2019-08-01 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1894:


 Summary: Support listPipelines by filters in scmcli
 Key: HDDS-1894
 URL: https://issues.apache.org/jira/browse/HDDS-1894
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Junjie Chen


Today scmcli has a subcommand that lists all pipelines. This ticket is opened 
to allow filtering the results by switches, e.g., by Factor: THREE and State: 
OPEN. This will be useful for troubleshooting in a large cluster.

 

{code}

bin/ozone scmcli listPipelines

Pipeline[ Id: a8d1b0c9-e1d4-49ea-8746-3f61dfb5ee3f, Nodes: 
cce44fde-bc8d-4063-97b3-6f557af756e1\{ip: 10.17.112.65, host: 
ia0230.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: null}, 
Type:RATIS, Factor:ONE, State:OPEN]
Pipeline[ Id: c9c453d1-d74c-4414-b87f-1d3585d78a7c, Nodes: 
0b7b0b93-8323-4b82-8cc0-a9a5c10ab827\{ip: 10.17.112.29, host: 
ia0138.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
null}c756a0e0-5a1b-4d03-ba5b-cafbcabac877\{ip: 10.17.112.27, host: 
ia0134.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
null}bee45bd7-1ee6-4726-b3d1-81476dc1eb49\{ip: 10.17.112.28, host: 
ia0136.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: null}, 
Type:RATIS, Factor:THREE, State:OPEN]

{code}
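A hedged sketch of the proposed usage (the switch names are assumptions, not 
a final design):

{code}
# Show only RATIS pipelines with Factor THREE that are currently OPEN:
bin/ozone scmcli listPipelines --factor=THREE --state=OPEN
{code}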



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13529) Fix default trash policy emptier trigger time correctly

2019-08-01 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898544#comment-16898544
 ] 

He Xiaoqiao commented on HDFS-13529:


Thanks [~jojochuang] for your help and commit.

> Fix default trash policy emptier trigger time correctly
> ---
>
> Key: HDFS-13529
> URL: https://issues.apache.org/jira/browse/HDFS-13529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.0, 2.7.6, 3.2.0, 2.9.2, 2.8.5
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13529-trunk.001.patch
>
>
> The trash emptier is designed to trigger automatically at UTC 00:00; however, 
> in our production cluster it usually triggers a few minutes, or even half an 
> hour, after UTC 00:00.
> The main reason is that the default policy emptier thread sleeps longer than 
> expected, since it does not account for the time the delete operations 
> themselves take; on a large cluster, the trash cleanup may cost dozens of 
> minutes.
> The right fix is to get the current time {{now}} before calculating the 
> {{end}} time.
> {code:java}
>   long now = Time.now();
>   while (true) {
> end = ceiling(now, emptierInterval);
> try { // sleep for interval
>   Thread.sleep(end - now);
> } catch (InterruptedException e) {
>   break;  // exit on interrupt
> }
> try {
>   now = Time.now();
>   .. // delete trash checkpoint
> } catch (Exception e) {
>   LOG.warn("RuntimeException during Trash.Emptier.run(): ", e); 
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14461) RBF: Fix intermittently failing kerberos related unit test

2019-08-01 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898543#comment-16898543
 ] 

He Xiaoqiao commented on HDFS-14461:


Thanks [~eyang] for your kind response and sorry for the late feedback. I will 
check it again and try to fix the failed unit test.

> RBF: Fix intermittently failing kerberos related unit test
> --
>
> Key: HDFS-14461
> URL: https://issues.apache.org/jira/browse/HDFS-14461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14461.001.patch, HDFS-14461.002.patch
>
>
> TestRouterHttpDelegationToken#testGetDelegationToken fails intermittently. It 
> may be due to some race condition before using the keytab that's created for 
> testing.
>  
> {code:java}
>  Failed
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.testGetDelegationToken
>  Failing for the past 1 build (Since 
> [!https://builds.apache.org/static/1e9ab9cc/images/16x16/red.png! 
> #26721|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/] )
>  [Took 89 
> ms.|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/testReport/org.apache.hadoop.hdfs.server.federation.security/TestRouterHttpDelegationToken/testGetDelegationToken/history]
>   
>  Error Message
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED
> h3. Stacktrace
> org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173) 
> at 
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: org.apache.hadoop.security.KerberosAuthException: failure to 
> login: for principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  

[jira] [Commented] (HDFS-13901) INode access time is ignored because of race between open and rename

2019-08-01 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898540#comment-16898540
 ] 

He Xiaoqiao commented on HDFS-13901:


Thanks [~LiJinglun] for your work. I just checked the logic, and it seems the 
root cause is that the {{INode}} is fetched twice, once before holding the 
{{readLock}} and again before the {{writeLock}}, right? Is it possible to fetch 
the last {{INode}} instance only once and share it across the whole 
{{getBlockLocations}} method? Any other concerns? Thanks.
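
The pattern in question looks roughly like this (a simplified sketch with 
placeholder method names, not the actual FSNamesystem code):

{code:java}
// Sketch of the race: the path is resolved to an INode twice, once under
// the read lock and again under the write lock.
readLock();
try {
  // 1st resolution: src -> INode, used to build the block locations.
  blocks = getBlockLocationsInternal(src);
} finally {
  readUnlock();
}
// <-- gap: a concurrent rename of src can land here.
writeLock();
try {
  // 2nd resolution: after a rename, src no longer resolves to the same
  // INode, so the access-time update is silently skipped.
  updateAccessTime(src, now);
} finally {
  writeUnlock();
}
{code}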

> INode access time is ignored because of race between open and rename
> 
>
> Key: HDFS-13901
> URL: https://issues.apache.org/jira/browse/HDFS-13901
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-13901.000.patch, HDFS-13901.001.patch, 
> HDFS-13901.002.patch
>
>
> That's because in getBlockLocations there is a gap between the readUnlock and 
> the re-acquisition of the write lock (to update the access time). If a rename 
> operation occurs in that gap, the access time update will be ignored. We can 
> calculate the new path from the inode and use the new path to update the 
> access time. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14564) Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898539#comment-16898539
 ] 

Hadoop QA commented on HDFS-14564:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
24s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
24s{color} | {color:blue} branch/hadoop-hdfs-project/hadoop-hdfs-native-client 
no findbugs output file (findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
23s{color} | {color:green} root: The patch generated 0 new + 110 unchanged - 1 
fixed = 110 total (was 111) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} hadoop-hdfs-project/hadoop-hdfs-native-client has no 
data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
37s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
59s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 23s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
47s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}214m 28s{color} | 

[jira] [Commented] (HDFS-14669) TestDirectoryScanner#testDirectoryScannerInFederatedCluster fails intermittently in trunk

2019-08-01 Thread qiang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898532#comment-16898532
 ] 

qiang Liu commented on HDFS-14669:
--

[~ayushtkn]

Submitted the v4 patch; it is the same as v2.

One more thing is left here: this test function still has two small problems.
 # the recheck logic is not actually working; it only rechecks the assertions 
but does not rescan to update the compared values (see the sketch below)
 # if an assertion fails, the assertion exception is swallowed and a misleading 
timeout exception is thrown

Should we fix these small problems (in a new Jira or an addendum to 
HDFS-13819), or should we just ignore them? After all, things only get a 
little strange when an assertion fails.
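
To illustrate the first point, a working recheck would rescan inside the retry 
loop, e.g. with {{GenericTestUtils.waitFor}} (a sketch only; 
{{rescanAndCount()}} is a placeholder for the test's actual rescan logic):

{code:java}
// Sketch: refresh the compared values on every retry instead of
// re-checking stale results.
GenericTestUtils.waitFor(() -> {
  long observed = rescanAndCount();  // rescan, then compare fresh values
  return observed == expectedBlocks;
}, 100, 10000);
{code}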

> TestDirectoryScanner#testDirectoryScannerInFederatedCluster fails 
> intermittently in trunk
> -
>
> Key: HDFS-14669
> URL: https://issues.apache.org/jira/browse/HDFS-14669
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.2.0
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: scanner, test
> Attachments: HDFS-14669-trunk-001.patch, HDFS-14669-trunk.002.patch, 
> HDFS-14669-trunk.003.patch, HDFS-14669-trunk.004.patch
>
>
> org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner#testDirectoryScannerInFederatedCluster
>  randomly fails because it writes files with the same name: the intent is to 
> write 2 files, but both files get the same name, which causes a race 
> condition between the datanode deleting a block and the scan action counting 
> blocks.
>  
> Ref :: 
> [https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1207/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testDirectoryScannerInFederatedCluster/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2019-08-01 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898508#comment-16898508
 ] 

Feilong He edited comment on HDFS-13762 at 8/2/19 3:49 AM:
---

Thanks [~jojochuang] for your attention to this Jira. Before closing it, we 
will post a refreshed formal performance report for this feature. Currently, 
half of the test work has been done.


was (Author: philohe):
Thanks [~jojochuang] for your attention to this Jira. Before closing it, we 
will post a formal performance report for this feature. Currently, half of the 
test work has been done.

> Support non-volatile storage class memory(SCM) in HDFS cache directives
> ---
>
> Key: HDFS-13762
> URL: https://issues.apache.org/jira/browse/HDFS-13762
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: caching, datanode
>Reporter: Sammi Chen
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-13762.000.patch, HDFS-13762.001.patch, 
> HDFS-13762.002.patch, HDFS-13762.003.patch, HDFS-13762.004.patch, 
> HDFS-13762.005.patch, HDFS-13762.006.patch, HDFS-13762.007.patch, 
> HDFS-13762.008.patch, SCMCacheDesign-2018-11-08.pdf, 
> SCMCacheDesign-2019-07-12.pdf, SCMCacheDesign-2019-07-16.pdf, 
> SCMCacheDesign-2019-3-26.pdf, SCMCacheTestPlan-2019-3-27.pdf, 
> SCMCacheTestPlan.pdf, SCM_Cache_Perf_Results-v1.pdf
>
>
> Non-volatile storage class memory is a type of memory that can keep its data 
> content after a power failure or across power cycles. A non-volatile storage 
> class memory device usually has access speed close to a memory DIMM while 
> costing less than memory. So today it is usually used as a supplement to 
> memory to hold long-term persistent data, such as data in a cache. 
> Currently in HDFS, we have an OS page cache backed read-only cache and a 
> RAMDISK based lazy-write cache. Non-volatile memory suits both of these 
> functions. This Jira aims to enable storage class memory first in the read 
> cache. Although storage class memory has non-volatile characteristics, to 
> keep the same behavior as the current read-only cache, we don't use its 
> persistence characteristics currently.  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14685) DefaultAuditLogger doesn't print CallerContext

2019-08-01 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898526#comment-16898526
 ] 

xuzq commented on HDFS-14685:
-

Thanks [~jojochuang], I will upload a new patch which contains some test cases.
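
For reference, a minimal sketch of the failing combination from the 
description (the class name is hypothetical, and string literals are used 
instead of the config-key constants):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class AuditLoggerRepro {
  public static Configuration failingConf() {
    Configuration conf = new HdfsConfiguration();
    // Caller context propagation is switched on...
    conf.setBoolean("hadoop.caller.context.enabled", true);
    // ...but "dfs.namenode.audit.loggers" is deliberately left unset: this
    // is the case where the built-in DefaultAuditLogger is chosen, and the
    // bug means the caller context never shows up in audit.log.
    return conf;
  }
}
{code}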

> DefaultAuditLogger doesn't print CallerContext
> --
>
> Key: HDFS-14685
> URL: https://issues.apache.org/jira/browse/HDFS-14685
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14685-trunk-002.patch
>
>
> If we do not set dfs.namenode.audit.loggers (default is null), 
> DefaultAuditLogger will not print the CallerContext into audit.log even if 
> we set hadoop.caller.context.enabled to true.
>  
> This bug also exists in 
> [HDFS-14625|https://issues.apache.org/jira/browse/HDFS-14625]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14456) HAState#prepareToEnterState needn't a lock

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898523#comment-16898523
 ] 

Hadoop QA commented on HDFS-14456:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
55s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 

[jira] [Commented] (HDFS-14318) dn cannot be recognized and must be restarted to recognize the Repaired disk

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898522#comment-16898522
 ] 

Hadoop QA commented on HDFS-14318:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
1s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 154 unchanged - 0 fixed = 155 total (was 154) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
55s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
38s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Possible doublecheck on 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskThread in 
org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread()  At 
DataNode.java:org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread()
  At DataNode.java:[lines 2212-2214] |
|  |  Null pointer dereference of DataNode.errorDisk in 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError()  Dereferenced 
at DataNode.java:in 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError()  Dereferenced 
at DataNode.java:[line 3484] |
| Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
|   | 

[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14693:
--
Status: Patch Available  (was: Open)

> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-14693.001.patch
>
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For 
> example, if a JN's network is much worse than the other JNs', the time taken 
> by the NN to write to this JN will be much greater than for the other JNs. 
> In this case, the IPC Logger thread corresponding to this JN will accumulate 
> many pending edits; when the pending edits exceed the maximum limit (default 
> 10MB), the new edits about to be written to this JN will be silently 
> dropped, resulting in gaps in the editlog segment, which causes this JN and 
> the NN to repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
> Unfortunately, the above error message cannot help us quickly find the root 
> cause; it took us more time to find the cause, so it's better to add a 
> warning log here, like this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14693:
--
Attachment: HDFS-14693.001.patch

> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-14693.001.patch
>
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For 
> example, if a JN's network is much worse than the other JNs', the time taken 
> by the NN to write to this JN will be much greater than for the other JNs. 
> In this case, the IPC Logger thread corresponding to this JN will accumulate 
> many pending edits; when the pending edits exceed the maximum limit (default 
> 10MB), the new edits about to be written to this JN will be silently 
> dropped, resulting in gaps in the editlog segment, which causes this JN and 
> the NN to repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
> Unfortunately, the above error message cannot help us quickly find the root 
> cause; it took us more time to find the cause, so it's better to add a 
> warning log here, like this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14680) StorageInfoDefragmenter should handle exceptions gently

2019-08-01 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898520#comment-16898520
 ] 

He Xiaoqiao commented on HDFS-14680:


Thanks [~zhangchen] for your pings, and sorry for the late response; I was on 
leave for some days.
I have no experience with StorageInfoDefragmenter, but I took a quick look at 
the logic and I agree that it may be `too radical` to terminate the NameNode on 
any 'Throwable'/'RuntimeException'.
cc [~jojochuang], I believe there is only a small probability of hitting a 
RuntimeException, as you said above; do we need to make it more resilient? 
Please correct me if I understand wrong. Thanks.
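
To make the suggestion concrete, a more lenient loop could look like this (a 
sketch only; {{scanAndCompact()}} is a placeholder, not the actual 
StorageInfoDefragmenter code):

{code:java}
// Sketch: log unexpected exceptions and keep running, instead of
// terminating the NameNode on any Throwable.
while (namesystem.isRunning()) {
  try {
    Thread.sleep(sleepIntervalMs);  // pace the background work
    scanAndCompact();               // placeholder compaction step
  } catch (InterruptedException ie) {
    Thread.currentThread().interrupt();
    break;                          // exit cleanly on interrupt
  } catch (Throwable t) {
    LOG.warn("StorageInfoDefragmenter caught an exception, continuing", t);
  }
}
{code}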

> StorageInfoDefragmenter should handle exceptions gently
> ---
>
> Key: HDFS-14680
> URL: https://issues.apache.org/jira/browse/HDFS-14680
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chen Zhang
>Priority: Major
>
> StorageInfoDefragmenter is responsible for FoldedTreeSet compaction, but it 
> terminates the NameNode on any exception; is that too radical?
> I mean, even critical threads like HeartbeatManager don't terminate the 
> NameNode when they encounter exceptions, so StorageInfoDefragmenter should 
> not do that either.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14693:
--
Status: Open  (was: Patch Available)

> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For 
> example, if a JN's network is much worse than the other JNs', the time taken 
> by the NN to write to this JN will be much greater than for the other JNs. 
> In this case, the IPC Logger thread corresponding to this JN will accumulate 
> many pending edits; when the pending edits exceed the maximum limit (default 
> 10MB), the new edits about to be written to this JN will be silently 
> dropped, resulting in gaps in the editlog segment, which causes this JN and 
> the NN to repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
> Unfortunately, the above error message cannot help us quickly find the root 
> cause; it took us more time to find the cause, so it's better to add a 
> warning log here, like this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14693:
--
Attachment: (was: HDFS-16493.001.patch)

> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For 
> example, if a JN's network is much worse than the other JNs', the time taken 
> by the NN to write to this JN will be much greater than for the other JNs. 
> In this case, the IPC Logger thread corresponding to this JN will accumulate 
> many pending edits; when the pending edits exceed the maximum limit (default 
> 10MB), the new edits about to be written to this JN will be silently 
> dropped, resulting in gaps in the editlog segment, which causes this JN and 
> the NN to repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
> Unfortunately, the above error message cannot help us quickly find the root 
> cause; it took us more time to find the cause, so it's better to add a 
> warning log here, like this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14693:
--
Description: 
In a production environment, there may be some differences in each JouranlNode 
(e.g. network condition, disk condition, and so on). For example, If a JN's 
network is much worse than other JNs, then the time taken by the NN to write 
this JN will be much greater than other JNs, in this case, it will cause the 
IPC Logger thread corresponding to this JN to have many pending edits, when the 
pending edits exceeds the maximum limit (default 10MB), the new edits about to 
write to this JN will be silently dropped, and will result gaps in the editlog 
segment, which causing this JN and NN repeatedly reporting the following 
errors: 
{code:java}
org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't write 
txid 1904164873 expecting nextTxId=1904164871{code}
Unfortunately, the above error message cannot help us quickly find the root 
cause; it took us more time to find the cause, so it's better to add a warning 
log here, like this: 
{code:java}
2019-08-02 04:55:05,879 WARN 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
size:10224, will silently drop 174 bytes of edits!{code}
 This is just a very small improvement.

  was:
In a production environment, there may be some differences between the 
JournalNodes (e.g. network condition, disk condition, and so on). For example, 
if a JN's network is much worse than the other JNs', the time taken by the NN 
to write to this JN will be much greater than for the other JNs. In this case, 
the IPC Logger thread corresponding to this JN will accumulate many pending 
edits; when the pending edits exceed the maximum limit (default 10MB), the new 
edits about to be written to this JN will be silently dropped, resulting in 
gaps in the editlog segment, which causes this JN and the NN to repeatedly 
report the following errors: 
{code:java}
org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't write 
txid 1904164873 expecting nextTxId=1904164871{code}
 Unfortunately, the above error message cannot help us quickly find the root 
cause, so it's better to add a warning log to tell us the real reason, like 
this: 
{code:java}
2019-08-02 04:55:05,879 WARN 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
size:10224, will silently drop 174 bytes of edits!{code}
 This is just a very small improvement.


> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-16493.001.patch
>
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For 
> example, if a JN's network is much worse than the other JNs', the time taken 
> by the NN to write to this JN will be much greater than for the other JNs. 
> In this case, the IPC Logger thread corresponding to this JN will accumulate 
> many pending edits; when the pending edits exceed the maximum limit (default 
> 10MB), the new edits about to be written to this JN will be silently 
> dropped, resulting in gaps in the editlog segment, which causes this JN and 
> the NN to repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
> Unfortunately, the above error message cannot help us quickly find the root 
> cause; it took us more time to find the cause, so it's better to add a 
> warning log here, like this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14693:
--
Component/s: namenode

> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-16493.001.patch
>
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For 
> example, if a JN's network is much worse than the other JNs', the time taken 
> by the NN to write to this JN will be much greater than for the other JNs. 
> In this case, the IPC Logger thread corresponding to this JN will accumulate 
> many pending edits; when the pending edits exceed the maximum limit (default 
> 10MB), the new edits about to be written to this JN will be silently 
> dropped, resulting in gaps in the editlog segment, which causes this JN and 
> the NN to repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
>  Unfortunately, the above error message cannot help us quickly find the root 
> cause, so it's better to add a warning log to tell us the real reason, like 
> this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14693:
--
Affects Version/s: 3.1.2

> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-16493.001.patch
>
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For 
> example, if a JN's network is much worse than the other JNs', the time taken 
> by the NN to write to this JN will be much greater than for the other JNs. 
> In this case, the IPC Logger thread corresponding to this JN will accumulate 
> many pending edits; when the pending edits exceed the maximum limit (default 
> 10MB), the new edits about to be written to this JN will be silently 
> dropped, resulting in gaps in the editlog segment, which causes this JN and 
> the NN to repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
>  Unfortunately, the above error message cannot help us quickly find the root 
> cause, so it's better to add a warning log to tell us the real reason, like 
> this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898516#comment-16898516
 ] 

Xudong Cao commented on HDFS-14693:
---

This just adds a log line, so it does not need a unit test.
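
For context, the check the warning would sit next to looks roughly like this 
(a sketch with placeholder field names; the real logic lives in the IPC 
logger channel):

{code:java}
// Sketch: warn before edits are silently dropped because the pending
// queue for one JournalNode would exceed the configured limit
// (default 10MB).
if (queuedEditsSize + edits.length > queueSizeLimitBytes) {
  LOG.warn("Pending edits to " + jnAddress
      + " is going to exceed limit size:" + queueSizeLimitBytes
      + ", current queued edits size:" + queuedEditsSize
      + ", will silently drop " + edits.length + " bytes of edits!");
}
{code}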

> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-16493.001.patch
>
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For 
> example, if a JN's network is much worse than the other JNs', the time taken 
> by the NN to write to this JN will be much greater than for the other JNs. 
> In this case, the IPC Logger thread corresponding to this JN will accumulate 
> many pending edits; when the pending edits exceed the maximum limit (default 
> 10MB), the new edits about to be written to this JN will be silently 
> dropped, resulting in gaps in the editlog segment, which causes this JN and 
> the NN to repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
>  Unfortunately, the above error message cannot help us quickly find the root 
> cause, so it's better to add a warning log to tell us the real reason, like 
> this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14693:
--
Assignee: Xudong Cao
  Status: Patch Available  (was: Open)

> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-16493.001.patch
>
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For 
> example, if a JN's network is much worse than the other JNs', the time taken 
> by the NN to write to this JN will be much greater than for the other JNs. 
> In this case, the IPC Logger thread corresponding to this JN will accumulate 
> many pending edits; when the pending edits exceed the maximum limit (default 
> 10MB), the new edits about to be written to this JN will be silently 
> dropped, resulting in gaps in the editlog segment, which causes this JN and 
> the NN to repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
>  Unfortunately, the above error message cannot help us quickly find the root 
> cause, so it's better to add a warning log to tell us the real reason, like 
> this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14693:
--
Attachment: HDFS-16493.001.patch

> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xudong Cao
>Priority: Minor
> Attachments: HDFS-16493.001.patch
>
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For 
> example, if a JN's network is much worse than the other JNs', the time taken 
> by the NN to write to this JN will be much greater than for the other JNs. 
> In this case, the IPC Logger thread corresponding to this JN will accumulate 
> many pending edits; when the pending edits exceed the maximum limit (default 
> 10MB), the new edits about to be written to this JN will be silently 
> dropped, resulting in gaps in the editlog segment, which causes this JN and 
> the NN to repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
>  Unfortunately, the above error message cannot help us quickly find the root 
> cause, so it's better to add a warning log to tell us the real reason, like 
> this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14669) TestDirectoryScanner#testDirectoryScannerInFederatedCluster fails intermittently in trunk

2019-08-01 Thread qiang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiang Liu updated HDFS-14669:
-
Attachment: HDFS-14669-trunk.004.patch

> TestDirectoryScanner#testDirectoryScannerInFederatedCluster fails 
> intermittently in trunk
> -
>
> Key: HDFS-14669
> URL: https://issues.apache.org/jira/browse/HDFS-14669
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.2.0
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: scanner, test
> Attachments: HDFS-14669-trunk-001.patch, HDFS-14669-trunk.002.patch, 
> HDFS-14669-trunk.003.patch, HDFS-14669-trunk.004.patch
>
>
> org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner#testDirectoryScannerInFederatedCluster
>  randomly fails because it writes files with the same name: the intent is to 
> write 2 files, but both files get the same name, which causes a race 
> condition between the datanode deleting a block and the scan action counting 
> blocks.
>  
> Ref :: 
> [https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1207/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testDirectoryScannerInFederatedCluster/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14693:
--
Description: 
In a production environment, there may be some differences in each JouranlNode 
(e.g. network condition, disk condition, and so on). For example, If a JN's 
network is much worse than other JNs, then the time taken by the NN to write 
this JN will be much greater than other JNs, in this case, it will cause the 
IPC Logger thread corresponding to this JN to have many pending edits, when the 
pending edits exceeds the maximum limit (default 10MB), the new edits about to 
write to this JN will be silently dropped, and will result gaps in the editlog 
segment, which causing this JN and NN repeatedly reporting the following 
errors: 
{code:java}
org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't write 
txid 1904164873 expecting nextTxId=1904164871{code}
 Unfortunately, the above error message cannot help us quickly find the root 
cause, so it's better to add a warning log that tells us the real reason, like 
this: 
{code:java}
2019-08-02 04:55:05,879 WARN 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
size:10224, will silently drop 174 bytes of edits!{code}
 This is just a very small improvement.

  was:
In a production environment, each JournalNode may differ (e.g. in network 
condition, disk condition, and so on). For example, if a JN's network is much 
worse than the other JNs', the time taken by the NN to write to this JN will be 
much greater than for the other JNs. In this case, the IPC Logger thread 
corresponding to this JN will accumulate many pending edits; once the pending 
edits exceed the maximum limit (default 10 MB), new edits bound for this JN 
will be silently dropped, leaving gaps in the editlog segment and causing this 
JN and the NN to repeatedly report the following errors:
 
{code:java}
org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't write 
txid 1904164873 expecting nextTxId=1904164871{code}
 

Unfortunately, the above error message cannot help us quickly find the root 
cause, so it's better to add a warning log that tells us the real reason, like 
this:

 
{code:java}
2019-08-02 04:55:05,879 WARN 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
size:10224, will silently drop 174 bytes of edits!{code}
 

This is just a very small improvement.


> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xudong Cao
>Priority: Minor
>
> In a production environment, each JournalNode may differ (e.g. in network 
> condition, disk condition, and so on). For example, if a JN's network is much 
> worse than the other JNs', the time taken by the NN to write to this JN will 
> be much greater than for the other JNs. In this case, the IPC Logger thread 
> corresponding to this JN will accumulate many pending edits; once the pending 
> edits exceed the maximum limit (default 10 MB), new edits bound for this JN 
> will be silently dropped, leaving gaps in the editlog segment and causing 
> this JN and the NN to repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
>  Unfortunately, the above error message cannot help us quickly find the root 
> cause, so it's better to add a warning log that tells us the real reason, like 
> this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14693:
--
Description: 
In a production environment, each JournalNode may differ (e.g. in network 
condition, disk condition, and so on). For example, if a JN's network is much 
worse than the other JNs', the time taken by the NN to write to this JN will be 
much greater than for the other JNs. In this case, the IPC Logger thread 
corresponding to this JN will accumulate many pending edits; once the pending 
edits exceed the maximum limit (default 10 MB), new edits bound for this JN 
will be silently dropped, leaving gaps in the editlog segment and causing this 
JN and the NN to repeatedly report the following errors:
 
{code:java}
org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't write 
txid 1904164873 expecting nextTxId=1904164871{code}
 

Unfortunately, the above error message cannot help us quickly find the root 
cause, so it's better to add a warning log that tells us the real reason, like 
this:

 
{code:java}
2019-08-02 04:55:05,879 WARN 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
size:10224, will silently drop 174 bytes of edits!{code}
 

This is just a very small improvement.

> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xudong Cao
>Priority: Minor
>
> In a production environment, each JournalNode may differ (e.g. in network 
> condition, disk condition, and so on). For example, if a JN's network is much 
> worse than the other JNs', the time taken by the NN to write to this JN will 
> be much greater than for the other JNs. In this case, the IPC Logger thread 
> corresponding to this JN will accumulate many pending edits; once the pending 
> edits exceed the maximum limit (default 10 MB), new edits bound for this JN 
> will be silently dropped, leaving gaps in the editlog segment and causing 
> this JN and the NN to repeatedly report the following errors:
>  
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
>  
> Unfortunately, the above error message cannot help us quickly find the root 
> cause, so it's better to add a warning log that tells us the real reason, like 
> this:
>  
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  
> This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1882) TestReplicationManager failed with NPE in ReplicationManager.java

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1882?focusedWorklogId=287454=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287454
 ]

ASF GitHub Bot logged work on HDDS-1882:


Author: ASF GitHub Bot
Created on: 02/Aug/19 03:10
Start Date: 02/Aug/19 03:10
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on issue #1197: HDDS-1882. 
TestReplicationManager failed with NPE in ReplicationManager
URL: https://github.com/apache/hadoop/pull/1197#issuecomment-517530783
 
 
   Thanks @nandakumar131 for reviewing the patch. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287454)
Time Spent: 50m  (was: 40m)

> TestReplicationManager failed with NPE in ReplicationManager.java 
> --
>
> Key: HDDS-1882
> URL: https://issues.apache.org/jira/browse/HDDS-1882
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2019-08-01 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898508#comment-16898508
 ] 

Feilong He commented on HDFS-13762:
---

Thanks [~jojochuang] for your attention to this Jira. Before closing it, we 
will post a formal performance report for this feature. Currently, about half 
of the test work has been done.

> Support non-volatile storage class memory(SCM) in HDFS cache directives
> ---
>
> Key: HDFS-13762
> URL: https://issues.apache.org/jira/browse/HDFS-13762
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: caching, datanode
>Reporter: Sammi Chen
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-13762.000.patch, HDFS-13762.001.patch, 
> HDFS-13762.002.patch, HDFS-13762.003.patch, HDFS-13762.004.patch, 
> HDFS-13762.005.patch, HDFS-13762.006.patch, HDFS-13762.007.patch, 
> HDFS-13762.008.patch, SCMCacheDesign-2018-11-08.pdf, 
> SCMCacheDesign-2019-07-12.pdf, SCMCacheDesign-2019-07-16.pdf, 
> SCMCacheDesign-2019-3-26.pdf, SCMCacheTestPlan-2019-3-27.pdf, 
> SCMCacheTestPlan.pdf, SCM_Cache_Perf_Results-v1.pdf
>
>
> Non-volatile storage class memory is a type of memory that keeps its data 
> content across power failures and power cycles. A non-volatile storage class 
> memory device usually has access speed close to that of a memory DIMM while 
> costing less than memory, so today it is typically used as a supplement to 
> memory to hold long-term persistent data, such as cached data. 
> Currently in HDFS, we have an OS page cache backed read-only cache and a 
> RAMDISK based lazy-write cache. Non-volatile memory suits both of these 
> functions. This Jira aims to enable storage class memory first in the read 
> cache. Although storage class memory is non-volatile, to keep the same 
> behavior as the current read-only cache, we don't use its persistence 
> characteristics for now.  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14685) DefaultAuditLogger doesn't print CallerContext

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898500#comment-16898500
 ] 

Wei-Chiu Chuang commented on HDFS-14685:


Do you have a test? A test would make this easier to verify and understand.

> DefaultAuditLogger doesn't print CallerContext
> --
>
> Key: HDFS-14685
> URL: https://issues.apache.org/jira/browse/HDFS-14685
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14685-trunk-002.patch
>
>
> If we do not set dfs.namenode.audit.loggers (the default is null), 
> DefaultAuditLogger will not print CallerContext into audit.log even if we set 
> hadoop.caller.context.enabled to true.
>  
> This bug also exists in 
> [HDFS-14625|https://issues.apache.org/jira/browse/HDFS-14625]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13901) INode access time is ignored because of race between open and rename

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898494#comment-16898494
 ] 

Wei-Chiu Chuang commented on HDFS-13901:


So, this is an issue when access time is enabled 
(dfs.namenode.accesstime.precision > 0, where the default is 1 hour)

[~csun] [~xkrogen] [~kihwal] [~daryn] you have worked in this space before. 
Mind reviewing this?

The patch looks logically correct, but this is a hot code path, so I wonder 
whether it could incur an additional performance hit. 

Specifically, would it be more efficient to do the check in reverse order
{code:java}
if (!isInSafeMode() && updateAccessTime) {
  if (!inode.isDeleted()) {{code}
instead? 

> INode access time is ignored because of race between open and rename
> 
>
> Key: HDFS-13901
> URL: https://issues.apache.org/jira/browse/HDFS-13901
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-13901.000.patch, HDFS-13901.001.patch, 
> HDFS-13901.002.patch
>
>
> That's because in getBlockLocations there is a gap between readUnlock and 
> re-acquiring the write lock (to update the access time). If a rename 
> operation occurs in that gap, the access-time update is ignored. We can 
> calculate the new path from the inode and use it to update the access time. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1798) Propagate failure in writeStateMachineData to Ratis

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1798?focusedWorklogId=287446=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287446
 ]

ASF GitHub Bot logged work on HDDS-1798:


Author: ASF GitHub Bot
Created on: 02/Aug/19 02:29
Start Date: 02/Aug/19 02:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1113: HDDS-1798. 
Propagate failure in writeStateMachineData to Ratis. Contributed by Supratim 
Deka
URL: https://github.com/apache/hadoop/pull/1113#issuecomment-517523882
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 106 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 628 | trunk passed |
   | +1 | compile | 382 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 940 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 191 | trunk passed |
   | 0 | spotbugs | 482 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 726 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | +1 | mvninstall | 634 | the patch passed |
   | +1 | compile | 427 | the patch passed |
   | +1 | javac | 427 | the patch passed |
   | +1 | checkstyle | 90 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 804 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | the patch passed |
   | +1 | findbugs | 667 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 359 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2101 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 8609 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1113/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1113 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 18c1db4e2dff 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d086d05 |
   | Default Java | 1.8.0_212 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1113/3/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1113/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1113/3/testReport/ |
   | Max. process+thread count | 5328 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1113/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287446)

[jira] [Work logged] (HDDS-1798) Propagate failure in writeStateMachineData to Ratis

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1798?focusedWorklogId=287447=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287447
 ]

ASF GitHub Bot logged work on HDDS-1798:


Author: ASF GitHub Bot
Created on: 02/Aug/19 02:29
Start Date: 02/Aug/19 02:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1113: 
HDDS-1798. Propagate failure in writeStateMachineData to Ratis. Contributed by 
Supratim Deka
URL: https://github.com/apache/hadoop/pull/1113#discussion_r309962091
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachine.java
 ##
 @@ -139,13 +139,8 @@ public void testContainerStateMachineFailures() throws 
Exception {
 
.getContainer(omKeyLocationInfo.getContainerID()).getContainerData()
 .getContainerPath()));
 
-try {
-  key.close();
-  Assert.fail();
-} catch (IOException ioe) {
-  Assert.assertTrue(ioe.getMessage().contains(
-  "Requested operation not allowed as ContainerState is UNHEALTHY"));
-}
+key.close();
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287447)
Time Spent: 1h  (was: 50m)

> Propagate failure in writeStateMachineData to Ratis
> ---
>
> Key: HDDS-1798
> URL: https://issues.apache.org/jira/browse/HDDS-1798
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, 
> writeStateMachineData() returns a future to Ratis. This future does not track 
> any errors or failures encountered as part of the operation - WriteChunk / 
> handleWriteChunk(). The error is propagated back to the client in the form of 
> an error code embedded inside writeChunkResponseProto. But the error goes 
> undetected and unhandled in the Ratis server. The future handed back to Ratis 
> is always completed with success.
> The goal is to detect any errors in writeStateMachineData in Ratis and treat 
> them as a failure of the Ratis log, handling for which is already implemented 
> in HDDS-1603. 
>  
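A minimal sketch of the intended behavior, with illustrative names (the real 
writeStateMachineData plumbing in the container state machine differs):

{code:java}
import java.io.IOException;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch: complete the future handed back to Ratis
// exceptionally when the chunk write fails, instead of always
// completing it with success.
class WriteStateMachineDataSketch {
  CompletableFuture<Void> writeStateMachineData(byte[] chunk) {
    CompletableFuture<Void> ratisFuture = new CompletableFuture<>();
    try {
      handleWriteChunk(chunk);
      ratisFuture.complete(null);
    } catch (IOException e) {
      // Propagating the failure lets Ratis treat it as a log failure
      // (the handling added in HDDS-1603).
      ratisFuture.completeExceptionally(e);
    }
    return ratisFuture;
  }

  private void handleWriteChunk(byte[] chunk) throws IOException {
    // stand-in for the actual chunk write; throws on failure
  }
}
{code}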



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1782) Add an option to MiniOzoneChaosCluster to read files multiple times.

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1782?focusedWorklogId=287444=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287444
 ]

ASF GitHub Bot logged work on HDDS-1782:


Author: ASF GitHub Bot
Created on: 02/Aug/19 02:24
Start Date: 02/Aug/19 02:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1076: HDDS-1782. Add 
an option to MiniOzoneChaosCluster to read files multiple times. Contributed by 
Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1076#issuecomment-517522989
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 52 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 715 | trunk passed |
   | +1 | compile | 384 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 727 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | trunk passed |
   | 0 | spotbugs | 419 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 621 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 545 | the patch passed |
   | +1 | compile | 370 | the patch passed |
   | +1 | javac | 370 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 646 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 642 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 307 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1802 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 7591 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1076 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
compile javac javadoc mvninstall shadedclient findbugs checkstyle |
   | uname | Linux cf454aa352c7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 17e8cf5 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/7/testReport/ |
   | Max. process+thread count | 4884 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/7/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287444)
Time Spent: 2h 40m  (was: 2.5h)

> Add an option to MiniOzoneChaosCluster to read files multiple times.
> 
>
> Key: HDDS-1782
> URL: 

[jira] [Created] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-01 Thread Xudong Cao (JIRA)
Xudong Cao created HDFS-14693:
-

 Summary: NameNode should log a warning when EditLog IPC logger's 
pending size exceeds limit.
 Key: HDFS-14693
 URL: https://issues.apache.org/jira/browse/HDFS-14693
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Xudong Cao






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14260) Replace synchronized method in BlockReceiver with atomic value

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898491#comment-16898491
 ] 

Hadoop QA commented on HDFS-14260:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} https://github.com/apache/hadoop/pull/483 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/483 |
| JIRA Issue | HDFS-14260 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-483/3/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Replace synchronized method in BlockReceiver with atomic value
> --
>
> Key: HDFS-14260
> URL: https://issues.apache.org/jira/browse/HDFS-14260
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14260.1.patch, HDFS-14260.2.patch
>
>
> This synchronized method is protecting {{lastSentTime}}, which is a plain 
> long. We can use an AtomicLong and remove this synchronization.
> {code}
>   synchronized boolean packetSentInTime() {
> long diff = Time.monotonicNow() - lastSentTime;
> if (diff > maxSendIdleTime) {
>   LOG.info("A packet was last sent " + diff + " milliseconds ago.");
>   return false;
> }
> return true;
>   }
> {code}
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java#L392-L399
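A minimal sketch of the lock-free variant, keeping the original millisecond 
semantics (illustrative, not the actual patch):

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import org.apache.hadoop.util.Time;

// Hypothetical sketch: lastSentTime becomes an AtomicLong, so
// packetSentInTime() no longer needs the object monitor.
class PacketTimerSketch {
  private final AtomicLong lastSentTime = new AtomicLong(Time.monotonicNow());
  private final long maxSendIdleTime; // milliseconds

  PacketTimerSketch(long maxSendIdleTime) {
    this.maxSendIdleTime = maxSendIdleTime;
  }

  void markPacketSent() {
    lastSentTime.set(Time.monotonicNow());
  }

  boolean packetSentInTime() {
    long diff = Time.monotonicNow() - lastSentTime.get();
    return diff <= maxSendIdleTime;
  }
}
{code}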



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13131) Modifying testcase testEnableAndDisableErasureCodingPolicy

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13131:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks [~candychencan]

> Modifying testcase testEnableAndDisableErasureCodingPolicy
> --
>
> Key: HDFS-13131
> URL: https://issues.apache.org/jira/browse/HDFS-13131
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13131.patch
>
>
> In testcase testEnableAndDisableErasureCodingPolicy in 
> TestDistributedFileSystem.java, when enabling or disabling an 
> ErasureCodingPolicy, we should query enabledPoliciesByName rather than 
> policiesByName to check whether the policy has been set up successfully.
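A minimal sketch of the assertion change, assuming these accessors exist with 
these names (they may differ in the actual test):

{code:java}
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;

import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager;

// Hypothetical sketch: after enabling/disabling, assert against the set of
// *enabled* policies rather than the set of all known policies.
class EnabledPolicyAssertionSketch {
  static void checkToggle(DistributedFileSystem fs,
      ErasureCodingPolicyManager manager, String policyName) throws Exception {
    fs.enableErasureCodingPolicy(policyName);
    assertNotNull("policy should appear among enabled policies",
        manager.getEnabledPolicyByName(policyName));

    fs.disableErasureCodingPolicy(policyName);
    assertNull("policy should no longer be among enabled policies",
        manager.getEnabledPolicyByName(policyName));
  }
}
{code}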



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13131) Modifying testcase testEnableAndDisableErasureCodingPolicy

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898485#comment-16898485
 ] 

Wei-Chiu Chuang commented on HDFS-13131:


+1

> Modifying testcase testEnableAndDisableErasureCodingPolicy
> --
>
> Key: HDFS-13131
> URL: https://issues.apache.org/jira/browse/HDFS-13131
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Attachments: HDFS-13131.patch
>
>
> In testcase testEnableAndDisableErasureCodingPolicy in 
> TestDistributedFileSystem.java, when enabling or disabling an 
> ErasureCodingPolicy, we should query enabledPoliciesByName rather than 
> policiesByName to check whether the policy has been set up successfully.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898480#comment-16898480
 ] 

Hadoop QA commented on HDFS-14295:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} https://github.com/apache/hadoop/pull/497 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/497 |
| JIRA Issue | HDFS-14295 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-497/5/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Add Threadpool for DataTransfers
> 
>
> Key: HDFS-14295
> URL: https://issues.apache.org/jira/browse/HDFS-14295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14295.1.patch, HDFS-14295.10.patch, 
> HDFS-14295.2.patch, HDFS-14295.3.patch, HDFS-14295.4.patch, 
> HDFS-14295.5.patch, HDFS-14295.6.patch, HDFS-14295.7.patch, 
> HDFS-14295.8.patch, HDFS-14295.9.patch
>
>
> When a DataNode transfers a block, it spins up a new thread for each 
> transfer.  
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339]
>  and 
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022].
>    Instead, submit the transfers to a {{CachedThreadPool}} so that when a 
> thread completes a transfer, it can be re-used for another transfer. This 
> should save the resources spent on creating and spinning up transfer threads.
> One thing I'll point out that's a bit off, which I address in this patch, ...
> There are two places in the code where a {{DataTransfer}} thread is started. 
> In [one 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339-L2341],
>  it's started in a default thread group. In [another 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022],
>  it's started in the 
> [dataXceiverServer|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L1164]
>  thread group.
> I do not think it's correct to include any of these threads in the 
> {{dataXceiverServer}} thread group. Anything submitted to the 
> {{dataXceiverServer}} should probably be tied to the 
> {{dfs.datanode.max.transfer.threads}} configuration, and neither of these 
> methods is. Instead, they should be submitted to the same thread pool with 
> its own thread group (probably the default thread group, unless someone 
> suggests otherwise), which is what I have included in this patch.
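A minimal sketch of the cached-pool approach described above, with 
illustrative names (not the actual patch):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: a cached pool re-uses idle threads for subsequent
// transfers (and reaps them after 60 seconds of idleness) instead of
// constructing a new Thread per DataTransfer.
class TransferPoolSketch {
  private final ExecutorService transferPool = Executors.newCachedThreadPool();

  void startTransfer(Runnable dataTransfer) {
    transferPool.execute(dataTransfer);
  }

  void shutdown() {
    transferPool.shutdown();
  }
}
{code}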



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1836) Change the default value of ratis leader election min timeout to a lower value

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1836?focusedWorklogId=287440=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287440
 ]

ASF GitHub Bot logged work on HDDS-1836:


Author: ASF GitHub Bot
Created on: 02/Aug/19 01:56
Start Date: 02/Aug/19 01:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1133: HDDS-1836. 
Change the default value of ratis leader election min timeout to a lower value
URL: https://github.com/apache/hadoop/pull/1133#issuecomment-517517917
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 84 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 638 | trunk passed |
   | +1 | compile | 377 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 953 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | trunk passed |
   | 0 | spotbugs | 508 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 730 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 606 | the patch passed |
   | +1 | compile | 426 | the patch passed |
   | +1 | javac | 426 | the patch passed |
   | +1 | checkstyle | 90 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 1065 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 187 | the patch passed |
   | +1 | findbugs | 726 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 372 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2091 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8824 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1133 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux a3a07e134eb0 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d086d05 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/3/testReport/ |
   | Max. process+thread count | 4279 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287440)
Time Spent: 50m  (was: 40m)

> Change the default value of ratis leader election min timeout to a lower value
> --
>
> Key: HDDS-1836
> URL: 

[jira] [Commented] (HDFS-14462) WebHDFS throws "Error writing request body to server" instead of DSQuotaExceededException

2019-08-01 Thread Simbarashe Dzinamarira (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898471#comment-16898471
 ] 

Simbarashe Dzinamarira commented on HDFS-14462:
---

Uploaded v4 patch for this issue.

It fixes the checkstyle failures.

> WebHDFS throws "Error writing request body to server" instead of 
> DSQuotaExceededException
> -
>
> Key: HDFS-14462
> URL: https://issues.apache.org/jira/browse/HDFS-14462
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.7, 3.1.2
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
> Attachments: HDFS-14462.001.patch, HDFS-14462.002.patch, 
> HDFS-14462.003.patch, HDFS-14462.004.patch
>
>
> We noticed recently in our environment that, when writing data to HDFS via 
> WebHDFS, a quota exception is returned to the client as:
> {code}
> java.io.IOException: Error writing request body to server
> at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3536)
>  ~[?:1.8.0_172]
> at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3519)
>  ~[?:1.8.0_172]
> at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
> ~[?:1.8.0_172]
> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
> ~[?:1.8.0_172]
> at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) 
> ~[?:1.8.0_172]
> at java.io.DataOutputStream.flush(DataOutputStream.java:123) 
> ~[?:1.8.0_172]
> {code}
> It is entirely opaque to the user that this exception was caused by exceeding 
> their quota. Yet in the DataNode logs:
> {code}
> 2019-04-24 02:13:09,639 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer 
> Exception
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
> of /foo/path/here is exceeded: quota =  B = X TB but diskspace 
> consumed =  B = X TB
> at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
> at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
> {code}
> This was on a 2.7.x cluster, but I verified that the same logic exists on 
> trunk. I believe we need to fix some of the logic within the 
> {{ExceptionHandler}} to add special handling for the quota exception.
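A minimal sketch of such special handling, with an illustrative mapping (not 
the actual ExceptionHandler change):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;

// Hypothetical sketch: surface a quota violation as a distinct,
// user-visible error instead of a generic "Error writing request body".
class QuotaAwareMappingSketch {
  static String toUserMessage(IOException e) {
    if (e instanceof DSQuotaExceededException) {
      return "Disk space quota exceeded: " + e.getMessage();
    }
    return "I/O error: " + e.getMessage();
  }
}
{code}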



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14462) WebHDFS throws "Error writing request body to server" instead of DSQuotaExceededException

2019-08-01 Thread Simbarashe Dzinamarira (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simbarashe Dzinamarira updated HDFS-14462:
--
Attachment: HDFS-14462.004.patch

> WebHDFS throws "Error writing request body to server" instead of 
> DSQuotaExceededException
> -
>
> Key: HDFS-14462
> URL: https://issues.apache.org/jira/browse/HDFS-14462
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.7, 3.1.2
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
> Attachments: HDFS-14462.001.patch, HDFS-14462.002.patch, 
> HDFS-14462.003.patch, HDFS-14462.004.patch
>
>
> We noticed recently in our environment that, when writing data to HDFS via 
> WebHDFS, a quota exception is returned to the client as:
> {code}
> java.io.IOException: Error writing request body to server
> at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3536)
>  ~[?:1.8.0_172]
> at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3519)
>  ~[?:1.8.0_172]
> at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
> ~[?:1.8.0_172]
> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
> ~[?:1.8.0_172]
> at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) 
> ~[?:1.8.0_172]
> at java.io.DataOutputStream.flush(DataOutputStream.java:123) 
> ~[?:1.8.0_172]
> {code}
> It is entirely opaque to the user that this exception was caused by exceeding 
> their quota. Yet in the DataNode logs:
> {code}
> 2019-04-24 02:13:09,639 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer 
> Exception
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
> of /foo/path/here is exceeded: quota =  B = X TB but diskspace 
> consumed =  B = X TB
> at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
> at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
> {code}
> This was on a 2.7.x cluster, but I verified that the same logic exists on 
> trunk. I believe we need to fix some of the logic within the 
> {{ExceptionHandler}} to add special handling for the quota exception.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=287437=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287437
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 02/Aug/19 01:42
Start Date: 02/Aug/19 01:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1146: HDDS-1366. Add 
ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#issuecomment-517515593
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 89 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 627 | trunk passed |
   | +1 | compile | 372 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 955 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 189 | trunk passed |
   | 0 | spotbugs | 497 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 722 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 35 | Maven dependency ordering for patch |
   | +1 | mvninstall | 592 | the patch passed |
   | +1 | compile | 396 | the patch passed |
   | +1 | javac | 396 | the patch passed |
   | -0 | checkstyle | 38 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 750 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 94 | hadoop-ozone generated 7 new + 13 unchanged - 0 fixed 
= 20 total (was 13) |
   | +1 | findbugs | 764 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 349 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2415 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 8789 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1146/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1146 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c60914aaf75c 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d086d05 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1146/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1146/2/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1146/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1146/2/testReport/ |
   | Max. process+thread count | 5321 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon-codegen hadoop-ozone/ozone-recon U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1146/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above 

[jira] [Work logged] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?focusedWorklogId=287436=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287436
 ]

ASF GitHub Bot logged work on HDDS-1659:


Author: ASF GitHub Bot
Created on: 02/Aug/19 01:38
Start Date: 02/Aug/19 01:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #950: HDDS-1659. Define 
the process to add proposal/design docs to the Ozone subproject
URL: https://github.com/apache/hadoop/pull/950#issuecomment-517514817
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 629 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1409 | branch has no errors when building and testing 
our client artifacts. |
   | -0 | patch | 1469 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 586 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 676 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 2903 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-950/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/950 |
   | Optional Tests | dupname asflicense mvnsite |
   | uname | Linux 28995c9521a1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e20b195 |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-950/4/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287436)
Time Spent: 3h  (was: 2h 50m)

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> We think that it would be more effective to collect all the design docs in 
> one place and make it easier to review them by the community.
> We propose to follow an approach where the proposals are committed to the 
> hadoop-hdds/docs project and the review can be the same as a review of a PR



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1619) Support volume acl operations for OM HA.

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1619?focusedWorklogId=287427=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287427
 ]

ASF GitHub Bot logged work on HDDS-1619:


Author: ASF GitHub Bot
Created on: 02/Aug/19 01:30
Start Date: 02/Aug/19 01:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1147: HDDS-1619. 
Support volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-517513551
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 88 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 583 | trunk passed |
   | +1 | compile | 397 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 970 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 191 | trunk passed |
   | 0 | spotbugs | 475 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 687 | trunk passed |
   | -0 | patch | 529 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 39 | hadoop-hdds in the patch failed. |
   | +1 | compile | 395 | the patch passed |
   | +1 | javac | 395 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 746 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   | +1 | findbugs | 678 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 356 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2263 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 8442 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1147 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 817dda8a59f0 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b94eba9 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/13/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/13/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/13/testReport/ |
   | Max. process+thread count | 5348 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/13/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287427)
Time Spent: 7h  (was: 6h 50m)

> Support volume acl operations for OM HA.
> 
>
> Key: HDDS-1619
> URL: https://issues.apache.org/jira/browse/HDDS-1619
> Project: Hadoop Distributed Data 

[jira] [Commented] (HDFS-14478) Add libhdfs APIs for openFile

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898467#comment-16898467
 ] 

Hadoop QA commented on HDFS-14478:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
59s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/6/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/955 |
| JIRA Issue | HDFS-14478 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit |
| uname | Linux a7839b69c8c9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 17e8cf5 |
| Default Java | 1.8.0_212 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/6/testReport/ |
| Max. process+thread count | 411 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/6/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Add libhdfs APIs for openFile
> -
>
> Key: HDFS-14478
> URL: https://issues.apache.org/jira/browse/HDFS-14478
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> HADOOP-15229 added a "FileSystem builder-based openFile() API" that allows 
> specifying configuration values for opening files (similar to HADOOP-14365).
> Support for {{openFile}} will be a little tricky as it is asynchronous 
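For context, a minimal sketch of the Java-side builder API that a libhdfs wrapper would sit on top of; the option key used below is illustrative, not taken from the patch:

{code:java|title=Sketch: builder-based openFile() usage}
import java.util.concurrent.CompletableFuture;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenFileSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // build() returns a future rather than a stream; bridging this to the
    // synchronous C API is what makes the libhdfs work "a little tricky".
    CompletableFuture<FSDataInputStream> future =
        fs.openFile(new Path("/tmp/file"))
            .opt("example.read.option", "value") // illustrative option key
            .build();
    try (FSDataInputStream in = future.get()) { // block until the open completes
      System.out.println(in.read());
    }
  }
}
{code}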

[jira] [Work logged] (HDDS-1786) Datanodes takeSnapshot should delete previously created snapshots

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1786?focusedWorklogId=287421=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287421
 ]

ASF GitHub Bot logged work on HDDS-1786:


Author: ASF GitHub Bot
Created on: 02/Aug/19 01:20
Start Date: 02/Aug/19 01:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1163: HDDS-1786 : 
Datanodes takeSnapshot should delete previously created s…
URL: https://github.com/apache/hadoop/pull/1163#issuecomment-517511785
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 103 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 654 | trunk passed |
   | +1 | compile | 373 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 970 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | trunk passed |
   | 0 | spotbugs | 487 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 739 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for patch |
   | +1 | mvninstall | 569 | the patch passed |
   | +1 | compile | 390 | the patch passed |
   | +1 | javac | 390 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 737 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | the patch passed |
   | +1 | findbugs | 724 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 369 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2083 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8484 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1163 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8fba00b210ad 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b94eba9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/4/testReport/ |
   | Max. process+thread count | 4989 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287421)
Time Spent: 2h 40m  (was: 2.5h)

> Datanodes takeSnapshot should delete previously created snapshots
> -
>
> Key: HDDS-1786
> URL: 

[jira] [Commented] (HDFS-14455) Fix typo in HAState.java

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898464#comment-16898464
 ] 

Hadoop QA commented on HDFS-14455:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} https://github.com/apache/hadoop/pull/764 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/764 |
| JIRA Issue | HDFS-14455 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-764/4/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Fix typo in HAState.java
> 
>
> Key: HDFS-14455
> URL: https://issues.apache.org/jira/browse/HDFS-14455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: hunshenshi
>Priority: Major
>
> There are some typos in HAState:
> destructuve -> destructive
> Aleady -> Already
> Transtion -> Transition



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?focusedWorklogId=287419=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287419
 ]

ASF GitHub Bot logged work on HDDS-1200:


Author: ASF GitHub Bot
Created on: 02/Aug/19 01:17
Start Date: 02/Aug/19 01:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1154: [HDDS-1200] Add 
support for checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#issuecomment-517511298
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 596 | trunk passed |
   | +1 | compile | 380 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 947 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 492 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 726 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | +1 | mvninstall | 587 | the patch passed |
   | +1 | compile | 394 | the patch passed |
   | +1 | javac | 394 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 732 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | +1 | findbugs | 673 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 349 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1679 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 7863 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1154/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1154 |
   | JIRA Issue | HDDS-1200 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 75460ae7d374 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b94eba9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1154/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1154/2/testReport/ |
   | Max. process+thread count | 4860 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1154/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287419)
Time Spent: 3h 20m  (was: 3h 10m)

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Supratim Deka
>Assignee: Hrishikesh Gadre
>

[jira] [Commented] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898463#comment-16898463
 ] 

Hadoop QA commented on HDDS-1200:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  8m 
12s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 12m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
49s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 27m 59s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1154/2/artifact/out/Dockerfile
 |
| GITHUB PR | 

[jira] [Commented] (HDFS-13677) Dynamic refresh Disk configuration results in overwriting VolumeMap

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898460#comment-16898460
 ] 

Hadoop QA commented on HDFS-13677:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 12s{color} 
| {color:red} https://github.com/apache/hadoop/pull/780 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/780 |
| JIRA Issue | HDFS-13677 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-780/6/console |
| versions | git=2.7.4 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Dynamic refresh Disk configuration results in overwriting VolumeMap
> ---
>
> Key: HDFS-13677
> URL: https://issues.apache.org/jira/browse/HDFS-13677
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Blocker
> Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-13677-001.patch, HDFS-13677-002-2.9-branch.patch, 
> HDFS-13677-002.patch, image-2018-06-14-13-05-54-354.png, 
> image-2018-06-14-13-10-24-032.png
>
>
> When I added a new disk by dynamically refreshing the configuration, a 
> "FileNotFound while finding block" exception was raised.
>  
> The steps are as follows:
> 1. Change the hdfs-site.xml of the DataNode to add a new disk.
> 2. Refresh the configuration with "./bin/hdfs dfsadmin -reconfig datanode 
> :50020 start"
>  
> The error is like:
> ```
> VolumeScannerThread(/media/disk5/hdfs/dn): FileNotFound while finding block 
> BP-233501496-*.*.*.*-1514185698256:blk_1620868560_547245090 on volume 
> /media/disk5/hdfs/dn
> org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not 
> found for BP-1997955181-*.*.*.*-1514186468560:blk_1090885868_17145082
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:471)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:240)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:553)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:148)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:254)
>  at java.lang.Thread.run(Thread.java:748)
> ```
> I added some logs for confirmation, as follows:
> Logging code:
> !image-2018-06-14-13-05-54-354.png!
> And the result:
> !image-2018-06-14-13-10-24-032.png!
> The size of 'volumeMap' was reduced: we found that 'volumeMap' was 
> overwritten with only the new disk's blocks by the method 
> 'ReplicaMap.addAll(ReplicaMap other)'.
>  
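To illustrate the suspected failure mode, here is a simplified sketch (a hypothetical stand-in, not the actual ReplicaMap source): if addAll() replaces the per-block-pool map wholesale instead of merging, replicas tracked on the existing volumes disappear.

{code:java|title=Sketch: overwrite vs. merge in an addAll-style method}
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the DataNode's volumeMap bookkeeping.
class ReplicaMapSketch {
  // blockPoolId -> (blockId -> replica info)
  final Map<String, Map<Long, String>> map = new HashMap<>();

  // Buggy pattern: putAll() replaces the whole inner map for a block pool,
  // dropping replicas already tracked on existing volumes.
  void addAllOverwriting(ReplicaMapSketch other) {
    map.putAll(other.map);
  }

  // Merging keeps the existing replicas and only adds the new disk's entries.
  void addAllMerging(ReplicaMapSketch other) {
    other.map.forEach((bp, replicas) ->
        map.computeIfAbsent(bp, k -> new HashMap<>()).putAll(replicas));
  }
}
{code}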



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1863) Freon RandomKeyGenerator even if keySize is set to 0, it returns some random data to key

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1863?focusedWorklogId=287415=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287415
 ]

ASF GitHub Bot logged work on HDDS-1863:


Author: ASF GitHub Bot
Created on: 02/Aug/19 01:07
Start Date: 02/Aug/19 01:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1167: HDDS-1863. Freon 
RandomKeyGenerator even if keySize is set to 0, it returns some random data to 
key.
URL: https://github.com/apache/hadoop/pull/1167#issuecomment-517509664
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 84 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 604 | trunk passed |
   | +1 | compile | 404 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 945 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | trunk passed |
   | 0 | spotbugs | 474 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 681 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 585 | the patch passed |
   | +1 | compile | 394 | the patch passed |
   | +1 | javac | 394 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 753 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 186 | the patch passed |
   | +1 | findbugs | 738 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 359 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2121 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 8371 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1167/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1167 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c1f6d5f0c2ef 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 32607db |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1167/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1167/6/testReport/ |
   | Max. process+thread count | 4797 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1167/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287415)
Time Spent: 3h 10m  (was: 3h)

> Freon RandomKeyGenerator even if keySize is set to 0, it returns some random 
> data to key
> 
>
> Key: HDDS-1863
> URL: https://issues.apache.org/jira/browse/HDDS-1863
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>

[jira] [Work logged] (HDDS-1870) ConcurrentModification at PrometheusMetricsSink

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1870?focusedWorklogId=287410=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287410
 ]

ASF GitHub Bot logged work on HDDS-1870:


Author: ASF GitHub Bot
Created on: 02/Aug/19 01:00
Start Date: 02/Aug/19 01:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1179: HDDS-1870. 
ConcurrentModification at PrometheusMetricsSink
URL: https://github.com/apache/hadoop/pull/1179#issuecomment-517508371
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 634 | trunk passed |
   | +1 | compile | 406 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 964 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | trunk passed |
   | 0 | spotbugs | 444 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 655 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 613 | the patch passed |
   | +1 | compile | 398 | the patch passed |
   | +1 | javac | 398 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 809 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 97 | hadoop-ozone in the patch failed. |
   | +1 | findbugs | 747 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 364 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2481 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8853 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1179/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1179 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 875180c00d89 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f86de6f |
   | Default Java | 1.8.0_222 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1179/2/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1179/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1179/2/testReport/ |
   | Max. process+thread count | 4813 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/framework U: hadoop-hdds/framework |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1179/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, 

[jira] [Created] (HDDS-1893) Fix bug in removeAcl in Bucket

2019-08-01 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1893:


 Summary: Fix bug in removeAcl in Bucket
 Key: HDDS-1893
 URL: https://issues.apache.org/jira/browse/HDDS-1893
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


{code:java}
// When we are removing subset of rights from existing acl.
for (OzoneAcl a : bucketInfo.getAcls()) {
  if (a.getName().equals(acl.getName()) &&
      a.getType().equals(acl.getType())) {
    BitSet bits = (BitSet) acl.getAclBitSet().clone();
    bits.and(a.getAclBitSet());

    if (bits.equals(ZERO_BITSET)) {
      return false;
    }
    bits = (BitSet) acl.getAclBitSet().clone();
    bits.and(a.getAclBitSet());
    a.getAclBitSet().xor(bits);

    if (a.getAclBitSet().equals(ZERO_BITSET)) {
      bucketInfo.getAcls().remove(a);
    }
    break;
  } else {
    return false;
  }
}
{code}
In the for loop, if the first entry does not match the name and type, the else 
branch returns false immediately. We should iterate over the entire ACL list 
before returning a response.
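A minimal sketch of the suggested fix, reusing the identifiers from the snippet above (a method-body fragment; assumes the enclosing method returns boolean):

{code:java|title=Sketch: return only after scanning the whole ACL list}
boolean matched = false;
for (OzoneAcl a : bucketInfo.getAcls()) {
  if (a.getName().equals(acl.getName()) &&
      a.getType().equals(acl.getType())) {
    BitSet bits = (BitSet) acl.getAclBitSet().clone();
    bits.and(a.getAclBitSet());
    if (bits.equals(ZERO_BITSET)) {
      return false; // the matched entry holds none of the requested rights
    }
    a.getAclBitSet().xor(bits); // clear only the matched rights
    if (a.getAclBitSet().equals(ZERO_BITSET)) {
      bucketInfo.getAcls().remove(a);
    }
    matched = true;
    break;
  }
  // No early "return false" here: keep scanning the remaining entries.
}
return matched;
{code}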



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1865) Use "ozone.network.topology.aware.read" to control both RPC client and server side logic

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1865?focusedWorklogId=287405=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287405
 ]

ASF GitHub Bot logged work on HDDS-1865:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:47
Start Date: 02/Aug/19 00:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1184: HDDS-1865. Use 
"ozone.network.topology.aware.read" to control both RPC client and server side 
logic
URL: https://github.com/apache/hadoop/pull/1184#issuecomment-517506319
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 615 | trunk passed |
   | +1 | compile | 370 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 787 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 144 | trunk passed |
   | 0 | spotbugs | 428 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 612 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | +1 | mvninstall | 536 | the patch passed |
   | +1 | compile | 359 | the patch passed |
   | +1 | cc | 359 | the patch passed |
   | +1 | javac | 359 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 999 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | +1 | findbugs | 665 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 304 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2265 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 8289 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1184/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1184 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 279dc4f219e4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f86de6f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1184/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1184/3/testReport/ |
   | Max. process+thread count | 4698 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/client hadoop-ozone/common hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1184/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287405)
Time Spent: 1h  (was: 50m)

> Use 

[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=287402=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287402
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:46
Start Date: 02/Aug/19 00:46
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1202: 
HDDS-1884. Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#discussion_r309947374
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/util/ObjectParser.java
 ##
 @@ -0,0 +1,73 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.util;
+
+import com.google.common.base.Preconditions;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OzoneObj.ObjectType;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
+
+/**
+ * Utility class to parse {@link OzoneObj#getPath()}.
+ */
+public class ObjectParser {
+
+  private String volume;
+  private String bucket;
+  private String key;
+
+  /**
+   * Parse the path and extract volume, bucket and key names.
+   * @param path
+   */
+  public ObjectParser(String path, ObjectType objectType) throws OMException {
+    Preconditions.checkNotNull(path);
+    String[] tokens = StringUtils.split(path, OZONE_URI_DELIMITER, 3);
+
+    if (objectType == ObjectType.VOLUME && tokens.length == 1) {
 
 Review comment:
   Discussed offline, fine with this new class.
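For reviewers following along, a hedged usage sketch of the new utility; the accessor names are assumptions based on the fields in the diff, not verified against the final patch:

{code:java|title=Sketch: parsing an OzoneObj path with ObjectParser}
// Assumed path format: volume/bucket/key, split on OZONE_URI_DELIMITER.
ObjectParser parser =
    new ObjectParser("vol1/bucket1/key1", ObjectType.KEY);
String volume = parser.getVolume(); // assumed accessor
String bucket = parser.getBucket(); // assumed accessor
String key = parser.getKey();       // assumed accessor
{code}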
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287402)
Time Spent: 2.5h  (was: 2h 20m)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> HDDS-1540 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=287404=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287404
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:46
Start Date: 02/Aug/19 00:46
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1202: 
HDDS-1884. Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#discussion_r309947428
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAddAclRequest.java
 ##
 @@ -0,0 +1,151 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.bucket.acl;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.ozone.om.request.util.ObjectParser;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.bucket.acl.OMBucketAclResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.AddAclRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.AddAclResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.BUCKET_NOT_FOUND;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handle add Acl request for bucket.
+ */
+public class OMBucketAddAclRequest extends OMClientRequest {
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287404)
Time Spent: 2h 50m  (was: 2h 40m)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> HDDS-1540 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=287403=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287403
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:46
Start Date: 02/Aug/19 00:46
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1202: 
HDDS-1884. Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#discussion_r309947392
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
 ##
 @@ -759,6 +766,43 @@ public void testReadRequest() throws Exception {
 }
   }
 
+  @Test
+  public void testAddBucketAcl() throws Exception {
+    OzoneBucket ozoneBucket = setupBucket();
+    String remoteUserName = "remoteUser";
+    OzoneAcl defaultUserAcl = new OzoneAcl(USER, remoteUserName,
+        READ, DEFAULT);
+
+    OzoneObj ozoneObj = OzoneObjInfo.Builder.newBuilder()
+        .setResType(OzoneObj.ResourceType.BUCKET)
+        .setStoreType(OzoneObj.StoreType.OZONE)
+        .setVolumeName(ozoneBucket.getVolumeName())
+        .setBucketName(ozoneBucket.getName()).build();
+
+    boolean addAcl = objectStore.addAcl(ozoneObj, defaultUserAcl);
+    Assert.assertTrue(addAcl);
+
+    ozoneBucket.addAcls(Collections.singletonList(defaultUserAcl));
+    List<OzoneAcl> acls = ozoneBucket.getAcls();
+
+    Assert.assertTrue(containsAcl(defaultUserAcl, acls));
+
+    addAcl = objectStore.addAcl(ozoneObj, defaultUserAcl);
+    Assert.assertFalse(addAcl);
+  }
+
+  private boolean containsAcl(OzoneAcl ozoneAcl, List<OzoneAcl> ozoneAcls) {
+    for (OzoneAcl acl : ozoneAcls) {
+      if (acl.getType().equals(ozoneAcl.getType())
+          && acl.getName().equals(ozoneAcl.getName())
+          && acl.getAclBitSet().equals(ozoneAcl.getAclBitSet())
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287403)
Time Spent: 2h 40m  (was: 2.5h)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> HDDS-1540 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=287401=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287401
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:45
Start Date: 02/Aug/19 00:45
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-517506082
 
 
   Now it is ready for review.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287401)
Time Spent: 2h 20m  (was: 2h 10m)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> HDDS-1540 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?focusedWorklogId=287396=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287396
 ]

ASF GitHub Bot logged work on HDDS-1829:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:32
Start Date: 02/Aug/19 00:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1187: HDDS-1829 On OM 
reload/restart OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1187#issuecomment-517503988
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 587 | trunk passed |
   | +1 | compile | 391 | trunk passed |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 850 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | trunk passed |
   | 0 | spotbugs | 443 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 659 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 578 | the patch passed |
   | +1 | compile | 382 | the patch passed |
   | +1 | javac | 382 | the patch passed |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | the patch passed |
   | +1 | findbugs | 699 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 301 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2339 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 59 | The patch does not generate ASF License warnings. |
   | | | 8228 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1187/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1187 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7326e3798d07 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f86de6f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1187/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1187/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1187/1/testReport/ |
   | Max. process+thread count | 4504 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1187/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287396)
Time Spent: 4h 20m  (was: 4h 10m)

[jira] [Work logged] (HDDS-1871) Remove anti-affinity rules from k8s minkube example

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1871?focusedWorklogId=287394=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287394
 ]

ASF GitHub Bot logged work on HDDS-1871:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:25
Start Date: 02/Aug/19 00:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1180: HDDS-1871. 
Remove anti-affinity rules from k8s minkube example
URL: https://github.com/apache/hadoop/pull/1180#issuecomment-517502952
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 618 | trunk passed |
   | +1 | compile | 369 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 726 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 584 | the patch passed |
   | +1 | compile | 417 | the patch passed |
   | +1 | javac | 417 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 673 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 310 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2595 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 6880 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1180/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1180 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient shellcheck shelldocs yamllint |
   | uname | Linux 780b0ebf10d8 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f86de6f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1180/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1180/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1180/2/testReport/ |
   | Max. process+thread count | 4377 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1180/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287394)
Time Spent: 0.5h  (was: 20m)

> Remove anti-affinity rules from k8s minkube example

[jira] [Work logged] (HDDS-1725) pv-test example to test csi is not working

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1725?focusedWorklogId=287391=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287391
 ]

ASF GitHub Bot logged work on HDDS-1725:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:23
Start Date: 02/Aug/19 00:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1070: HDDS-1725. 
pv-test example to test csi is not working
URL: https://github.com/apache/hadoop/pull/1070#issuecomment-517502605
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 9 | https://github.com/apache/hadoop/pull/1070 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1070 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1070/4/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287391)
Time Spent: 1h 10m  (was: 1h)

> pv-test example to test csi is not working
> --
>
> Key: HDDS-1725
> URL: https://issues.apache.org/jira/browse/HDDS-1725
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ratish Maruthiyodan
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [~rmaruthiyodan] reported two problems regarding to the pv-test example in 
> csi examples folder.
> pv-test folder contains an example nginx deployment which can use an ozone 
> PVC/PV to publish content of a folder via http.
> Two problems are identified:
>  * The label based matching filter of service doesn't point to the nginx 
> deployment
>  * The configmap mounting is missing from nginx deployment



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=287385=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287385
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:21
Start Date: 02/Aug/19 00:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket addACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-517502233
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 134 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 762 | trunk passed |
   | +1 | compile | 451 | trunk passed |
   | +1 | checkstyle | 91 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1120 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 199 | trunk passed |
   | 0 | spotbugs | 501 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 738 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 688 | the patch passed |
   | +1 | compile | 443 | the patch passed |
   | +1 | cc | 443 | the patch passed |
   | +1 | javac | 443 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 755 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | +1 | findbugs | 670 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 338 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2186 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 9075 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 4d53313f4013 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e111789 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/5/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/5/testReport/ |
   | Max. process+thread count | 5378 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287385)
Time Spent: 2h 10m  (was: 2h)

[jira] [Work logged] (HDDS-1832) Improve logging for PipelineActions handling in SCM and datanode

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1832?focusedWorklogId=287386=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287386
 ]

ASF GitHub Bot logged work on HDDS-1832:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:21
Start Date: 02/Aug/19 00:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1200: HDDS-1832 : 
Improve logging for PipelineActions handling in SCM and datanode.
URL: https://github.com/apache/hadoop/pull/1200#issuecomment-517502240
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 628 | trunk passed |
   | +1 | compile | 370 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 968 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | trunk passed |
   | 0 | spotbugs | 480 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 692 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 572 | the patch passed |
   | +1 | compile | 379 | the patch passed |
   | +1 | javac | 379 | the patch passed |
   | +1 | checkstyle | 79 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 751 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 182 | the patch passed |
   | +1 | findbugs | 692 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 345 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2127 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 8337 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1200/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1200 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 14a028e78af8 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f86de6f |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1200/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1200/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1200/2/testReport/ |
   | Max. process+thread count | 5317 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1200/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=287387=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287387
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:21
Start Date: 02/Aug/19 00:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1204: HDDS-1768. Audit 
xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#issuecomment-517502262
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 621 | trunk passed |
   | +1 | compile | 380 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 869 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 456 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 669 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 601 | the patch passed |
   | +1 | compile | 454 | the patch passed |
   | +1 | javac | 454 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 687 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 196 | the patch passed |
   | +1 | findbugs | 828 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 329 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2874 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 59 | The patch does not generate ASF License warnings. |
   | | | 9127 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1204 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux dc27ac67b899 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e111789 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/1/testReport/ |
   | Max. process+thread count | 3553 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287387)
Time Spent: 50m  (was: 40m)

> Audit xxxAcl methods in OzoneManager
> 
>

[jira] [Updated] (HDFS-14686) HttpFS: HttpFSFileSystem#getErasureCodingPolicy always returns null

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14686:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

I merged the PR. Resolving this JIRA.

> HttpFS: HttpFSFileSystem#getErasureCodingPolicy always returns null
> ---
>
> Key: HDFS-14686
> URL: https://issues.apache.org/jira/browse/HDFS-14686
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
>
> The root cause is that *FSOperations#contentSummaryToJSON* doesn't parse 
> *ContentSummary.erasureCodingPolicy* into the json.
> The expected behavior is that *HttpFSFileSystem#getErasureCodingPolicy* 
> should at least return "" (empty string, for directories or symlinks), or 
> "Replicated" (for non-EC files), "RS-6-3-1024k", etc.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14683) WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY response

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14683:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

I merged the PR. Resolving this JIRA.

> WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY response
> 
>
> Key: HDFS-14683
> URL: https://issues.apache.org/jira/browse/HDFS-14683
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
>
> Quote [~jojochuang]'s 
> [comment|https://issues.apache.org/jira/browse/HDFS-14034?focusedCommentId=16880062=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16880062]:
> {quote}
> ContentSummary has a field erasureCodingPolicy which was added in HDFS-11647, 
> but webhdfs GETCONTENTSUMMARY doesn't include that.
> {quote}
> Examples:
> {code:json|title=Directory, Before}
> GET /webhdfs/v1/tmp/?op=GETCONTENTSUMMARY HTTP/1.1
> {
>   "ContentSummary": {
> "directoryCount": 15,
> "fileCount": 1,
> "length": 180838,
> "quota": -1,
> "spaceConsumed": 542514,
> "spaceQuota": -1,
> "typeQuota": {}
>   }
> }
> {code}
> {code:json|title=Directory, After, With EC policy RS-6-3-1024k set}
> GET /webhdfs/v1/tmp/?op=GETCONTENTSUMMARY HTTP/1.1
> {
>   "ContentSummary": {
> "directoryCount": 15,
> "ecPolicy": "RS-6-3-1024k",
> "fileCount": 1,
> "length": 180838,
> "quota": -1,
> "spaceConsumed": 542514,
> "spaceQuota": -1,
> "typeQuota": {}
>   }
> }
> {code}
> {code:json|title=Directory, After, No EC policy set}
> GET /webhdfs/v1/tmp/?op=GETCONTENTSUMMARY HTTP/1.1
> {
>   "ContentSummary": {
> "directoryCount": 15,
> "ecPolicy": "",
> "fileCount": 1,
> "length": 180838,
> "quota": -1,
> "spaceConsumed": 542514,
> "spaceQuota": -1,
> "typeQuota": {}
>   }
> }
> {code}
> {code:json|title=File, After, No EC policy set}
> GET /webhdfs/v1/tmp/file?op=GETCONTENTSUMMARY HTTP/1.1
> {
>   "ContentSummary": {
> "directoryCount": 0,
> "ecPolicy": "Replicated",
> "fileCount": 1,
> "length": 29,
> "quota": -1,
> "spaceConsumed": 29,
> "spaceQuota": -1,
> "typeQuota": {}
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14557) JournalNode error: Can't scan a pre-transactional edit log

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898444#comment-16898444
 ] 

Wei-Chiu Chuang commented on HDFS-14557:


Fix & test look really good!

Would the error "Header value is -1 indicating it was never written" be too 
cryptic? I didn't understand it the first time I read it. Can we make it 
something that an ordinary administrator can understand? For example, could we 
make the error message more descriptive, like "the edit log file xxx will be 
sidelined to file name xxx.empty"?

Another question: if the JN has indeed run out of disk, sidelining the edit log 
file is not going to help much, right? Unless an administrator steps in and 
cleans up the space, the JN will not be able to return to a good state.
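
For illustration, a hedged sketch of what a more descriptive message could 
look like (the wording and variable names are made up, not taken from the 
patch):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: a clearer sideline message; not actual Journal code.
class SidelineMessageSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(SidelineMessageSketch.class);

  void warnSideline(String editLogFile) {
    LOG.warn("Edit log file {} has header -1, meaning it was never written"
        + " (a full disk is one possible cause); sidelining it to {}.empty"
        + " so the JournalNode can start.", editLogFile, editLogFile);
  }
}
{code}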

> JournalNode error: Can't scan a pre-transactional edit log
> --
>
> Key: HDFS-14557
> URL: https://issues.apache.org/jira/browse/HDFS-14557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14557.001.patch
>
>
> We saw the following error in JournalNodes a few times before.
> {noformat}
> 2016-09-22 12:44:24,505 WARN org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Caught exception after scanning through 0 ops from /data/1/dfs/current/ed
> its_inprogress_0661942 while determining its valid length. 
> Position was 761856
> java.io.IOException: Can't scan a pre-transactional edit log.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LegacyReader.scanOp(FSEditLogOp.java:4592)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanNextOp(EditLogFileInputStream.java:245)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanEditLog(EditLogFileInputStream.java:355)
> at 
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager$EditLogFile.scanLog(FileJournalManager.java:551)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.scanStorageForLatestEdits(Journal.java:193)
> at org.apache.hadoop.hdfs.qjournal.server.Journal.(Journal.java:153)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:90)
> {noformat}
> The edit file was corrupt, and one possible culprit of this error is a full 
> disk. The JournalNode can't recovered and must be resync manually from other 
> JournalNodes. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1878) checkstyle error in ContainerStateMachine

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1878?focusedWorklogId=287379=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287379
 ]

ASF GitHub Bot logged work on HDDS-1878:


Author: ASF GitHub Bot
Created on: 02/Aug/19 00:05
Start Date: 02/Aug/19 00:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1195: HDDS-1878. 
checkstyle error in ContainerStateMachine
URL: https://github.com/apache/hadoop/pull/1195#issuecomment-517499489
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 154 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for branch |
   | +1 | mvninstall | 812 | trunk passed |
   | +1 | compile | 442 | trunk passed |
   | +1 | checkstyle | 95 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1138 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 233 | trunk passed |
   | 0 | spotbugs | 547 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 824 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 586 | the patch passed |
   | +1 | compile | 402 | the patch passed |
   | +1 | javac | 402 | the patch passed |
   | +1 | checkstyle | 34 | hadoop-hdds: The patch generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) |
   | +1 | checkstyle | 38 | The patch passed checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 760 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 73 | hadoop-hdds generated 2 new + 16 unchanged - 0 fixed = 
18 total (was 16) |
   | +1 | findbugs | 687 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 355 | hadoop-hdds in the patch passed. |
   | -1 | unit | 260 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 7237 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1195/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1195 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d780ed664168 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f86de6f |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1195/3/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1195/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1195/3/testReport/ |
   | Max. process+thread count | 423 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1195/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287379)
Time Spent: 1h 20m  (was: 1h 10m)

> checkstyle error in ContainerStateMachine
> -
>
> Key: HDDS-1878
> URL: https://issues.apache.org/jira/browse/HDDS-1878
> Project: Hadoop Distributed Data Store
>

[jira] [Updated] (HDDS-1892) HA failover attempt log level should be set to DEBUG

2019-08-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-1892:
-
Description: 
Current HA failover log level is INFO:
{code:bash}
$ ozone sh volume create /volume
2019-08-01 20:55:03 INFO RetryInvocationHandler:411 - 
com.google.protobuf.ServiceException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.NotLeaderException):
 OM om2 is not the leader. Could not determine the leader node.
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:183)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:172)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:94)
at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.base/java.security.AccessController.doPrivileged(Native Method) 
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
, while invoking $Proxy19.submitRequest over nodeId=om2,nodeAddress=om2:9862 
after 1 failover attempts. Trying to failover immediately.
2019-08-01 20:55:03 INFO RpcClient:288 - Creating Volume: volume, with hadoop 
as owner.
VOLUME_ALREADY_EXISTS Volume already exists

$ ozone sh bucket create /volume/bucket
2019-08-01 20:55:23 INFO RetryInvocationHandler:411 - 
com.google.protobuf.ServiceException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.NotLeaderException):
 OM om2 is not the leader. Could not determine the leader node.
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:183)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:172)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:94)
at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.base/java.security.AccessController.doPrivileged(Native Method) 
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
, while invoking $Proxy19.submitRequest over nodeId=om2,nodeAddress=om2:9862 
after 1 failover attempts. Trying to failover immediately.
2019-08-01 20:55:24 INFO RpcClient:425 - Creating Bucket: volume/bucket, with 
Versioning false and Storage Type set to DISK and Encryption set to false
{code}

The client will print out every failover attempt.

Suppress it by setting the log level to DEBUG.
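
As a stopgap until the level is changed, an operator could raise this one 
logger above INFO on the client. A minimal sketch, assuming log4j 1.x on the 
classpath and that the logger name is 
org.apache.hadoop.io.retry.RetryInvocationHandler (inferred from the log 
prefix above):
{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Hedged sketch: silence the per-attempt failover chatter by raising the
// retry handler's logger to WARN before running client operations.
public class QuietFailoverLogsSketch {
  public static void main(String[] args) {
    Logger.getLogger("org.apache.hadoop.io.retry.RetryInvocationHandler")
        .setLevel(Level.WARN);
    // ... proceed with the ozone sh / RpcClient operations as usual
  }
}
{code}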

  was:
Current HA failover log level is INFO:
{code:bash}
$ ozone sh volume create /volume
2019-08-01 20:55:03 INFO RetrylnvocationHandler:411 — 
com.google.protobuf.ServiceException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.NotLeader
 Exception): OM om2 is not the leader. Could not determine the leader node.
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotleaderException(OzoneManagerProtocolServerSideTranslatorPB.java:183)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:172)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:94)
at 

[jira] [Updated] (HDFS-14687) Standby Namenode never come out of safemode when EC files are being written.

2019-08-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-14687:
--
Summary: Standby Namenode never come out of safemode when EC files are 
being written.  (was: Standby Namenode never come out of samemode when EC files 
are being written.)

> Standby Namenode never come out of safemode when EC files are being written.
> 
>
> Key: HDFS-14687
> URL: https://issues.apache.org/jira/browse/HDFS-14687
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: HDFS-14687.001.patch
>
>
> When huge number of EC files are being written and SBN is restarted then it 
> will never come out of same mode and required blocks count getting increase.
> {noformat}
> The reported blocks 16658401 needs additional 1702 blocks to reach the 
> threshold 0.9 of total blocks 16660120.
> The reported blocks 16658659 needs additional 2935 blocks to reach the 
> threshold 0.9 of total blocks 16661611.
> The reported blocks 16659947 needs additional 3868 blocks to reach the 
> threshold 0.9 of total blocks 16663832.
> The reported blocks 1335 needs additional 5116 blocks to reach the 
> threshold 0.9 of total blocks 16671468.
> The reported blocks 16669311 needs additional 6384 blocks to reach the 
> threshold 0.9 of total blocks 16675712.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14687) Standby Namenode never come out of safemode when EC files are being written.

2019-08-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-14687:
--
Description: 
When huge number of EC files are being written and SBN is restarted then it 
will never come out of safe mode and required blocks count getting increase.

{noformat}

The reported blocks 16658401 needs additional 1702 blocks to reach the 
threshold 0.9 of total blocks 16660120.

The reported blocks 16658659 needs additional 2935 blocks to reach the 
threshold 0.9 of total blocks 16661611.

The reported blocks 16659947 needs additional 3868 blocks to reach the 
threshold 0.9 of total blocks 16663832.

The reported blocks 1335 needs additional 5116 blocks to reach the 
threshold 0.9 of total blocks 16671468.

The reported blocks 16669311 needs additional 6384 blocks to reach the 
threshold 0.9 of total blocks 16675712.

{noformat}
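
A hedged sketch of the "needs additional N blocks" relationship printed above; 
the method name and the toy numbers are illustrative, not BlockManagerSafeMode 
internals:
{code:java}
// Illustrative only: safe mode exits when the deficit reaches zero, but
// while EC files are written the total block count on the standby keeps
// growing faster than the reported count, so the deficit never closes.
public class SafeModeDeficitSketch {
  static long neededBlocks(long totalBlocks, long reportedBlocks,
      double threshold) {
    long required = (long) Math.ceil(totalBlocks * threshold);
    return Math.max(0, required - reportedBlocks);
  }

  public static void main(String[] args) {
    System.out.println(neededBlocks(1000, 850, 0.9)); // prints 50
  }
}
{code}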

  was:
When huge number of EC files are being written and SBN is restarted then it 
will never come out of same mode and required blocks count getting increase.

{noformat}

The reported blocks 16658401 needs additional 1702 blocks to reach the 
threshold 0.9 of total blocks 16660120.

The reported blocks 16658659 needs additional 2935 blocks to reach the 
threshold 0.9 of total blocks 16661611.

The reported blocks 16659947 needs additional 3868 blocks to reach the 
threshold 0.9 of total blocks 16663832.

The reported blocks 1335 needs additional 5116 blocks to reach the 
threshold 0.9 of total blocks 16671468.

The reported blocks 16669311 needs additional 6384 blocks to reach the 
threshold 0.9 of total blocks 16675712.

{noformat}


> Standby Namenode never come out of safemode when EC files are being written.
> 
>
> Key: HDFS-14687
> URL: https://issues.apache.org/jira/browse/HDFS-14687
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: HDFS-14687.001.patch
>
>
> When huge number of EC files are being written and SBN is restarted then it 
> will never come out of safe mode and required blocks count getting increase.
> {noformat}
> The reported blocks 16658401 needs additional 1702 blocks to reach the 
> threshold 0.9 of total blocks 16660120.
> The reported blocks 16658659 needs additional 2935 blocks to reach the 
> threshold 0.9 of total blocks 16661611.
> The reported blocks 16659947 needs additional 3868 blocks to reach the 
> threshold 0.9 of total blocks 16663832.
> The reported blocks 1335 needs additional 5116 blocks to reach the 
> threshold 0.9 of total blocks 16671468.
> The reported blocks 16669311 needs additional 6384 blocks to reach the 
> threshold 0.9 of total blocks 16675712.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14674) Got an unexpected txid when tail editlog

2019-08-01 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898439#comment-16898439
 ] 

Konstantin Shvachko commented on HDFS-14674:


Hey [~wangzhaohui], thanks for reporting and fixing this bug. The patch looks 
good.
One thing, it would be good to add a unit test for this bug, so that we would 
never hit this problem again.

> Got an unexpected txid when tail editlog
> 
>
> Key: HDFS-14674
> URL: https://issues.apache.org/jira/browse/HDFS-14674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Blocker
> Attachments: HDFS-14674-001.patch, image-2019-07-26-11-34-23-405.png
>
>
> Add the following configuration
> !image-2019-07-26-11-34-23-405.png!
> error:
> {code:java}
> //
> [2019-07-17T11:50:21.048+08:00] [INFO] [Edit log tailer] : replaying edit 
> log: 1/20512836 transactions completed. (0%) [2019-07-17T11:50:21.059+08:00] 
> [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  of size 3126782311 edits # 500 loaded in 3 seconds 
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@51ceb7bc 
> expecting start txid #232056752162 [2019-07-17T11:50:21.059+08:00] [INFO] 
> [Edit log tailer] : Start loading edits file 
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  maxTxnipsToRead = 500 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log 
> tailer] : Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit 
> log tailer] ip: Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.061+08:00] [ERROR] [Edit 
> log tailer] : Unknown error encountered while tailing edits. Shutting down 
> standby NN. java.io.IOException: There appears to be a gap in the edit log. 
> We expected txid 232056752162, but got txid 232077264498. at 
> org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:239)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:895) at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
>  [2019-07-17T11:50:21.064+08:00] [INFO] [Edit log tailer] : Exiting with 
> status 1 [2019-07-17T11:50:21.066+08:00] [INFO] [Thread-1] : SHUTDOWN_MSG: 
> / SHUTDOWN_MSG: 
> Shutting down NameNode at ip 
> /
> {code}
>  
> If the dfs.ha.tail-edits.max-txns-per-lock value is 500, the NameNode stops 
> after loading 500 transactions and moves on to the next edit log file, even 
> though the current edit log contains more than 500 transactions. So the 
> NameNode gets an unexpected txid when tailing the edit log; see the numeric 
> sketch after this quote.
>  
>  
> {code:java}
> //
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> 
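
A hedged numeric sketch of the failing invariant, plugging in the txids from 
the log above (illustrative only, not FSEditLogLoader source):
{code:java}
import java.io.IOException;

// Hedged sketch: with max-txns-per-lock = 500, the tailer expects the next
// stream to start exactly 500 txids after the fast-forward point, but the
// current file held more than 500 transactions, so a "gap" is reported.
public class TxidGapSketch {
  public static void main(String[] args) throws IOException {
    long fastForwardTo = 232056751662L;   // from the log above
    long maxTxnsPerLock = 500;            // dfs.ha.tail-edits.max-txns-per-lock
    long expected = fastForwardTo + maxTxnsPerLock;   // 232056752162
    long nextStreamStart = 232077264498L; // first txid of the next stream
    if (nextStreamStart != expected) {
      throw new IOException("There appears to be a gap in the edit log."
          + " We expected txid " + expected + ", but got txid "
          + nextStreamStart);
    }
  }
}
{code}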

[jira] [Commented] (HDFS-14462) WebHDFS throws "Error writing request body to server" instead of DSQuotaExceededException

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898437#comment-16898437
 ] 

Hadoop QA commented on HDFS-14462:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
105 unchanged - 0 fixed = 108 total (was 105) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14462 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976471/HDFS-14462.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 75876ade772a 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Work logged] (HDDS-1788) Add kerberos support to Ozone Recon

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1788?focusedWorklogId=287374=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287374
 ]

ASF GitHub Bot logged work on HDDS-1788:


Author: ASF GitHub Bot
Created on: 01/Aug/19 23:49
Start Date: 01/Aug/19 23:49
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1201: HDDS-1788. Add 
kerberos support to Ozone Recon
URL: https://github.com/apache/hadoop/pull/1201#issuecomment-517496715
 
 
   The unit test, integration test and checkstyle failures are not related to 
this patch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287374)
Time Spent: 1.5h  (was: 1h 20m)

> Add kerberos support to Ozone Recon
> ---
>
> Key: HDDS-1788
> URL: https://issues.apache.org/jira/browse/HDDS-1788
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Recon fails to start up in a kerberized cluster with the following error:
> {code:java}
> Failed startup of context 
> o.e.j.w.WebAppContext@2009f9b0{/,file:///tmp/jetty-0.0.0.0-9888-recon-_-any-2565178148822292652.dir/webapp/,UNAVAILABLE}{/recon}
>  javax.servlet.ServletException: javax.servlet.ServletException: Principal 
> not defined in configuration at 
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:188)
>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:194)
>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:180)
>  at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139) 
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873) 
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
>  at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1406) 
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1368) 
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
>  at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
>  at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:522) at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>  at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
>  at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
>  at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>  at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
>  at org.eclipse.jetty.server.Server.start(Server.java:427) at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
>  at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>  at org.eclipse.jetty.server.Server.doStart(Server.java:394) at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>  at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1140) at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:175) 
> at org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:102) at 
> org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:50) at 
> picocli.CommandLine.execute(CommandLine.java:1173) at 
> picocli.CommandLine.access$800(CommandLine.java:141) at 
> picocli.CommandLine$RunLast.handle(CommandLine.java:1367) at 
> picocli.CommandLine$RunLast.handle(CommandLine.java:1335) at 
> picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
>  at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526) at 
> picocli.CommandLine.parseWithHandler(CommandLine.java:1465) at 
> org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65) at 
> org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56) at 
> 

[jira] [Created] (HDDS-1892) HA failover log level should be set to DEBUG

2019-08-01 Thread Siyao Meng (JIRA)
Siyao Meng created HDDS-1892:


 Summary: HA failover log level should be set to DEBUG
 Key: HDDS-1892
 URL: https://issues.apache.org/jira/browse/HDDS-1892
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Siyao Meng
Assignee: Siyao Meng


Current HA failover log level is INFO:
{code:bash}

$ ozone sh volume create /volume
2019-08-01 20:55:03 INFO RetryInvocationHandler:411 — 
com.google.protobuf.ServiceException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.NotLeaderException): OM om2 is not the leader. Could not determine the leader node.
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:183)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:172)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:94)
at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.base/java.security.AccessController.doPrivileged(Native Method) 
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
, while invoking $Proxy19.submitRequest over nodeId=om2,nodeAddress=om2:9862 
after 1 failover attempts. Trying to failover immediately.
2019-08-01 20:55:03 INFO RpcClient:288 — Creating Volume: volume, with hadoop 
as owner.
VOLUME_ALREADY_EXISTS Volume already exists

$ ozone sh bucket create /volume/bucket
2019-08-01 20:55:23 INFO RetryInvocationHandler:411 — 
com.google.protobuf.ServiceException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.NotLeaderException): OM om2 is not the leader. Could not determine the leader node.
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:183)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:172)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:94)
at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.base/java.security.AccessController.doPrivileged(Native Method) 
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
, while invoking $Proxy19.submitRequest over nodeId=om2,nodeAddress=om2:9862 
after 1 failover attempts. Trying to failover immediately.
2019-08-01 20:55:24 INFO RpcClient:425 — Creating Bucket: volume/bucket, with 
Versioning false and Storage Type set to DISK and Encryption set to false
{code}

The client will print this stack trace for every failover attempt.

Suppress it by logging the failover attempts at DEBUG instead of INFO (a sketch follows).
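
A minimal sketch of the intended effect, assuming Hadoop's stock log4j 1.x logging and that the messages come from org.apache.hadoop.io.retry.RetryInvocationHandler (as the "RetryInvocationHandler:411" prefix suggests). An operator can already hide the per-attempt noise by raising that logger above INFO; once this change lands, the messages would only show up with DEBUG enabled:
{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class QuietFailoverLogging {
  public static void main(String[] args) {
    // The per-attempt failover messages above are emitted at INFO by the
    // retry handler, so raising its logger to WARN suppresses them without
    // silencing the rest of the client's logging.
    Logger.getLogger("org.apache.hadoop.io.retry.RetryInvocationHandler")
        .setLevel(Level.WARN);
  }
}
{code}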



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1892) HA failover attempt log level should be set to DEBUG

2019-08-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-1892:
-
Description: 
Current HA failover log level is INFO:
{code:bash}
$ ozone sh volume create /volume
2019-08-01 20:55:03 INFO RetryInvocationHandler:411 — 
com.google.protobuf.ServiceException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.NotLeaderException): OM om2 is not the leader. Could not determine the leader node.
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:183)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:172)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:94)
at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.base/java.security.AccessController.doPrivileged(Native Method) 
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
, while invoking $Proxy19.submitRequest over nodeId=om2,nodeAddress=om2:9862 
after 1 failover attempts. Trying to failover immediately.
2019-08-01 20:55:03 INFO RpcClient:288 — Creating Volume: volume, with hadoop 
as owner.
VOLUME_ALREADY_EXISTS Volume already exists

$ ozone sh bucket create /volume/bucket
2019-08-01 20:55:23 INFO RetryInvocationHandler:411 — 
com.google.protobuf.ServiceException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.NotLeaderException): OM om2 is not the leader. Could not determine the leader node.
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:183)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:172)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:94)
at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.base/java.security.AccessController.doPrivileged(Native Method) 
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
, while invoking $Proxy19.submitRequest over nodeId=om2,nodeAddress=om2:9862 
after 1 failover attempts. Trying to failover immediately.
2019-08-01 20:55:24 INFO RpcClient:425 — Creating Bucket: volume/bucket, with 
Versioning false and Storage Type set to DISK and Encryption set to false
{code}

The client will print out every failover attempt.

Suppress it by setting the log level to DEBUG.

  was:
Current HA failover log level is INFO:
{code:bash}

$ ozone sh volume create /volume
2019-08-01 20:55:03 INFO RetryInvocationHandler:411 — 
com.google.protobuf.ServiceException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.NotLeaderException): OM om2 is not the leader. Could not determine the leader node.
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:183)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:172)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:94)
at 

[jira] [Updated] (HDDS-1892) HA failover attempt log level should be set to DEBUG

2019-08-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-1892:
-
Summary: HA failover attempt log level should be set to DEBUG  (was: HA 
failover log level should be set to DEBUG)

> HA failover attempt log level should be set to DEBUG
> 
>
> Key: HDDS-1892
> URL: https://issues.apache.org/jira/browse/HDDS-1892
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Current HA failover log level is INFO:
> {code:bash}
> $ ozone sh volume create /volume
> 2019-08-01 20:55:03 INFO RetryInvocationHandler:411 — 
> com.google.protobuf.ServiceException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.NotLeaderException): OM om2 is not the leader. Could not determine the leader node.
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:183)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:172)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:94)
> at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.base/java.security.AccessController.doPrivileged(Native 
> Method) at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> , while invoking $Proxy19.submitRequest over nodeId=om2,nodeAddress=om2:9862 
> after 1 failover attempts. Trying to failover immediately.
> 2019-08-01 20:55:03 INFO RpcClient:288 — Creating Volume: volume, with hadoop 
> as owner.
> VOLUME_ALREADY_EXISTS Volume already exists
> $ ozone sh bucket create /volume/bucket
> 2019-08-01 20:55:23 INFO RetryInvocationHandler:411 — 
> com.google.protobuf.ServiceException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.NotLeaderException): OM om2 is not the leader. Could not determine the leader node.
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:183)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:172)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:94)
> at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.base/java.security.AccessController.doPrivileged(Native 
> Method) at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> , while invoking $Proxy19.submitRequest over nodeId=om2,nodeAddress=om2:9862 
> after 1 failover attempts. Trying to failover immediately.
> 2019-08-01 20:55:24 INFO RpcClient:425 — Creating Bucket: volume/bucket, with 
> Versioning false and Storage Type set to DISK and Encryption set to false
> {code}
> The client will print out every failover attempt.
> Suppress it by setting the log level to DEBUG.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1788) Add kerberos support to Ozone Recon

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1788?focusedWorklogId=287371=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287371
 ]

ASF GitHub Bot logged work on HDDS-1788:


Author: ASF GitHub Bot
Created on: 01/Aug/19 23:46
Start Date: 01/Aug/19 23:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1201: HDDS-1788. Add 
kerberos support to Ozone Recon
URL: https://github.com/apache/hadoop/pull/1201#issuecomment-517496280
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 80 | Maven dependency ordering for branch |
   | +1 | mvninstall | 613 | trunk passed |
   | +1 | compile | 399 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 849 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 467 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 676 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for patch |
   | +1 | mvninstall | 567 | the patch passed |
   | +1 | compile | 394 | the patch passed |
   | +1 | javac | 394 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | hadolint | 4 | There were no new hadolint issues. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | the patch passed |
   | +1 | findbugs | 687 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 340 | hadoop-hdds in the patch passed. |
   | -1 | unit | 212 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6456 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1201/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1201 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml hadolint shellcheck shelldocs yamllint findbugs 
checkstyle |
   | uname | Linux 2b59c8fcad27 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f86de6f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1201/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1201/2/testReport/ |
   | Max. process+thread count | 984 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/dist hadoop-ozone/ozone-recon 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1201/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287371)
Time Spent: 1h 20m  (was: 1h 10m)

> Add kerberos support to Ozone Recon
> ---
>
> Key: HDDS-1788
> URL: https://issues.apache.org/jira/browse/HDDS-1788
> 

[jira] [Updated] (HDFS-12914) Block report leases cause missing blocks until next report

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12914:
---
Attachment: HDFS-12914.branch-2.001.patch

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch, HDFS-12914.005.patch, 
> HDFS-12914.006.patch, HDFS-12914.007.patch, HDFS-12914.008.patch, 
> HDFS-12914.009.patch, HDFS-12914.branch-2.000.patch, 
> HDFS-12914.branch-2.001.patch, HDFS-12914.branch-2.patch, 
> HDFS-12914.branch-3.0.patch, HDFS-12914.branch-3.1.001.patch, 
> HDFS-12914.branch-3.1.002.patch, HDFS-12914.branch-3.2.patch, 
> HDFS-12914.utfix.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc. Lease rejection does not throw an exception; 
> it returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and 
> is interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected due to an invalid lease becomes 
> active with _no blocks_. A replication storm ensues, possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration. The cluster will have many "missing blocks" until the DNs' 
> next FBRs are sent and/or forced (see the sketch below).
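
A minimal, self-contained sketch of the boolean ambiguity described above; all names are hypothetical stand-ins rather than the actual Hadoop code paths in {{BlockReportLeaseManager}} and {{NameNodeRpcServer}}:
{code:java}
public class LeaseRejectionSketch {
  // Stand-in for BlockReportLeaseManager#checkLease: a rejected lease does
  // not throw, it simply yields false.
  static boolean checkLease(boolean leaseValid) {
    return leaseValid;
  }

  // Stand-in for NameNodeRpcServer#blockReport: the same boolean is later
  // read as "no stale storages", so a rejected full block report looks
  // identical to a successfully processed one.
  static boolean blockReport(boolean leaseValid) {
    boolean noStaleStorages = checkLease(leaseValid);
    return noStaleStorages;
  }

  public static void main(String[] args) {
    // A DN re-registering with an invalid lease silently "reports" nothing:
    System.out.println("noStaleStorages=" + blockReport(false));
  }
}
{code}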



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1886) Use ArrayList#clear to address audit failure scenario

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1886?focusedWorklogId=287354=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-287354
 ]

ASF GitHub Bot logged work on HDDS-1886:


Author: ASF GitHub Bot
Created on: 01/Aug/19 23:19
Start Date: 01/Aug/19 23:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1205: HDDS-1886. Use 
ArrayList#clear to address audit failure scenario
URL: https://github.com/apache/hadoop/pull/1205#issuecomment-517491076
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 582 | trunk passed |
   | +1 | compile | 355 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 808 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | trunk passed |
   | 0 | spotbugs | 432 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 616 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 567 | the patch passed |
   | +1 | compile | 370 | the patch passed |
   | +1 | javac | 370 | the patch passed |
   | +1 | checkstyle | 67 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 632 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | the patch passed |
   | +1 | findbugs | 654 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 281 | hadoop-hdds in the patch passed. |
   | -1 | unit | 153 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 5686 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1205/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1205 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d2102f8a0289 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e111789 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1205/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1205/1/testReport/ |
   | Max. process+thread count | 506 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1205/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 287354)
Time Spent: 1h 10m  (was: 1h)

> Use ArrayList#clear to address audit failure scenario
> -
>
> Key: HDDS-1886
> URL: https://issues.apache.org/jira/browse/HDDS-1886
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> TestOzoneAuditLogger makes use of ArrayList#remove to clear the captured log 
> lines in between test runs.
> When writing tests for more failure scenarios in the future, the tests will 
> fail if a log entry has a multi-line stack trace in the audit logs.
> This jira aims to use ArrayList#clear to make the test future-proof (see the 
> sketch below).
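
A minimal sketch of the difference; the buffer below is a hypothetical stand-in for the lines the test reads back from the audit log:
{code:java}
import java.util.ArrayList;
import java.util.List;

public class ClearVsRemove {
  public static void main(String[] args) {
    List<String> lines = new ArrayList<>();
    // One audit event that produced a multi-line stack trace:
    lines.add("ERROR | OMAudit | op=CREATE_VOLUME | FAILURE");
    lines.add("java.io.IOException: simulated failure");
    lines.add("\tat org.example.Sketch.fail(Sketch.java:42)");

    // ArrayList#remove(0) drops a single line, leaving the trace's
    // continuation lines behind to break the next test's assertions:
    lines.remove(0);
    System.out.println(lines.size()); // 2, not 0

    // ArrayList#clear empties the buffer no matter how many lines the
    // last event wrote:
    lines.clear();
    System.out.println(lines.size()); // 0
  }
}
{code}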




[jira] [Updated] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14652:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~zhangchen] for the context.

+1 from me. Committed patch to trunk.

> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch
>
>
> On our production HDFS cluster, some clients' burst requests filled the TCP 
> kernel queue on the NameNode's host. Since the configuration value of 
> "net.ipv4.tcp_syn_retries" in our environment is 1, after 3 seconds the 
> ZooKeeper HealthMonitor got a connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster. We 
> fixed this issue by enlarging the kernel parameter net.ipv4.tcp_syn_retries to 6.
> While working on this issue, we found that the connection retry 
> count (ipc.client.connect.max.retries) of the health monitor is hard-coded as 1. 
> It should be configurable: if we don't want the health monitor to be so 
> sensitive, we can change its behavior through this configuration (a sketch 
> follows below).
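
A hedged sketch of the requested change; the new key name and its default below are illustrative, not necessarily what the committed patch uses:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class HealthMonitorRetrySketch {
  // Hypothetical key replacing the hard-coded retry count of 1:
  static final String RETRIES_KEY = "ha.health-monitor.connect.max.retries";
  static final int RETRIES_DEFAULT = 1;

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Operators who find the health monitor too sensitive could raise this:
    int retries = conf.getInt(RETRIES_KEY, RETRIES_DEFAULT);
    conf.setInt("ipc.client.connect.max.retries", retries);
    System.out.println("health monitor connect retries = " + retries);
  }
}
{code}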



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


